On Language Height and Optimization

chromatic on 2009-04-04T03:19:15

As I see it, there are two poles in the language criteria for making a given program run fast:

  • Write in a language which gives you as much control as possible over the hardware, which implies fewer opportunities for high-level abstractions (tell me with a straight face that idiomatic OCaml code regularly worries about memory usage to the byte and CPU register usage, for example).
  • Write in a language with a sufficiently smart runtime optimizer that can remove the penalty for abstractions you don't use (and even some you do).

History suggests that the second option is almost always the right choice over time.


distributed optimization

Eric Wilhelm on 2009-04-04T07:26:22

The second option means that optimization is happening outside of your program. So, you don't have to maintain the optimization. But it also removes a lot of your control over the optimization and takes longer to arrive.

Can we call that "postmature optimization"?

Re:distributed optimization

chromatic on 2009-04-04T17:29:15

The second option means that optimization is happening outside of your program.

So does buying faster hardware.

Can we call that "postmature optimization"?

Heh. Only when discussing mature programs!

False Dichotomy

ziggy on 2009-04-04T23:05:43

Dichotomies can be categorized into exactly two types: true dichotomies and false dichotomies. :-)

This, my friend, is a false dichotomy.

Modern Haskell believes strongly in the second option. It's all about making and reusing the best abstractions and hoping the compiler can figure it out. Most compilers don't; GHC often does, but not always.

The new fad in the Haskell community is stream fusion: writing your code in a pipelined manner that allows a stack of maps and filters to be condensed into a single traversal over an aggregate data type. Switching out linked lists of characters in favor of C-like "bytestrings" is another popular theme.
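A minimal sketch of the pipelined style the comment describes (function names are my own, not from the thread): a chain of `filter`, `map`, and `sum` over a list, which GHC's fusion rewrite rules can often condense into a single loop with no intermediate lists.

```haskell
-- A pipeline of a filter, a map, and a fold. Written naively this
-- builds two intermediate lists; with GHC's fusion rules (and -O),
-- the whole chain can compile down to a single traversal.
sumOfSquaredEvens :: [Int] -> Int
sumOfSquaredEvens = sum . map (^ 2) . filter even

main :: IO ()
main = print (sumOfSquaredEvens [1 .. 10])  -- prints 220
```

The point of the style is that you write the composable, abstract version and let the optimizer recover the tight loop a C programmer would have written by hand.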

What allows a compiler like GHC to get C-like performance out of code like this is a liberal sprinkling of compiler hints and high-level modules that do care about byte-level operations.
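For a flavor of those hints, here is a hedged sketch (my example, not the commenter's): an INLINE pragma invites GHC to inline a function at call sites so rewrite rules can fire across module boundaries, and a bang pattern keeps an accumulator strict so it can live in a register instead of a chain of thunks.

```haskell
{-# LANGUAGE BangPatterns #-}

-- Ask GHC to inline this at call sites so optimizations apply there.
{-# INLINE countMatching #-}
countMatching :: (a -> Bool) -> [a] -> Int
countMatching p = go 0
  where
    -- The bang pattern forces acc to be evaluated on every step,
    -- so GHC can compile this to a tight counting loop.
    go !acc []                 = acc
    go !acc (x:xs) | p x       = go (acc + 1) xs
                   | otherwise = go acc xs

main :: IO ()
main = print (countMatching even [1 .. 100])  -- prints 50
```

None of this changes what the program computes; the pragma and the strictness annotation exist purely to help the "sufficiently smart" (but not omniscient) compiler.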

All compilers (save one[*]) are dumb programs that need a lot of help to figure out how to make performant code. The question is where, when, how, and whether the language exposes the ability to do bit-level and byte-level tweaks.

*: Any compiler that supports COME FROM cannot be deemed a "dumb" program. :-)

Re:False Dichotomy

chromatic on 2009-04-05T20:40:58

If the "sufficiently smart compiler" is True Amber and C is "The Courts of Chaos", GHC approaches Avalon. Now I wonder if Zelazny wrote more about programming than the Kabbalah.

(For everyone not versed in expert-but-pop SF, I agree that GHC trends toward my second pole, but it certainly hasn't arrived.)