Escape hatch examples

Last time I wrote about the idea of abstraction escape hatches. Here are some examples of good ones:

Haskell strictness annotations

By default Haskell abstracts away the evaluation order of values and evaluates lazily, i.e., it only computes a value when something else demands it. This is great in many situations because it lets you write elegant code without worrying about performing extraneous computations. On the other hand, lazy evaluation can cause memory-usage problems, because it makes you store a bunch of “thunks” (un-computed intermediate values) rather than just the final result. (For instance, sum [1, 1, 1, ..., 1] would have to construct the intermediate terms 1+1, (1+1)+1, ... and hold them in memory until the final sum was requested.)
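As a rough sketch (mine, not from the post), here is the kind of definition that exhibits this behaviour: a lazy left fold builds one thunk layer per list element and only collapses the whole tower when the result is finally demanded.

```haskell
-- A lazy left fold over a list of ones. Nothing is actually added while the
-- list is traversed; GHC builds the nested thunk (((0 + 1) + 1) + 1) ...,
-- one layer per element, and only collapses it when the result is demanded.
lazySum :: [Int] -> Int
lazySum = foldl (+) 0

-- On a long enough list, forcing the result can exhaust memory or the stack
-- (at least when compiled without optimization, which may strictify this):
--   ghci> lazySum (replicate 10000000 1)
```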

To compensate, Haskell has some quite concise ways of “forcing” (i.e. prematurely evaluating) values, based on the $! infix operator, the “BangPatterns” extension (which lets you write just !x to force-evaluate x), and the seq function. This works great: you can force as little or as much of your program into strict mode as you want, and it’s very easy to use (modulo sticking {-# LANGUAGE BangPatterns #-} at the top of your file).
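A minimal sketch (my own, not from the post) of how the three mechanisms look in practice, each keeping the accumulator of a left fold strict:

```haskell
{-# LANGUAGE BangPatterns #-}

-- 1. seq: evaluate acc' to weak head normal form before recursing.
sumSeq :: [Int] -> Int
sumSeq = go 0
  where
    go acc []     = acc
    go acc (x:xs) = let acc' = acc + x in acc' `seq` go acc' xs

-- 2. ($!): strict application forces the new accumulator before the call.
sumStrictApp :: [Int] -> Int
sumStrictApp = go 0
  where
    go acc []     = acc
    go acc (x:xs) = (`go` xs) $! (acc + x)

-- 3. BangPatterns: the !acc pattern forces the accumulator on every match.
sumBang :: [Int] -> Int
sumBang = go 0
  where
    go !acc []     = acc
    go !acc (x:xs) = go (acc + x) xs

main :: IO ()
main = mapM_ (\f -> print (f [1 .. 1000000])) [sumSeq, sumStrictApp, sumBang]
```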

The only obvious potential improvement would be to make the compiler’s strictness analysis more transparent to the user. GHC can often determine through strictness analysis that a thunk will eventually be evaluated, and instead evaluate it eagerly so the thunk is never allocated. But it’s hard to tell when the compiler will force a thunk automatically and when it must be told to.
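For what it’s worth, GHC does expose some of this if you go looking: compiling with its dump flags shows what the demand analyser concluded, though reading the output takes practice. A tiny illustration (mine, not from the post; the output format varies by GHC version):

```haskell
-- Sum.hs: compile with
--   ghc -O2 -ddump-stranal Sum.hs
-- (or -ddump-simpl to see the optimized Core) and look at the demand
-- signatures GHC attaches to each binding.
module Sum where

sumTo :: Int -> Int -> Int
sumTo acc 0 = acc
sumTo acc n = sumTo (acc + n) (n - 1)
-- With -O2, GHC typically infers that both arguments are demanded strictly
-- and unboxes them, so the accumulator never builds up thunks.
```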

Julia JIT compiler

One language that solves this transparency problem well is Julia, through its JIT compiler. The compiler does all kinds of fancy optimizations during compilation, but some of them can be kind of fragile (e.g. dependent on optional type annotations creating a particular pattern that it knows how to optimize). Fortunately, Julia exposes a bunch of different tools for viewing the code generated for a method, which my friend Leah described here. You can view everything from the abstract syntax tree of a function to the optimized machine code for its specialization to a certain set of argument types. This is absolutely fantastic for dropping out of the dynamically-typed abstraction for the few parts of your program that require more optimization.
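Most of these tools are macros in the standard library; a rough sketch of the usual ladder, on a toy function of my own rather than anything from Leah’s post:

```julia
using InteractiveUtils  # loaded automatically in the REPL

f(x, y) = x * y + one(x)

@code_lowered  f(1.0, 2.0)   # lowered AST, before type inference
@code_typed    f(1.0, 2.0)   # typed IR for the Float64 specialization
@code_warntype f(1.0, 2.0)   # same, flagging any abstract/unstable types
@code_llvm     f(1.0, 2.0)   # the LLVM IR the JIT produces
@code_native   f(1.0, 2.0)   # the optimized machine code
```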
