Code Density - Efficient but Painful?
When most people try to read math equations, a paper, some Haskell code, or even abstraction-heavy Python, they tend to gnash their teeth a bit and then whine about it being too hard to read. And some of that is fair - there is a great deal of overly complicated stuff out there.
But, in general, I've seen negative reactions to code that is simply dense, even if it's factored well.
I've long been fascinated by how perceived time differs from actual time. In particular, humans are susceptible to believing that certain things are faster than others just because they feel faster. Consider the classic example of the MacOS mouse-based UI versus keyboard shortcuts.
We've done a cool $50 million of R & D on the Apple Human Interface. We discovered, among other things, two pertinent facts:
- Test subjects consistently report that keyboarding is faster than mousing.
- The stopwatch consistently proves mousing is faster than keyboarding.
This contradiction between user-experience and reality apparently forms the basis for many user/developers' belief that the keyboard is faster.
Or consider automation versus just doing the grunt work by hand: writing the automation often feels slower than grinding through the task manually, even when the stopwatch says otherwise.
Taking this analogy back to programming with abstractions, consider two ways to add one to every element in a list. One uses the map abstraction and the other manually iterates with a for loop.
new_list = map(lambda v: v + 1, my_list)  # in Python 3, map is lazy; wrap in list(...) if you need an actual list
vs.
new_list = []
for v in my_list:
    new_list.append(v + 1)
map, once you understand it, immediately conveys more information than the for loop. When you see map being used, you know a few things:
- the output list has the same length as the input list
- the input list is not modified
- each output element only depends on the corresponding input element
To build this same level of understanding given a for loop, you have to read the loop body. In trivial examples like this one, it's easy, but most loop bodies aren't so simple.
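To make that concrete, here is a hypothetical loop (reusing my_list from above) that is shaped almost exactly like the previous one, yet quietly breaks those guarantees. Nothing about its outline warns you; only reading the body does:
my_list = [3, -1, 4]
new_list = []
for i, v in enumerate(my_list):
    if v >= 0:                  # skips some elements, so the output can be shorter than the input
        new_list.append(v + 1)
    my_list[i] = 0              # mutates the input list as a side effect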
map is common enough in programming languages that most people have no problem with it. But there are abstractions everywhere: monads, semigroups, actors, currying, coroutines, threads, sockets, closures... Each abstraction conveys some useful meaning to the programmer, often by applying some rules or restrictions.
Consider asynchronous programming with callback soup versus coroutines, tasks, or lightweight threads. When you are programming with callbacks it's very easy to forget to call a callback or call it twice. This can result in some extremely hard-to-diagnose bugs. Coroutines / tasks provide a useful guarantee: asynchronous operations will return exactly once. This abstraction comes with a cost, however: the code is more terse and indirect, and depends on your knowledge of the abstraction, just like map above.
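As a rough sketch of the difference in Python (the fetch functions below are made up for illustration, not taken from any particular library):
import asyncio

# Callback style: nothing in the language guarantees that on_done runs
# exactly once; forgetting it, or calling it twice on an error path,
# runs without complaint.
def fetch_with_callback(url, on_done):
    result = "response for " + url      # stand-in for real asynchronous I/O
    on_done(result)

# Coroutine style: the await expression resumes exactly once, with
# either a result or an exception.
async def fetch(url):
    await asyncio.sleep(0.1)            # stand-in for real asynchronous I/O
    return "response for " + url

async def main():
    print(await fetch("https://example.com"))

asyncio.run(main())
The coroutine version is denser and leans on your knowledge of async/await, but the exactly-once guarantee is enforced by the abstraction rather than by programmer discipline.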
So right. Abstractions exist. They must be learned, but they provide some nice guarantees.
Applied in the extreme, abstractions result in extremely dense, terse code. In languages with weak support for cheap abstraction-building, like Go, the same logic has to be spelled out manually. One might look at a page of dense Haskell and exclaim "Agh, that's unreadable." But, ignoring language familiarity, consider the amount of knowledge gained per unit of mental effort and time. The algorithm has a certain amount of inherent complexity. You can either read a whole lot of "simple" lines of code or a few "hard" lines of code, but you'll build the same amount of understanding in the end.
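To make the trade-off concrete in Python terms (a hypothetical word-count example, chosen just to show the contrast), here is the same algorithm written densely and spelled out:
from collections import Counter

lines = ["the quick brown fox", "the lazy dog"]

# Dense version: one expression, every token carries meaning.
freqs = Counter(word for line in lines for word in line.split())

# Spelled-out version: many lines, each one individually trivial.
freqs = {}
for line in lines:
    for word in line.split():
        if word in freqs:
            freqs[word] += 1
        else:
            freqs[word] = 1
Both build the same table of counts; the dense line demands more of the reader per token, but there are far fewer tokens to read.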
My conjecture: People don't like reading dense code, so it feels less productive than reading a lot of fluffy code, even if it's actually faster. This is the same psychological effect as the MacOS mouse versus keyboard example above. I'm not aware of any comprehension-speed studies across, say, Haskell and Java, but I wouldn't be surprised if people feel slower when reading Haskell but are actually faster.
Perhaps, to maximize productivity, you want to optimize for unpleasantness - the day will be "longer" so you can get more done.
Why might density be unpleasant? I'm not at all sure why papers full of equations are so intimidating compared to prose, but I have a guess. When most people read, their eyes and attention subconsciously jump around a bit. When reading less dense material, this is fine -- perhaps even beneficial. But with very dense material, each symbol or word conveys a lot of meaning, forcing the reader's eyes and attention to move much more slowly and deliberately so the brain can stay caught up. This forced slowness hurts and feels wrong, even if it actually results in quicker comprehension. Again, totally a guess. I have no idea.
Maybe it would be worth bumping up the font size when reading math or dense Haskell code?
If you have any resources or citations on this topic, send them my way!
An Aside
My blog posts are frequently misinterpreted as if I'm making broader statements than I am, so here are some things I'm NOT saying:
I'm definitely not saying code that's terse for the sake of it is always a win. For example, consider the maybe function: some people love it, but is it really any clearer than just pattern matching where necessary?
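For readers who haven't met it, maybe collapses the two cases of an optional value into a single expression. A rough Python analogy (the maybe helper below is hypothetical, written only for this comparison) shows why I think it's an open question:
# Hypothetical Python analogue of Haskell's maybe: apply f when the value
# is present, otherwise fall back to a default.
def maybe(default, f, value):
    return default if value is None else f(value)

name = None

# With the helper:
label = maybe("unknown", str.upper, name)

# With the cases written out:
label = "unknown" if name is None else name.upper()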
Also, I am not saying Haskell is always faster to read than anything else. I'm talking about code density in general, including abstraction-heavy or functional-style Python, or a super-tight C algorithm versus fluffy OOP object soup.
I think this situation is analogous to English versus Chinese. Consider the following sentence in English:
Then, in Chinese:
Although the Chinese version looks shorter, I think it actually took me more time to read (note that my mother tongue is Chinese), because I have to read it character by character to understand it. The English version, although longer, doesn't require me to scan every character; I read it word by word instead, because each word has its own unique shape.
In fact, the Chinese version consists of 40 tokens (characters) while the English version consists of 34 tokens (words). From that point of view, this might be why some people feel faster when reading verbose code. In a nutshell, I would say Haskell-like languages are like Chinese and Java-like languages are like English. P.S.: I tried your suggestion of increasing the font size, and it works: when the Chinese text is twice as large as the English text, I feel like I can read it even faster than the English version.