Dropbox Hack Week: GraphQL Server in Haskell

Last week was hack week at Dropbox. I took the opportunity to explore the implementation of a GraphQL server that does optimal IO batching and concurrency.

Serial IO

A common performance problem in the implementation of web services is serial IO. Let’s say you have a service that returns the names of all of your friends. It’s easy and natural to implement it like this:

me = fetch_user_info(my_user_id)
friend_names = []
for friend_id in me.friends:
  friend_names.append(fetch_user_info(friend_id).name)
return friend_names

The problem is that each user info lookup for my friends occurs sequentially. Even with TCP connection pooling, that’s still a packet round-trip in the datacenter. “But that should be faster than 10 ms, right?” Even if it is, it doesn’t take many friends to blow your performance budget and send your response time across the threshold of perceptible delay.

Moreover, this problem compounds on itself. If your inner functions have serial IO and you call them in a loop, you’ve just added potentially thousands of round-trips to your backend data sources. I would hazard a guess and say serial IO is the largest contributor to service response latencies.

Manually Batched IO

One solution is to always batch IO. Every function takes a list and returns a list. This can be made to work (indeed, I’ve achieved excellent performance by careful, manual batching), but it doesn’t compose as your services scale. Sometimes it’s just too hard to know all of the data you will need to fetch, and the dependencies between that data.

OK, so serial IO is bad. More on that in a bit. Let’s talk about REST now.

REST

At IMVU, we went all-in on REST, building a rather substantial framework to make it easy to create REST services. We anticipated the fact that REST services tend to require many round-trips from clients, so we built a response denormalization system, where the service can anticipate the other services a client will need to hit and include their responses too.

This sounded great at first, and in fact was mostly an improvement from the prior state of affairs. But, at scale, REST has some challenging performance problems. For one, REST services have a defined schema, and they always return the data they advertise. As I mentioned, if a service doesn’t return all the data a client needs, the client needs to hit more services, increasing the number of client-to-server round-trips. In addition, because the service always returns the same set of data, it must query the backend database to fetch more data than the client even cares about.

This phenomenon tends to happen most frequently with core domain objects like users. Because users are so important, they accumulate relationships with so many pieces of data (e.g. list of subscribed experiments, contact lists, shopping carts, etc.), almost all of which is irrelevant to most clients.

Why GraphQL?

This is where GraphQL comes in. In GraphQL, the client specifies the data that it wants. There is only one request, even if the data is deeply nested, and the server only has to fetch exactly what’s needed.

Consider the query:

query HeroNameQuery {
    newhope_hero: hero(episode: NEWHOPE) {
        name
    }
    empire_hero: hero(episode: EMPIRE) {
        name
    }
    jedi_hero: hero(episode: JEDI) {
        name
    }
}

It looks up the hero of each of the first three Star Wars movies, fetches any information it needs from the backend, and returns only what is requested:

"data": {
    "HeroNameQuery": {
        "jedi_hero": {
            "name": "R2-D2"
        },
        "empire_hero": {
            "name": "Luke Skywalker"
        },
        "newhope_hero": {
            "name": "R2-D2"
        }
    }
}

There are GraphQL implementations for many languages but many of them don’t solve the serial IO problem I described to start this post. In fact, a naive GraphQL implementation might issue IO per field of each object requested.

For hack week, I wanted to explore the design of a GraphQL server that issued all of its backend IO in optimally concurrent batches.

Why Haskell?

Dropbox doesn’t use Haskell, but I find it to be a great language for exploring design spaces, particularly around execution models. Also, Facebook open sourced their excellent Haxl library, which converts code written with serial IO into efficient batched requests. Haxl provides an effect type that, when it can see that two data fetches are independent, runs them both in parallel. Only when all Haxl operations are blocked on backend data fetches does it issue the batched requests to the backends. My prototype GraphQL resolvers are surprisingly naive, specified with sequential code. Haxl automatically batches up the requests and hands them to the DataSource for execution.
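
To give a flavor of what that looks like, here is a rough sketch of a resolver written against Haxl, loosely modeled on the Star Wars example in this post. The request type and helper names are hypothetical, and the DataSource instance that actually performs the batched Redis fetches (plus the Eq/Hashable/ShowP instances that dataFetch requires) is omitted, so this won’t compile as-is; the signatures follow the Haxl 0.x-era API.

{-# LANGUAGE GADTs #-}
import Haxl.Core (GenHaxl, dataFetch)

data Episode = NewHope | Empire | Jedi

-- Hypothetical request type; the DataSource instance that batches these
-- against the backend is elided.
data StarWarsReq a where
  FetchEpisode   :: Episode -> StarWarsReq Int   -- returns the hero's character id
  FetchCharacter :: Int -> StarWarsReq String    -- returns the character's name

-- Each resolver is written as straightforward, sequential-looking code.
heroName :: Episode -> GenHaxl u String
heroName episode = do
  heroId <- dataFetch (FetchEpisode episode)
  dataFetch (FetchCharacter heroId)

-- Combined applicatively (traverse uses <*>), Haxl runs this in two batched
-- rounds: three FetchEpisode requests, then a deduplicated batch of
-- FetchCharacter requests.
heroNames :: GenHaxl u [String]
heroNames = traverse heroName [NewHope, Empire, Jedi]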

In addition, there is nothing clever about the GraphQL request handler or graph traversal and filtering — all cleverness is handled by Haxl.

At this point, you might be thinking “So was anything challenging about this project?” On the data fetch side, no, not really. :) However, I did run into one unexpected snag when using Haxl for data updates: because Haxl tries really hard — and in a general, composable way — to run your IO in parallel, you must be careful about how writes and reads are sequenced together. If you leave out the () <- on line 105, Haskell sequences the operations with >> instead of >>=, and Haxl’s >> uses the Applicative sequencing operator (*>) instead of the monadic bind, so it assumes it can run the two operations in parallel. And, as you might expect, issuing a write and a read to the same data concurrently doesn’t end well. :)
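
To make the desugaring concrete, here’s a minimal, generic sketch (this is not the prototype’s code; write and readBack stand in for any Haxl write and read actions):

-- In Haxl, (>>) is defined as the Applicative (*>), which Haxl is free to
-- batch and run concurrently; (>>=) makes the second action wait for the
-- first to complete.

racy :: Monad m => m () -> m Int -> m Int
racy write readBack = do
  write        -- desugars to: write >> readBack
  readBack     -- with Haxl, may be issued in the same round as the write

safe :: Monad m => m () -> m Int -> m Int
safe write readBack = do
  () <- write  -- desugars to: write >>= \() -> readBack
  readBack     -- guaranteed to be issued after the write completes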

Conclusions

I am very thankful for jdnavarro’s excellent GraphQL query parser. With it, in four days, I was able to get a prototype GraphQL server up and running. Using Haxl and Hedis, I have it hooked up to a Redis data source, and it correctly batches all independent IO reads and runs them concurrently.

The Star Wars hero names query above results in two batched backend requests:

fetch star wars batch of size 3:
["FetchEpisode(Jedi)","FetchEpisode(Empire)","FetchEpisode(NewHope)"]
fetch star wars batch of size 2:
["FetchCharacter(1000)","FetchCharacter(2001)"]

You can even see that it noticed that R2-D2 is the hero of both movies, and only requested its info once.

The performance is pretty good: on some AWS instances, I measured about 3500 queries per second per machine and a query latency averaging 3 ms. Of course, that could worsen as permissions checks and so on are implemented. On the other hand, the code is completely unoptimized, full of lazy data structures and an unoptimized parser without a parser cache.

The prototype code is open sourced on the Dropbox GitHub.

It’s probably possible to build something like Haxl in Python with generators, but you’d have to give up standard loops and list comprehensions, instead using some kind of parallel map operation. You also would not benefit from anything that teases concurrency out of imperative functions like GHC’s upcoming ApplicativeDo extension. There are some things that Haskell’s restricted effects are uniquely good at. :)

I’d guess it would be even trickier to do a good implementation in Go, given that the programmer has less control over goroutines than over Python’s generators or Haskell’s monads. That said, perhaps someone will discover a clean, optimal implementation. We should pay attention to efforts like this.

I think GraphQL will be a huge deal for high-performance web services. It’s harder to implement than REST or traditional RPC services, but the reduction in response latencies could be substantial. Naive implementations aren’t going to have good performance, but if you are interested in aiming straight at the finish line, Haskell and Haxl will give you a head start.

Designing DFPL – The Name

Without hard evidence, it’s hard to say how much a language’s name matters, at least beyond basic Googleability.

Marketing has an effect on any kind of product, and programming languages are no exception. Some names convey how they differ from other well-known languages, like C++ (an extension of C) or TypeScript (JavaScript but with a type system). Java, Go, Python, and Ruby, on the other hand, are short, memorable words with neutral or positive connotations. Bonus points for names that carry a family of associated words, e.g. Ruby and Gem.

Since our language is more of a synthesis of good ideas than a modification to an existing language, a short positive word seems to be the right strategy.

Crux is the first name we tried, though neither of us is thrilled with it.  We briefly tried the name Sneak, but everyone hated it.  Fig is short, neutral-to-positive, but nobody seemed excited by that name either.

Do you have suggestions?

The name affects the file extension, which ought to be short.  (Or am I the only one perpetually annoyed by the horizontal real estate consumed by .java and .coffee?)

Designing DFPL – The Broad Strokes

Why not Python?

I love Python. It’s easy to learn, predictable, and works well for a wide variety of programs. Python’s the first language where I regularly found myself writing programs and having them work on the first try. The problem with Python is that it breaks down at scale.

Performance

In my experience, whenever a Python codebase reaches a couple hundred thousand lines of code, performance becomes an issue. It’s well-known that Python’s straight-line perf is not even close to C or even Java. In addition, Python programs don’t make efficient use of memory: objects are large and sparse — programs consist of a lot of pointer-chasing. The garbage collector’s not great, and startup and module loading times become an issue at scale, especially for short-lived processes like unit tests, build scripts, or desktop applications.

There are Python JIT compilers, but JITs don’t really help with startup time or memory usage.

Parallelism

Python has a GIL, meaning it’s hard to write programs that make efficient use of the hardware. It’s possible to move CPU-bound work into C, but any kind of coordination between Python and C has to ping-pong across the GIL, which becomes a central bottleneck.

Launching multiple processes can help, but each process pays Python’s base process memory usage overhead, and they can’t easily share in-memory caches.

As we’ve seen from Haskell and Go, the ability to have convenient, lightweight, efficient parallelism is a big deal.

Maintainability

I talked about this in The Long-Term Problem With Dynamically Typed Languages. As codebases get huge, you really want to be able to lean on the computer in order to safely make changes. Type systems are mini proof assistants for expressing guarantees like "I never mix up binary data with textual data" or "I always handle the case where an optional value is empty". Unit tests help, but they are "proof by example". Type systems add a meaningful layer of confidence, allowing software to scale to larger teams and faster changes.
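
As a tiny, hypothetical illustration in Haskell-style notation (the names are made up): distinct ID types can’t be confused, and optional results must be handled before use.

newtype CustomerId   = CustomerId Int
newtype ExperimentId = ExperimentId Int

lookupExperiment :: ExperimentId -> Maybe String
lookupExperiment (ExperimentId 1) = Just "new-checkout-flow"
lookupExperiment _                = Nothing

describeExperiment :: ExperimentId -> String
describeExperiment eid = case lookupExperiment eid of
  Nothing   -> "unknown experiment"
  Just name -> "experiment: " ++ name

-- describeExperiment (CustomerId 7) is rejected at compile time, and
-- forgetting the Nothing case triggers an incomplete-pattern warning.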

Why not Haskell/Go/Rust/Scala/F#/Whatever?

I agree 100% with Patrick Walton here:

There is room for some statically-typed language to occupy the network-services niche where performance needs to be good—maybe 2x slower than optimal (note that this is not acceptable for all network services)—and speed of development trumps most other considerations. Some sort of hybrid of Swift, Rust, and Go, perhaps, as none of them are 100% ideal for this space.

The dynamic language space is relatively saturated at this point. Rust is doing a great job in the native machine performance space. F# and Swift and, to some degree, Scala are excellent choices for the .NET, Apple, and JVM ecosystems, respectively.

But as of yet, there’s nothing great in either the web frontend space or the application server space.

Haskell’s laziness, curried functions, and nonobvious operational semantics are not suitable for frontend code. While it’s an otherwise perfect language for writing application servers, its syntax is tragically divisive, and I have no hope that Haskell as it stands today will become mainstream.

Go’s type system somehow manages to be both verbose and not expressive enough. Go is sufficient for single-purpose servers, but, from first-hand experience, the lack of generics really hurts in application servers. (I probably owe an essay on the difference between the two types of servers.) The Go community tends to rely on idioms, but it would be better if the language were general and expressive enough to allow direct expression of things like timeouts and cancellation. Also, Go is not actually memory-safe. Go is especially frustrating because, if it had, say, generics and tuples instead of the more limited multiple-return-values, it would be a truly excellent programming language.

Rust’s concurrency is not lightweight enough, and the syntax is extremely verbose.

Scala has a significant platform dependency on the JVM. Check out the size of some Scala.js programs. Also Scala’s type system is insanely complex.

F# brings along a large .NET dependency. It also suffers from the ML syntax. (Actually, it has two syntaxes.)

Maybe Swift is great, but it was only just open sourced, and I’d be worried about the Obj-C platform dependency. I’ve yet to write any real Swift programs, but what I’ve seen looks great.

All of these languages are lovely and have their uses, but none are particularly satisfactory replacements for Go or TypeScript/JavaScript. That is, lightweight code that can run at high performance in a browser and on a concurrent, scalable server.

But, hold on, what about js_of_ocaml? This is where things get interesting. Andy used js_of_ocaml for a while and found that, on paper, OCaml has all the right properties: it’s safe, high-performance, compiles to very concise, efficient JavaScript, expressive, concurrent, established… In fact, it’s nearly perfect… if you can stomach the syntax, that is. As I mentioned previously, at this point I’m convinced that ML-family languages haven’t caught on largely for reasons of taste (or lack thereof — just look at the js_of_ocaml website’s font selection).

The Goals

So what are we trying to do with this thing?

The language doesn’t need anything new or innovative. ML gives us bidirectional type inference, sum types with pattern matching, and soundness. Haskell gives us overloading (i.e. type classes, i.e. type-indexed values). Go shows us how important a solid experience is, both the first time and in daily use. Python shows us that it’s possible to connect predictable syntax with solid, composable semantics.

Thus, this is largely a human factors project. Our goal is to see if we can marry all of those things into one language.

Our goals are to have

  • Memory safety.
  • Static typing without heavyweight type annotations everywhere.
  • Language features that are composable.
  • A minimal runtime, with no large platform dependency.
  • Obvious execution semantics. As with imperative languages, you should be able to look at a program and understand its runtime behavior.
  • Tight, clean compilation to JavaScript. Should be a suitable alternative to TypeScript or CoffeeScript.
  • A reasonable replacement for Go or Node on the backend.
  • Cost-free abstractions.

Non-goals

  • C- or Rust-like performance. Rust is already doing a great job there.
  • Support for restricted effects. It’s not clear the usability cost is worth it.

Next time, I’ll either dive into one of the specific design decisions we’ve made, or talk about the language’s overall gestalt or texture.

CRTs, Pixels, and Video Games

A collection of links about how CRTs rendered old video games differently than the harsh pixelation we see today in emulators and retro-style games:

Designing a Delightful Functional Programming Language

ML and Haskell have amazingly powerful type systems. Their type systems, and to some degree syntax, have widely influenced the modern crop of languages. Both can achieve machine performance. Haskell has fantastic concurrency. Both have concise, boilerplate-free syntax — some would say too concise. Both have the ability to build abstractions without impacting performance.

So why aren’t there more Haskell or ML programmers? Like, it’s fantastic that Rust and Swift borrowed some of their ideas, but why did it take two decades? Why do things like CoffeeScript and Go spread like wildfire instead? Go is especially bizarre because it’s not really very different from Java in many ways, and a lot worse in others.

I’m sure there are many reasons. Tooling is important. Marketing is important. Haskell is (or perhaps just was) largely a research language. The first-time user experience is important. I don’t claim to know every reason why some languages spread and others don’t, but I know at least one thing: syntax matters a whole lot more than we’d like to admit.

Consider all of the bracing and spacing and indentation arguments people have about C.

Remember that brilliant people have whined about the fact that Python uses whitespace to indicate nesting, as if that materially affects how much work they can get done.

Hell, CoffeeScript gained significant popularity by adding nothing but syntax to JavaScript. There were many opportunities to add actual features to CoffeeScript, like lightweight coroutine syntax, but CoffeeScript’s motto was “It’s just JavaScript.” Its popularity is based on the strength of the syntax alone.

There are two ways to interpret this phenomenon. One is “stupid humans and their quirks and biases, if only the uneducated masses would see the light and focus on what really matters”. Another is the realization that programming is a deeply human activity. A programming language lets humans take their human thoughts and map them into something a computer can execute. The aesthetics of that activity are important.

If a language is going to get adopted, at least without significant corporate backing, it needs to appeal to peoples’ tastes.

Back to Haskell and ML. They have their proponents, but it’s only now that they’re starting to dance with the mainstream. It’s a shame, because software would be in a much better place if all the material benefits of ML and Haskell (bidirectional type inference, cheap and safe concurrency, sum types) had spread faster.

I’ll share some relevant experiences.

My personal OCaml experience occurred in university. One day I decided to play with this OCaml thing. My rationale is perhaps embarrassing, but I quit for two reasons: the addition operator for integers is + but for floats it’s +. (note the trailing dot). The other was the pervasive use of ;;, at least in the repl. Honestly, it felt tasteless, so I dropped it and moved on, never getting to see the real benefits (polymorphism, type inference, modules, sum types).

Now, Haskell. I was part of the effort to get Haskell adopted at IMVU. The benefits of using Haskell were simply too great to ignore. But when we tried to get everyone to learn it, a nontrivial fraction of engineers developed a visceral aversion. “I understand all the benefits but… I just hate it.” “Why do you hate it?” “The whitespace, $, the two let forms, the weird function names… I dunno, it’s just ugly and I hate it.”

It made me deeply sad that these seemingly superficial complaints prevented what should have been a slam dunk, at least when it came to safety and performance and maintainability and development speed.

When I joined Dropbox, I had some conversations with teammates about the benefits of something like Haskell (most people either don’t know about it or think it’s some academic obscurity), and I got a serious negative reaction to the idea of static types. “They’re heavyweight.” “They’re hard to learn.” “They hinder refactoring.” “They hurt development speed.”

None of these are necessarily factual. After I poked and prodded, I figured out that the resistance to the idea of static typing comes from prior exposure to things like C++ or Java (or Go), which have especially verbose syntaxes and type systems. This is similar to what happens with functional programming, where a common reaction to the idea is “Ugh, I hated my lisp class in college.”

These realizations led Andy and me to take on a project: design a programming language with all the expressive power and semantic correctness of an ML, yet with a syntax and aesthetic that would be instantly recognizable to your typical Go or JavaScript programmer. Surely it’s achievable. This is largely a human factors project – while the PL theory is important, we do not intend to significantly innovate there. Instead, the main innovation is applying familiarity and usability to a domain with the opposite reputation.

In subsequent posts, I will describe some of DFPL’s design decisions.

PL Usability: Name Fewer Things

If you follow me on Twitter or know me in person, you know I’ve been somewhat obsessed with programming languages lately. Why? In my career I’ve written high-performance desktop software, embedded controllers, 3D engines, scalable and low-latency web stacks, frontend JavaScript stacks… and at this stage in my life I’m tired of dealing with stupid human mistakes like null pointer exceptions, uninitialized data, unhandled cases, accidentally passing a customer ID where you meant an experiment ID, accidentally running some nontransactional code inside of a transaction, specifying timeouts with milliseconds when the function expects seconds…

All of these problems are solvable with type systems. Having learned and written a pile of Haskell at IMVU, I’m now perpetually disappointed with other languages. Haskell has a beautiful mix of a powerful type system, bidirectional type inference, and concise syntax, but it also has a reputation for being notoriously hard to teach.

In fact, getting Haskell adopted in the broader IMVU engineering organization was a significant challenge, with respect to both evangelization (why is this important) and education (how do we use it).

Why is that? If Haskell is so great, what’s the problem? Well, there are a few reasons, but I personally interviewed nearly a dozen IMVU engineers to get their thoughts about Haskell and a common theme arose again and again:

There are too many NAMES in Haskell. String, ByteString, Text. Functor, Applicative, Monad. map, fmap. foldl, foldl’, foldr. Pure let … in syntax vs. monadic let syntax. Words words words.

Some people are totally okay with this. They have no problem reading reference documentation, memorizing and assigning precise meaning to every name. This is just a hypothesis, but I wonder whether mathematically-oriented people tend to fall into this category. Either way, it’s definitely true that some people tolerate a proliferation of names less than others.

As someone who can’t remember anything, I naturally stumbled into a guiding principle for API design: express concepts with a minimal set of names. This necessitates that the concepts be composable.

And now I’m going to pick on a few specific examples.

Maybe

My first example is Haskell’s maybe function.

First of all, what kind of name is that. I bet that, if you didn’t know what it did, you wouldn’t be able to guess. (A classic usability metric is how accurately things can be guessed.)

Second of all, why? Haskell already has pattern matches. You don’t need a tiny special function to pattern match Maybes. Compare, for some m :: Maybe Int:

maybe (return ()) (putStrLn . show) m

with:

case m of
  Just i -> putStrLn (show i)
  Nothing -> return ()

Sure, the latter is a bit more verbose, but I’d posit the explicit pattern match is almost always more readable in context. Why? Because maybe is yet another word to memorize. You need to know the order of its arguments and its meaning. The standard library would be no weaker without this function. Multiple experienced Haskellers have told me to prefer it over explicit pattern matches, but it doesn’t optimize for readability over the long term.

Pattern matches are great! They always work the same way (predictability!), and they’re quite explicit about what’s going on.

pathwalk

For my second example, the last thing I want to do is pick on this contributor, because the use case was valid and the pull request meaningful. (In the end, we found another way to provide the same functionality, so the contributor did add value to the project.)

But what I want to show is why I rejected this pull request. pathwalk was intended to be a small, light, learnable API that anyone could drop into a project. It exposes one function, pathWalk, with three variants, one for early exit, one for accumulation of a result, and one that uses lazy IO for convenience. All of these are intended to be, if not memorizable, quite predictable. The pull request, on the other hand, added a bunch of variations with nonobvious names, as well as six different callback types. For an idea as simple as directory traversal, introducing a pile of exported symbols would have a negative impact on usability. Thus, to keep the library’s interface small and memorizable, I rejected the PR.
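
For reference, the single entry point is used roughly like this (the signature is recalled from memory, so treat it as an approximation and check the package documentation for the exact types):

import System.Directory.PathWalk (pathWalk)

main :: IO ()
main =
  -- The callback is invoked once per directory with (dir, subdirs, files),
  -- much like Python's os.walk.
  pathWalk "src" $ \dir _subdirs files ->
    putStrLn (dir ++ ": " ++ show (length files) ++ " files")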

TypeScript

Now I’ll discuss the situation that led me to writing this post. My team at Dropbox recently adopted TypeScript to prevent a category of common mistakes. For our own sanity, we were already annotating parameters and return types — now we’re simply letting the computer verify correctness for us.

However, as we started to fill in type annotations for the program, a coworker of mine expressed some concern. "We shouldn’t write so many type definitions in this frontend code. JavaScript is a scripting language, not heavyweight like Java." That argument didn’t make sense to me. Surely it doesn’t matter whether we’re writing high-performance backend code or frontend code — type systems satisfy the same need in either situation. And all I was doing was annotating the parsed return type of a service request to prevent mistakes. The specific code looked something like this:

// at the top of the file
interface ParsedResponse {
  users: UserModel[];
  payloads: PayloadModel[];
}
 
// ...
 
// way down in the file
function makeServiceRequest(request: Request): Promise<ParsedResponse> {
  return new Promise((resolve, reject) => {
    // ...
    resolve({
      users: parsedUserModels,
      payloads: parsedPayloadModels,
    });
  });
}

At first I could not understand my coworker’s reaction. But I realized my coworker’s concerns weren’t about the types — after all, he would have documented the fields of the response in comments anyway, and, all else equal, who doesn’t want compiler verification that their code accesses valid properties?

What made him react so negatively was that we needed to come up with a name for this parsed response thing. Moreover, the definition of the response type was located far away from the function that used it. This adds some mental overhead. It’s another thing to look at and remember.

Fortunately, TypeScript uses structural typing (as opposed to nominal typing), and the advantage of structural typing is that types are compatible if the fields line up, so you can avoid naming intermediate types. Thus, if we change the code to:

function makeServiceRequest(request: Request): Promise<{
  users: UserModel[],
  payloads: PayloadModel[],
}> {
  return new Promise((resolve, reject) => {
    // ...
    resolve({
      users: parsedUserModels,
      payloads: parsedPayloadModels,
    });
  });
}

Now we can have type safety as well as the concision of a dynamic language. (And if we had bidirectional type inference, it would be even better. Alas, bidirectional type inference and methods are at odds with each other.)

Other Examples

git has the concept of an "index" where changes can be placed in preparation for an upcoming commit. However, the terminology around the index is inconsistent. Sometimes it’s referred to as the "cache", sometimes the "index", sometimes the "stage". I get it, "stage" is supposed to be the verb and "index" the noun, but how does that help me remember which option I need to pass to git diff? It doesn’t help that stage and index are each both nouns and verbs.

In the early 2000s I wrote a sound library which was praised for how easy it was to get up and running relative to other libraries at the time. I suspect some of this was due to needing to know only a couple of concepts to play sounds or music. FMOD, on the other hand, required that you learn about devices and mixing channels and buffers. Now, all of these are useful concepts, and every serious sound programmer will have to deeply understand them in the end. But from the perspective of someone coming in fresh, they just want to play a sound. They don’t want to spend time studying a bunch of foundational concepts. Audiere had the deeper concepts too, but it was useful to provide a simpler API for the simple 80% use case.

Python largely gets this right. If you want to know the size of some x, for any x, you simply write len(x). You don’t need a different function for each data type.

To summarize, whenever you provide an API or language of some kind, think hard about the names of the ideas, and whether you can eliminate any. It will improve the usability of your system.

Features All Test Frameworks Should Have

EDIT 2015-11-02: I added a couple more nice-to-haves which I think are pretty important. See the end of the list.

I’ve used half a dozen unit testing frameworks, and written nearly that many more. Here is the set of features that I consider a requirement in any test framework.

You’d think some of the following requirements are so obvious as to not need mentioning… I, too, have been shocked before. :)

Must-Haves

  • Minimal boilerplate. Writing tests should be frictionless, so setting up a new test file should be little more than a single import and maybe a top-level function call.

  • Similarly, each test name should only have to be uttered once. Some frameworks have you, after writing all of your tests, enumerate them in a test list. I’m even aware of a Haskell project that repeated each test name THREE times: once for the type signature, once for the test itself, and once for the test list.

  • Assertion failures should provide a stack trace, including filename and line number. There are test frameworks that make you name each assertion rather than giving a file and line number. Never make a human do what a computer can. :)

  • Assertion failures should also include any relevant values. For example, when asserting that x and y are equal, if they are not, the values of x and y should be shown.

  • Test file names and cases should be autodiscovered. It’s too easy to accidentally not run a bunch of tests because you forgot to register them.

  • The framework should support test fixtures — that is, common setup and teardown code per test. In addition, and this is commonly missed, fixtures should be nestable: test setup code should run from the most base fixture to the most derived fixture, then all the teardown code should run in the reverse order. Nested fixtures allow reusing common environments across many tests. The BDD frameworks tend to support that because nested contexts are one of their selling points.

  • It should be possible to define what I call “superfixtures”: code that runs before and after each test, whether or not the test specifies a fixture. This is useful for making general assertions across the code base (or regions thereof), such as “no test leaks memory”.

  • Support for abstract test cases. Abstract tests let you define a set of N tests that each operate on an interface and a set of M fixtures, each providing an implementation of that interface. This runs M*N tests total. This makes it easy to test that a bunch of implementations all expose the same behavior.

  • A rich set of comparison operators. For example, equality, identity, membership, and string matching. This allows tests to provide more context upon failure, but also makes it easy for programmers to write appropriate and concise tests in the first place. (Bonus points: there are frameworks like py.test that have a single assertion form, but examine the assertion expression to automatically print any relevant context upon failure.)

  • Printing to stdout should be unbuffered and interleaved properly with
    the test reporter’s output. I only include this because Tasty utterly fails this test. :)

is same as not caching:                OK (1.45s)He[lmo! [ 3T2h;i2s2 mis
 a [vmery[ 3i7n;n2o2cmen t   p u+t+S+t rOLKn,.  p aIs sheodp e1 0i0t  tdeosetssn.'
t a[fmfe c tt etshte  etxetsetr noault pmuetm.o
ize happy path:      OK

Nice-to-Haves

  • Customizable test reporting. There are two reasons. The first, colored test output, is a nice-to-have, but it’s a huge one, as it probably shaves a few seconds off of each visual scan of the test results. Also, integrating test output with continuous integration software is a big win too.

  • Parallelism. The built-in ability to run tests in parallel is a nice way to reduce testing turnaround time. Either opt-in or opt-out parallelism are okay. But, if necessary, it’s easy to work around the lack of parallelism and make efficient use of test hardware by dividing up the tests into even slices or chunks and running them on multiple machines or VMs.

  • Property-based testing, a la QuickCheck. While QuickCheck is amazing, and property-based testing will change your life, the bread and butter of your test suite will be unit tests.

  • Direct, convenient support for disabling tests. Without this capability, people just comment out the test, but commented-out tests don’t show up in the test metrics, so they tend to get forgotten. Jasmine handles this very well: simply prefix the disabled fixture or test with “x”. As in, if a test is spelled it('should return 2', function() { ... }), disabling it as easy as changing it to xit.

I could build the feature matrix across the test frameworks I’ve used, but only a handful are complete out of the box. (If anyone would like me to take a crack at filling out a feature matrix, let me know.)

The Python unit testing ecosystem is pretty great. Even the built-in unittest package has almost every feature. (I believe I had to manually extend TestCase to provide support for superfixtures.) The JavaScript world, until recently, was pretty anemic. QUnit was a wreck, last time I used it — there is no excuse for not including stack traces in test failures. Jasmine, on the other hand, supports almost everything I care about. (At IMVU, we ended up building imvujstest, part of imvujs.)

In the C++ world, UnitTest++ comes very close to being great. The only capabilities I’ve had to add were superfixtures, nested fixtures, and abstract test cases. In hindsight, I wish I’d open sourced that bit of C++ macros and templates while I could have. :)

go test by itself is way too simplistic to be used for a sophisticated test suite. Fortunately, the gocheck package is pretty good. It’s possible to make abstract tests work in gocheck, at the cost of some boilerplate. However, today, gocheck doesn’t support nested fixtures. I suspect they’d be amenable to a patch if anyone wants to take that on.

The Haskell unit testing ecosystem is less than ideal. Getting a proper framework that satisfies the above requirements takes considerably more effort than the other examples I’ve given. Everything I’ve described is possible with HUnit and various Template Haskell packages, but it takes quite a lot of package dependencies and language extensions. I have dreams of building my ideal Haskell unit test framework… perhaps the next time I work on a large Haskell project.

If you’re building a test framework, the most important thing to focus on is a rapid iteration flow: write a test, watch it fail, modify the code, watch the test pass. It should be easy for anyone to write a test, and easy for anyone to run them and interpret their output. The faster you can iterate, the more your mind stays focused on the actual problems at hand.

EDIT: More Nice-to-Haves

  • Copy and paste test names back into runner. It’s pretty common to want to run a single test again. The easiest way to support this is to allow the exact test name to be passed as a command line argument to the runner. Test frameworks that automatically strip underscores from test names or that output fixture names in some funky format automatically fail this. BDD frameworks fail this too because of their weird english-ish test name structure.
  • Test times. Tests that take longer than, say, one millisecond should have their running times output with the test result. Test times always creep up over time so it’s important to keep this visible.

Haskell is Not a Purely Functional Language

Haskell has a strange reputation. Some of the best programmers I've known, coming to Haskell, have made ridiculous assumptions like "It must be pretty hard to write real programs in Haskell, given you can't have side effects." Of course that statement can't be true — there are many real programs written in Haskell, and a program without a side effect wouldn't be worth running, right?

Everyone describes Haskell as purely-functional. Even the haskell.org front page uses the phrase "purely-functional" within the first ten words. I argue that purely-functional, while in some ways accurate, gives the wrong impression about Haskell.

I propose instead that, for most programmers, it's better to think of Haskell as a language with restricted effects.

Out of the box, Haskell has two effect contexts, one at each extreme: pure functions and IO. Pure functions cannot[1] have side effects at all. Their outputs depend solely on their inputs; thus, they can be evaluated in any order[2] (or multiple times or never at all). IO actions, however, can have arbitrary side effects. Thus, they must be sequenced and run in order.

IO is unrestricted, but usefully restricted subsets of IO can be defined atop it. Consider software transactional memory. One reason STM works so well in Haskell compared to other languages is that it does not need to track reads and writes to all memory: only those inside the STM execution context. STM is a restricted execution context – if arbitrary side effects were allowed in a transaction, they might be run zero or multiple times, which would be unsafe. But there's another benefit of this restricted context: transactions only need to track reads and writes to STM variables like TVars. Independent memory reads and writes, such as those that occur as part of the evaluation of pure functions, can be completely ignored — pure functions can have no side effects, so it's fine to reevaluate them as needed. This is why Haskell has some of the best STM support, as explained in Haskell is useless.
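
Here is a small sketch of that restriction in action: inside atomically, only STM actions are allowed, so the runtime can safely retry the transaction.

import Control.Concurrent.STM

-- Only STM effects (TVar reads/writes, retry) are possible here, so the
-- whole transaction can be re-run safely.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  balance <- readTVar from
  check (balance >= amount)          -- retries until funds are available
  writeTVar from (balance - amount)
  toBalance <- readTVar to
  writeTVar to (toBalance + amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 40)
  print =<< atomically ((,) <$> readTVar a <*> readTVar b)   -- prints (60,40)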

Restricted effect contexts are powerful. IMVU uses them to guarantee unit tests are deterministic. They also let you guarantee that, for example, a Redis transaction cannot issue a MySQL query or write to a file, because the Redis execution engine may arbitrarily retry or abort the transaction.

OK, so Haskell has restricted effects. Restricted effects are a useful, though quite uncommon, language feature.

Should it be called a "restricted effects language"? No, I don't think so. It is totally possible to write all of your code in IO with imperative variable updates and mutable data structures, just like you would in Python or C. It's also fine to write all of your code with pure functions invoked from a minimal main function. That is, Haskell supports many styles of programming.
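
For example, nothing stops you from writing plain imperative code in IO, with mutable references updated in a loop:

import Control.Monad (forM_)
import Data.IORef

main :: IO ()
main = do
  total <- newIORef (0 :: Int)
  forM_ [1 .. 10] $ \i ->
    modifyIORef' total (+ i)    -- ordinary in-place mutation
  print =<< readIORef total     -- prints 55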

One word or phrase is simply not sufficient to describe anything meaningfully complex, such as a programming language. Languages consist of many features.

In one of my favorite papers, Teaching Programming Languages in a Post-Linnean Age, Shriram Krishnamurthi argues that programming language paradigms are outdated — languages are designed with interacting sets of features and should be evaluated as such.

Programming language “paradigms” are a moribund and tedious legacy of a bygone age. Modern language designers pay them no respect, so why do our courses slavishly adhere to them? This paper argues that we should abandon this method of teaching languages, offers an alternative, reconciles an important split in programming language education, and describes a textbook that explores these matters.

The problem with using paradigms to describe a language is that they come across as exclusionary. "This language is functional, not imperative." "This language is object-oriented, not functional." However, almost all languages support a wide range of programming styles.

So let's compare a few languages broken down into a mix of individual language features:

Feature                             | Java           | Haskell | Go                     | Rust    | C              | Python
------------------------------------|----------------|---------|------------------------|---------|----------------|-------
Memory safety                       | Yes            | Yes     | Kinda[3]               | Yes     | No             | Yes
Garbage collection                  | Yes            | Yes     | Yes                    | No      | No             | Yes
Precise control over memory layout  | No             | No      | No                     | Yes     | Yes            | No
Concurrency model                   | 1:1            | M:N     | M:N                    | 1:1     | 1:1            | GIL
Obvious translation to machine code | No             | No      | Somewhat               | Yes     | Yes            | No
Generic programming[4]              | Yes            | Yes     | Kinda, via interface{} | Yes     | No             | Yes
Mutable data structures             | Yes            | Yes     | Yes                    | Yes     | Yes            | Yes
Higher-order functions              | Kinda[5]       | Yes     | Yes                    | Yes     | No             | Yes
Sum types                           | No             | Yes     | No                     | Yes     | No             | No
Runtime eval                        | Kinda[6]       | No      | No                     | No      | No             | Yes
Type inference                      | Limited        | Yes     | Limited                | Limited | No             | No
Tail calls                          | No             | Yes     | No                     | No      | No             | No
Exceptions                          | Yes            | Yes     | Kinda[7]               | No      | No             | Yes
"Goroutines", e.g. user threads     | No[8]          | Yes     | Yes                    | No      | No             | Yes
Function overloading                | Yes            | Yes     | No                     | No      | No             | No
Costless abstractions               | Depends on JIT | Yes     | No                     | Yes     | Kinda (macros) | No

The chart above is pretty handwavy, and we could quibble on the details, but my point is that languages should be compared using feature sets rather than paradigms. With feature sets it's possible to see which languages occupy similar points in the overall language design space. Haskell and Rust have related type systems, and Haskell and Java and Go have similar runtime semantics. It should be clear, then, that there is no taxonomy as simple as "functional", "imperative", or "object-oriented" that accurately classifies the differences between these languages.

(An aside: Whether a language supports a feature is often fuzzy, especially because some features can be broken apart into several aspects. Also, you can sometimes approximate one feature with others. For example, since Go doesn't support sum types, you can use type switches instead, at the cost of exhaustiveness checks and efficiency[9].)

"Purely-functional" is a description so generic that it doesn't fully convey what is interesting about Haskell. While Haskell has great support for functional programming, it has uniquely-powerful support for imperative programming too.

I recommend avoiding using the terms "functional language", "procedural language", "imperative language", "object-oriented language", and whatever other paradigms that have been popularized over the years.

Thanks to Imran Hameed for his feedback and suggestions. Of course, however, any mistakes are mine.

[1] Except via unsafe, user-beware mechanisms like unsafePerformIO and unsafeDupablePerformIO.

[2] I'm hand-waving a bit. If evaluating a value throws an exception, the exception must not be thrown unless the value is needed.

[3] Go is memory-safe except when one thread writes to an interface variable or slice and another reads from it.

[4] By generic programming I mean the ability to conveniently express, in the language, a computation that can operate on many different types of input. Any dynamic language will have this property because all values inhabit the same type. Typed languages with generic type systems also have this property. Go doesn't, so you must use interface{} with explicit casts or code generation to implement generic algorithms.

[5] The JVM doesn't really support higher-order functions.

[6] In the JVM, code can be loaded dynamically with a ClassLoader. But runtime implementations don't generally ship with a Java language compiler.

[7] Go has panic and recover which are roughly equivalent to try…catch but with manual exception filtering and rebubbling.

[8] Java has a user threading mechanism implemented with bytecode translation in a library called Quasar.

[9] A direct encoding for sum types can be unboxed, smaller than a word, and only require one indirection to access the payload. The equivalent Go using type switches and interfaces would use two words for the interface pointer and an additional indirection.

Code Density – Efficient but Painful?

When most people try to read math equations, a paper, some Haskell code, or even abstraction-heavy Python, they tend to gnash their teeth a bit and then whine about it being too hard to read. And some of that is fair – there is a great deal of overly complicated stuff out there.

But, in general, I’ve seen negative reactions to code that is simply dense, even if it’s factored well.

I’ve long been fascinated by how perceived time differs from actual time. In particular, humans are susceptible to believing that certain things are faster than others just because they feel faster. Consider the classic example of the MacOS mouse-based UI versus keyboard shortcuts.

We’ve done a cool $50 million of R & D on the Apple Human Interface. We discovered, among other things, two pertinent facts:

  • Test subjects consistently report that keyboarding is faster than mousing.
  • The stopwatch consistently proves mousing is faster than keyboarding.

This contradiction between user-experience and reality apparently forms the basis for many user/developers’ belief that the keyboard is faster.

Or consider automation versus just doing the grunt work.

Taking this analogy back to programming with abstractions, consider two ways to add one to every element in a list. One uses the map abstraction and the other manually iterates with a for loop.

new_list = map(lambda v: v + 1, my_list)

vs.

new_list = []
for v in my_list:
    new_list.append(v + 1)

map, once you understand it, immediately conveys more information than the for loop. When you see map being used, you know a few things:

  1. the output list has the same length as the input list
  2. the input list is not modified
  3. each output element only depends on the corresponding input element

To build this same level of understanding given a for loop, you have to read the loop body. In trivial examples like this one, it’s easy, but most loop bodies aren’t so simple.

map is common enough in programming languages that most people have no problem with it. But there are abstractions everywhere: monads, semigroups, actors, currying, coroutines, threads, sockets, closures… Each abstraction conveys some useful meaning to the programmer, often by applying some rules or restrictions.

Consider asynchronous programming with callback soup versus coroutines, tasks, or lightweight threads. When you are programming with callbacks it’s very easy to forget to call a callback or call it twice. This can result in some extremely hard-to-diagnose bugs. Coroutines / tasks provide a useful guarantee: asynchronous operations will return exactly once. This abstraction comes with a cost, however: the code is more terse and indirect, and depends on your knowledge of the abstraction, just like map above.

So right. Abstractions exist. They must be learned, but they provide some nice guarantees.

Applied in the extreme, abstractions result in extremely dense, terse code. In languages with weak support for cheap abstraction-building, like Go, this code would have to be spelled out manually. One might look at such dense Haskell code and exclaim "Agh, that’s unreadable." But, ignoring language familiarity, consider the amount of knowledge gained per unit of mental effort and time. The algorithm has a certain amount of inherent complexity. You can either read a whole lot of "simple" lines of code or a few "hard" lines of code, but you’ll build the same amount of understanding in the end.
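
As a small stand-in for the kind of density I mean (not the code referred to above), compare a one-line pipeline with the same algorithm spelled out:

import Data.List (group, sort)

-- Dense: each combinator carries a known guarantee, so one line tells you
-- the whole story once you know the vocabulary.
wordCounts :: String -> [(String, Int)]
wordCounts = map (\ws -> (head ws, length ws)) . group . sort . words

-- Fluffy: the same algorithm with explicit recursion over the sorted words.
wordCounts' :: String -> [(String, Int)]
wordCounts' input = go (sort (words input))
  where
    go [] = []
    go (w:rest) =
      let (same, different) = span (== w) rest
      in (w, 1 + length same) : go different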

My conjecture: People don’t like reading dense code, so it feels less productive than reading a lot of fluffy code, even if it’s actually faster. This is the same psychological effect as the MacOS mouse vs. keyboard shortcut feedback. I’m not aware of any comprehension speed studies across, say, Haskell and Java, but I wouldn’t be surprised if people feel slower when reading Haskell but are actually faster.

Perhaps, to maximize productivity, you want to optimize for unpleasantness – the day will be "longer" so you can get more done.

Why might density be unpleasant? I’m not at all sure why papers full of equations are so intimidating compared to prose, but I have a guess. When most people read, their eyes and attention subconsciously jump around a bit. When reading less dense material, this is fine — perhaps even beneficial. But with very dense material, each symbol or word conveys a lot of meaning, forcing the reader’s eyes and attention to move much more slowly and deliberately so the brain can stay caught up. This forced slowness hurts and feels wrong, even if it actually results in quicker comprehension. Again, totally a guess. I have no idea.

Maybe it would be worth bumping up the font size when reading math or dense Haskell code?

If you have any resources or citations on this topic, send them my way!

An Aside

My blog posts are frequently misinterpreted as if I’m making broader statements than I am, so here are some things I’m NOT saying:

I’m definitely not saying code that’s terse for the sake of it is always a win. For example, consider the maybe function: some people love it but is it really any clearer than just pattern matching as necessary?

Also, I am not saying Haskell is always faster to read than anything else. I’m talking about code density in general, including abstraction-heavy or functional-style Python, or a super-tight C algorithm versus fluffy OOP object soup.

Sum Types Are Coming: What You Should Know

Just as all mainstream languages now have lambda functions, I predict sum types are the next construct to spread outward from the typed functional programming community to the mainstream. Sum types are very useful, and after living with them in Haskell for a while, I miss them deeply when using languages without them.

Fortunately, sum types seem to be catching on: both Rust and Swift have them, and it sounds like TypeScript’s developers are at least open to the idea.

I am writing this article because, while sum types are conceptually simple, most programmers I know don’t have hands-on experience with them and thus don’t have a good sense of their usefulness.

In this article, I’ll explain what sum types are, how they’re typically represented, and why they’re useful. I will also dispel some common misconceptions that cause people to argue sum types aren’t necessary.

What is a Sum Type?

Sum types can be explained a couple ways. First, I’ll compare them to product types, which are extremely familiar to all programmers. Then I’ll show how sum types look (unsafely) implemented in C.

Every language has product types – tuples and structs or records. They are called product types because they’re analogous to the cartesian products of sets. That is, int * float is the set of pairs of values (int, float). Each pair contains an int AND a float.

If product types correspond to AND, sum types correspond to OR. A sum type indicates a value that is either X or Y or Z or …

Let’s take a look at enumerations and unions and show how sum types are a safe generalization of the two. Sum types are a more general form of enumerations. Consider C enums:

enum Quality {
  LOW,
  MEDIUM,
  HIGH
};

A value of type Quality must be one of LOW, MEDIUM, or HIGH (excepting uninitialized data or unsafe casts — we are talking about C of course). C also has a union language feature:

union Event {
  struct ClickEvent ce;
  struct PaintEvent pe;
};

ClickEvent and PaintEvent share the same storage location in the union, so if you write to one, you ought not read from the other. Depending on the version of the C or C++ specification, the memory will either alias or your program will have undefined behavior. Either way, at any point in time, it’s only legal to read from one of the components of a union.

A sum type, sometimes called a discriminated union or tagged variant, is a combination of a tag (like an enum) and a payload per possibility (like a union).

In C, to implement a kind of sum type, you could write something like:

enum EventType {
  CLICK,
  PAINT
};

struct ClickEvent {
  int x, y;
};
struct PaintEvent {
  Color color;
};

struct Event {
  enum EventType type;
  union {
    struct ClickEvent click;
    struct PaintEvent paint;
  };
};

The type field indicates which event struct is legal to access. Usage looks as follows:

// given some Event event
switch (event.type) {
  case CLICK:
    handleClickEvent(&event.click);
    break;
  case PAINT:
    handlePaintEvent(&event.paint);
    break;
  // most compilers will let us know if we didn't handle every event type
}

However, there is some risk here. Nothing prevents code from accessing .paint in the case that type is CLICK. At all times, every possible field in event is visible to the programmer.

A sum type is a safe formalization of this idea.

Sum Types are Safe

Languages like ML, Haskell, F#, Scala, Rust, Swift, and Ada provide direct support for sum types. I’m going to give examples in Haskell because the Haskell syntax is simple and clear. In Haskell, our Event type would look like this:

data Event = ClickEvent Int Int
           | PaintEvent Color

That syntax can be read as follows: there is a data type Event that contains two cases: it is either a ClickEvent containing two Ints or a PaintEvent containing a Color.

A value of type Event contains two parts: a small tag describing whether it’s a ClickEvent or PaintEvent followed by a payload that depends on the specific case. If it’s a ClickEvent, the payload is two integers. If it’s a PaintEvent, the payload is one color. The physical representation in memory would look something like [CLICK_EVENT_TAG][Int][Int] or [PAINT_EVENT_TAG][Color], much like our C code above. Some languages can even store the tag in the bottom bits of the pointer, which is even more efficient.

Now, to see what type of Event a value contains, and to read the event’s contents, you must pattern match on it.

-- given some event :: Event
case event of
  ClickEvent x y -> handleClickEvent x y
  PaintEvent color -> handlePaintEvent color

Sum types, paired with pattern matching, provide nice safety guarantees. You cannot read x out of an event without first verifying that it’s a ClickEvent. You cannot read color without verifying it’s a PaintEvent. Moreover, the color value is only in scope when the event is known to be a PaintEvent.

Sum Types are General

We’ve already discussed how sum types are more general than simple C-style enumerations. In fact, in a simple enumeration, since none of the options have payloads, the sum type can be represented as a single integer in memory. The following DayOfWeek type, for example, can be represented as efficiently as the corresponding C enum would.

data DayOfWeek = Sunday | Monday | Tuesday | Wednesday | Thursday | Friday | Saturday

Sum types can also be used to create nullable data types like C pointers or Java references. Consider F#’s option, or Rust’s Option, or Haskell’s Maybe:

data Maybe a = Nothing | Just a

(a is a generic type variable – that is, you can have a Maybe Int or Maybe Customer or Maybe (Maybe String)).

Appropriate use of Maybe comes naturally to programmers coming from Java or Python or Objective C — it’s just like using NULL or None or nil instead of an object reference except that the type signature of a data type or function indicates whether a value is optional or not.

When nullable references are replaced by explicit Maybe or Option, you no longer have to worry about NullPointerExceptions, NullReferenceExceptions, and the like. The type system enforces that required values exist and that optional values are safely pattern-matched before they can be dereferenced.

Imagine writing some code that closes whatever window has focus. The C++ would look something like this:

Window* window = get_focused_window();
window->close_window();

Oops! What if there is no focused window? Boom. The fix:

Window* window = get_focused_window();
if (window) {
  window->close_window();
}

But then someone could accidentally call a different method on window outside of the if statement… reintroducing the problem. To help mitigate this possibility, C++ does allow introducing names inside of a conditional:

if (Window* window = get_focused_window()) {
  window->close_window();
}

Pattern matching on Maybe avoids this problem entirely. There’s no way to even call close_window unless an actual window is returned, and the variable w is never bound unless there is an actual focused window:

window <- get_focused_window
case window of
  Nothing -> return ()
  Just w -> close_window w

This is a big win for correctness and clarity.

Thinking in Sum Types

Once you live with sum types for a while, they change the way you think. People coming from languages like Python or Java (myself included) to Haskell immediately gravitate towards tuples and Maybe since they’re familiar. But once you become accustomed to sum types, they subtly shift how you think about the shape of your data.

I’ll share a specific memorable example. At IMVU we built a Haskell URL library and we wanted to represent the "head" of a URL, which includes the optional scheme, host, username, password, and port. Everything before the path, really. This data structure has at least one important invariant: it is illegal to have a scheme with no host. But it is possible to have a host with no scheme in the case of protocol-relative URLs.

At first I structured UrlHead roughly like this:

type SchemeAndHost = (Maybe Scheme, Host)
type UrlHead = Maybe (Maybe SchemeAndHost)
-- to provide some context, the complete URL type follows
type Url = (UrlHead, [PathSegment], QueryString)

In hindsight, the structure is pretty ridiculous and hard to follow. But the idea is that, if the URL head is Nothing, then the URL is relative. If it’s Just Nothing, then the path is treated as absolute. If it’s Just (Just (Nothing, host)), then it’s a protocol-relative URL. Otherwise it’s a fully-qualified URL, and the head contains both a scheme and a host.

However, after I started to grok sum types, a new structure emerged:

data UrlHead
  = FullyQualified ByteString UrlHost
  | ProtocolRelative UrlHost
  | Absolute
  | Relative

Now the cases are much clearer. And they have explicit names and appropriate payloads!
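
For example, a use site now reads directly off the case names (a small sketch against the type above):

-- Whether a URL head denotes a relative reference is obvious at a glance.
isRelative :: UrlHead -> Bool
isRelative urlHead = case urlHead of
  FullyQualified _ _ -> False
  ProtocolRelative _ -> False
  Absolute           -> False
  Relative           -> True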

Sum Types You Already Know

There are several sum types that every programmer has already deeply internalized. They’ve internalized them so deeply that they no longer think about the general concept. For example, "This variable either contains a valid reference to class T or it is null." We’ve already discussed optional types in depth.

Another common example is when functions can return failure conditions. A fallible function either returns a value or it returns some kind of error. In Haskell, this is usually represented with Either, where the error is on the Left. Similarly, Rust uses the Result type. It’s relatively rare, but I’ve seen Python functions that either return a value or an object that derives from Exception. In C++, functions that need to return more error information will usually return the error status by value and, in the case of success, copy the result into an out parameter.
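
For reference, Either is itself just a two-case sum type: data Either a b = Left a | Right b. Here's a tiny sketch of a fallible function in that style (parseAge is hypothetical):

-- By convention, the error goes on the Left and the result on the Right.
parseAge :: String -> Either String Int
parseAge s = case reads s of
  [(age, "")] | age >= 0 -> Right age
  _                      -> Left ("not a valid age: " ++ s)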

Obviously languages with exceptions can throw an exception, but exceptions aren’t a general error-handling solution for the following reasons:

  1. What if you temporarily want to store the result or error? Perhaps in a cache or promise or async job queue. Using sum types allows sidestepping the complicated issue of exception transferability.
  2. Some languages either don’t have exceptions or limit where they can be thrown or caught.
  3. If used frequently, exceptions generally have worse performance than simple return values.

I’m not saying exceptions are good or bad – just that they shouldn’t be used as an argument for why sum types aren’t important. :)

Another sum type many people are familiar with is values in dynamic languages like JavaScript. A JavaScript value is one of many things: either undefined or null or a boolean or a number or an object or… In Haskell, the JavaScript value type would be defined approximately as such:

data JSValue = Undefined
             | Null
             | JSBool Bool
             | JSNumber Double
             | JSString Text
             | JSObject (HashMap Text JSValue)

I say approximately because, for the sake of clarity, I left out all the other junk that goes on JSObject too. ;) Like whether it’s an array, its prototype, and so on.

JSON is a little simpler to define:

data JSONValue = Null
               | True
               | False
               | String Text
               | Number Double
               | Array [JSONValue]
               | Object (HashMap Text JSONValue)

Notice this type is recursive — Arrays and Objects can refer to other JSONValues.
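
To illustrate the recursion, here's a small value of that type (a sketch assuming OverloadedStrings and Data.HashMap.Strict from unordered-containers):

{-# LANGUAGE OverloadedStrings #-}
import qualified Data.HashMap.Strict as HashMap

-- Roughly {"name": "Ada", "scores": [1, 2]}: Objects and Arrays
-- contain further JSONValues.
example :: JSONValue
example = Object (HashMap.fromList
  [ ("name",   String "Ada")
  , ("scores", Array [Number 1, Number 2])
  ])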

Protocols, especially network protocols, are another situation where sum types frequently come up. Network packets will often contain a bitfield of some sort describing the type of packet, followed by a payload, just like discriminated unions. This same structure is also used to communicate over channels or queues between concurrent processes. Some example protocol definitions modeled as sum types:

data JobQueueCommand = Quit | LogStatus | RunJob Job
data AuthMethod = Cookie Text | Secret ByteString | Header ByteString
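
Here's a minimal sketch of the queue case (workerLoop is hypothetical, runJob is a hypothetical stub, and Job is assumed from the definition above):

import Control.Concurrent.Chan (Chan, readChan)

-- Hypothetical stand-in for whatever actually executes a Job.
runJob :: Job -> IO ()
runJob _ = return ()

-- A worker thread reads commands off a channel and dispatches on the case.
workerLoop :: Chan JobQueueCommand -> IO ()
workerLoop chan = do
  command <- readChan chan
  case command of
    Quit       -> return ()
    LogStatus  -> putStrLn "worker: alive" >> workerLoop chan
    RunJob job -> runJob job >> workerLoop chan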

Sum types also come up whenever sentinel values are needed in API design.
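
For example, instead of reserving a magic value like -1 to mean "no timeout", a hypothetical API can name that case directly:

-- The "no timeout" case is explicit instead of being encoded as -1.
data Timeout = NoTimeout | TimeoutMillis Int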

Approximating Sum Types

If you've ever tried to implement a JSON AST in a language like C++ or Java or Go, you will see that the lack of sum types makes safely and efficiently expressing the possibilities challenging. There are a couple of ways this is typically handled. The first is with a record containing many optional values.

struct JSONValue {
  JSONBoolean* b;
  JSONString* s;
  JSONNumber* n;
  JSONArray* a;
  JSONObject* o;
};

The implied invariant is that only one value is defined at a time. (And perhaps, in this example, JSON null is represented by all pointers in JSONValue being null.) The limitation here is that nothing stops someone from making a JSONValue where, say, both a and o are set. Is it an array? Or an object? The invariant is broken, so it's ambiguous. This costs us some type safety. This approximation, by the way, is equivalent to Go's errors-as-multiple-return-values idiom. Go functions return a result and an error, and it's assumed (but not enforced) that only one is valid at a time.

Another approach to approximating sum types is using an interface and classes like the following Java:

interface JSONValue {}
class JSONBoolean implements JSONValue { bool value; }
class JSONString implements JSONValue { String value; }
class JSONArray implements JSONValue { ArrayList<JSONValue> elements; }
// and so on

To check the specific type of a JSONValue, you need a runtime type lookup, something like C++’s dynamic_cast or Go’s type switch.

This is how Go, and many C++ and Java JSON libraries, represent the AST. The reason this approach isn’t ideal is because there’s nothing stopping anyone from deriving new JSONValue classes and inserting them into JSON arrays or objects. This weakens some of the static guarantees: given a JSONValue, the compiler can’t be 100% sure that it’s only a boolean, number, null, string, array, or object, so it’s possible for the JSON AST to be invalid. Again, we lose type safety without sum types.

There is a third approach for implementing sum types in languages without direct support, but it involves a great deal of boilerplate. In the next section I’ll discuss how this can work.

The Expression Problem

Sometimes, when it comes up that a particular language doesn’t support sum types (as most mainstream languages don’t), people make the argument "You don’t need sum types, you can just use an interface for the type and a class for each constructor."

That argument sounds good at first, but as I'll explain, interfaces and sum types have different (and somewhat opposite) use cases.

As I mentioned in the previous section, it’s common in languages without sum types, such as Java and Go, to represent a JSON AST as follows:

interface JSONValue {}
class JSONBoolean implements JSONValue { bool value; }
class JSONString implements JSONValue { String value; }
class JSONNumber implements JSONValue { double value; }
class JSONArray implements JSONValue { ArrayList<JSONValue> elements; }
class JSONObject implements JSONValue { HashMap<String, JSONValue> properties; }

As I also mentioned, this structure does not rigorously enforce that the ONLY thing in, say, a JSON array is a null, a boolean, a number, a string, an object, or another array. Some other random class could derive from JSONValue, even if it’s not a sensible JSON value. The JSON encoder wouldn’t know what to do with it. That is, interfaces and derived classes here are not as type safe as sum types, as they don’t enforce valid JSON.

With sum types, given a value of type JSONValue, the compiler and programmer know precisely which cases are possible, so any code in the program can safely and completely enumerate the possibilities. We can use JSONValue anywhere in the program without modifying the cases at all. But if we add a new case to JSONValue, then we potentially have to update all uses. That is, it is much easier to use the sum type in new situations than to modify the list of cases. (Imagine how much code you'd have to update if someone said, "Oh, by the way, all pointers in this Java program can have a new state: they're either null, valid, or lazy, in which case you have to force them." Remember that nullable references are a limited form of sum types. That change would require a bloodbath of code updates across every Java program ever written.)
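
To make that tangible with the JSValue type from earlier (a sketch; typeName is a hypothetical function): adding a new use is just writing another function over the existing cases, while adding a new case would force every such function to change.

-- Easy: a new "use" that enumerates the existing cases of JSValue.
typeName :: JSValue -> String
typeName value = case value of
  Undefined  -> "undefined"
  Null       -> "null"
  JSBool _   -> "boolean"
  JSNumber _ -> "number"
  JSString _ -> "string"
  JSObject _ -> "object"
-- Hard: adding a new case (say, JSSymbol Text) would make every
-- such pattern match in the program incomplete until it is updated.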

The opposite situation occurs with interfaces and derived classes. Given an interface, you don’t know what class implements it — code consuming an interface is limited to the API provided by the interface. This gives a different degree of freedom: it’s easy to add new cases (e.g. classes deriving from the interface) without updating existing code, but your uses are limited to the functionality exposed by the interface. To add new methods to the interface, all existing implementations must be updated.

To summarize:

  • With sum types, it’s easy to add uses but hard to add cases.
  • With interfaces, it’s easy to add cases but hard to add uses.

Each has its place and neither is a great substitute for the other. This is known as The Expression Problem.

To show why the argument that sum types can be replaced with interfaces is weak, let us reduce the scenario to the simplest non-enumeration sum type: Maybe.

interface Maybe {
}

class Nothing implements Maybe {
}

class Just implements Maybe {
  Object value;
}

What methods should Maybe have? Well, really, all you can do is run different code depending on whether the Maybe is a Nothing or Just.

interface MaybeVisitor {
  void visitNothing();
  void visitJust(Object value);
}

interface Maybe {
  void visit(MaybeVisitor visitor);
}

This is the visitor pattern, which is another way to approximate sum types in languages without them. Visitor has the right maintainability characteristics (easy to add uses, hard to add cases), but it involves a great deal of boilerplate. It also requires two indirect function calls per pattern match, so it’s dramatically less efficient than a simple discriminated union would be. On the other hand, direct pattern matches of sum types can be as cheap as a tag check or two.

Another reason visitor is not a good replacement for sum types in general is that the boilerplate is onerous enough that you won’t start "thinking in sum types". In languages with lightweight sum types, like Haskell and ML and Rust and Swift, it’s quite reasonable to use a sum type to reflect a lightweight bit of user interface state. For example, if you’re building a chat client, you may represent the current scroll state as:

type DistanceFromBottom = Int
data ScrollState = ScrolledUp DistanceFromBottom | PeggedToBottom

This data type only has a distance from bottom when scrolled up, not pegged to the bottom. Building a visitor just for this use case is so much code that most people would sacrifice a bit of type safety and instead simply add two fields.

bool scrolledUp;
int distanceFromBottom; // only valid when scrolledUp
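
With the ScrollState sum type, by contrast, the distance only exists in the branch where it means something (a small sketch):

-- distanceFromBottom is only in scope when we are actually scrolled up.
scrollIndicator :: ScrollState -> String
scrollIndicator PeggedToBottom = "at bottom"
scrollIndicator (ScrolledUp distanceFromBottom) =
  show distanceFromBottom ++ " px from bottom"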

Another huge benefit of pattern matching sum types over the visitor pattern is that pattern matches can be nested or have wildcards. Consider a function that can either return a value or some error type. Haskell convention is that errors are on the Left and values are on the Right branch of an Either.

data FetchError = ConnectionError | PermissionError String | JsonDecodeError
fetchValueFromService :: IO (Either FetchError Int)

-- Later on...

result <- fetchValueFromService
case result of
  Right value -> processResult value
  Left ConnectionError -> error "failed to connect"
  Left (PermissionError username) -> error ("try logging in, " ++ username)
  Left JsonDecodeError -> error "failed to decode JSON"
  _ -> error "unknown error"

Expressing this with the visitor pattern would be extremely painful.

Paul Koerbitz comes to a similar conclusion.

Named Variants? or Variants as Types?

Now I'd like to talk a little about how sum types are specified from a language design perspective.

Programming languages that implement sum types have to decide how the ‘tag’ of the sum type is represented in code. There are two main approaches languages take. Either the cases are given explicit names or each case is specified with a type.

Haskell, ML, Swift, and Rust all take the first approach. Each case in the type is given a name. This name is not a type – it's more like a constant that describes which 'case' the sum type value currently holds. Haskell calls these names "data constructors" because they construct values of the sum type. From the Rust documentation:

enum Message {
    Quit,
    ChangeColor(i32, i32, i32),
    Move { x: i32, y: i32 },
    Write(String),
}

Quit and ChangeColor are not types. They are values. Quit is a Message by itself, but ChangeColor is a function taking three ints and returning a Message. Either way, the names Quit, ChangeColor, Move, and Write indicate which case a Message contains. These names are also used in pattern matches. Again, from the Rust documentation:

fn quit() { /* ... */ }
fn change_color(r: i32, g: i32, b: i32) { /* ... */ }
fn move_cursor(x: i32, y: i32) { /* ... */ }
match msg {
    Message::Quit => quit(),
    Message::ChangeColor(r, g, b) => change_color(r, g, b),
    Message::Move { x: x, y: y } => move_cursor(x, y),
    Message::Write(s) => println!("{}", s),
};
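
In Haskell the same example would look like this; note again that the case names are ordinary values and functions (a sketch of the corresponding type):

data Message
  = Quit
  | ChangeColor Int Int Int
  | Move Int Int
  | Write String

-- Quit is a plain Message value; ChangeColor is a function to a Message.
quitMessage :: Message
quitMessage = Quit

gray :: Int -> Message
gray v = ChangeColor v v v   -- ChangeColor :: Int -> Int -> Int -> Message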

The other way to specify the cases of a sum type is to use types themselves. This is how C++’s boost.variant and D’s std.variant libraries work. An example will help clarify the difference. The above Rust code translated to C++ would be:

struct Quit {};
struct ChangeColor { int r, g, b; };
struct Move { int x; int y; };
struct Write { std::string message; };
typedef variant<Quit, ChangeColor, Move, Write> Message;

msg.match(
  [](const Quit&) { quit(); },
  [](const ChangeColor& cc) { change_color(cc.r, cc.g, cc.b); },
  [](const Move& m) { move_cursor(m.x, m.y); },
  [](const Write& w) { std::cout << w.message << std::endl; }
);

Types themselves are used to index into the variant. There are several problems with using types to specify the cases of sum types. First, it’s incompatible with nested pattern matches. In Haskell I could write something like:

data MouseButton = LeftButton | RightButton | MiddleButton | ExtraButton Int
data MouseEvent = MouseDown MouseButton Int Int | MouseUp MouseButton Int Int | MouseMove Int Int

-- ...

case mouseEvent of
  MouseDown LeftButton x y -> beginDrag x y
  MouseUp LeftButton x y -> endDrag x y

In C++, using the variant<> template described above, I’d have to do something like:

struct LeftButton {};
struct RightButton {};
struct MiddleButton {};
struct ExtraButton { int b; };
typedef variant<LeftButton, RightButton, MiddleButton, ExtraButton> MouseButton;

struct MouseDown { MouseButton button; int x; int y; };
struct MouseUp { MouseButton button; int x; int y; };
struct MouseMove { int x; int y; };
typedef variant<MouseDown, MouseUp, MouseMove> MouseEvent;

// given: MouseEvent mouseEvent;

mouseEvent.match(
  [](const MouseDown& event) {
    event.button.match([&](LeftButton) {
      beginDrag(event.x, event.y);
    });
  },
  [](const MouseUp& event) {
    event.button.match([&](LeftButton) {
      endDrag(event.x, event.y);
    });
  }
);

You can see that, in C++, you can’t pattern match against MouseDown and LeftButton in the same match expression.

(Note: It might look like I could compare with == to simplify the code, but in this case I can’t because the pattern match extracts coordinates from the event. That is, the coordinates are a "wildcard match" and their value is irrelevant to whether that particular branch runs.)

Also, it’s so verbose! Most C++ programmers I know would give up some type safety in order to fit cleanly into C++ syntax, and end up with something like this:

struct Button {
  enum ButtonType { LEFT, MIDDLE, RIGHT, EXTRA } type;
  int b; // only valid if type == EXTRA
};

struct MouseEvent {
  enum EventType { MOUSE_DOWN, MOUSE_UP, MOUSE_MOVE } type;
  Button button; // only valid if type == MOUSE_DOWN or type == MOUSE_UP
  int x;
  int y;
};

Using types to index into variants is attractive – it doesn't require adding any notion of named constructors to the language. Instead it uses existing language functionality to describe the variants. However, it doesn't play well with type inference or pattern matching, especially when generics are involved. If you pattern match using type names, you must explicitly spell out each fully-qualified generic type, rather than letting type inference figure out what is what:

auto result = some_fallible_function();
// result is an Either<Error, std::map<std::string, std::vector<int>>>
result.match(
  [](Error& e) {
    handleError(e);
  },
  [](std::map<std::string, std::vector<int>>& result) {
    handleSuccess(result);
  }
);

Compare to the following Haskell, where the error and success types are inferred and thus implicit:

result <- some_fallible_action
case result of
  Left e ->
    handleError e
  Right result ->
    handleSuccess result

There’s an even deeper problem with indexing variants by type: it becomes illegal to write variant<int, int>. How would you know if you’re referring to the first or second int? You might say "Well, don’t do that", but in generic programming that can be difficult or annoying to work around. Special-case limitations should be avoided in language design if possible – we’ve already learned how annoying void can be in generic programming.
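
With named constructors this problem simply doesn't arise: two cases can carry the same payload type, because the names rather than the types distinguish them. A tiny sketch:

-- Both cases carry a Double; the constructor names tell them apart.
data Distance = Meters Double | Feet Double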

These are all solid reasons, from a language design perspective, to give each case in a sum type an explicit name. This could address many of the concerns raised with respect to adding sum types to the Go language. (See also this thread.) The Go FAQ specifically calls out that sum types are not supported in Go because they interact confusingly with interfaces, but that problem is entirely sidestepped by named constructors. (There are other reasons retrofitting sum types into Go at this point is challenging, but their interaction with interfaces is a red herring.)

It's likely not a coincidence that the languages with sum types and named constructors are the same ones with pervasive type inference.

Summary

I hope I’ve convinced you that sum types, especially when paired with pattern matching, are very useful. They’re easily one of my favorite features of Haskell, and I’m thrilled to see that new languages like Rust and Swift have them too. Given their utility and generality, I expect more and more languages to grow sum types in one form or another. I hope the language authors do some reading, explore the tradeoffs, and similarly come to the conclusion that Haskell, ML, Rust, and Swift got it right and should be copied, especially with respect to named cases rather than "union types". :)

To summarize, sum types:

  • provide a level of type safety not available otherwise.
  • have an efficient representation, more efficient than vtables or the visitor pattern.
  • give programmers an opportunity to clearly describe possibilities.
  • with pattern matching, provide excellent safety guarantees.
  • are an old idea, and are finally coming back into mainstream programming!

I don’t know why, but there’s something about the concept of sum types that makes them easy to dismiss, especially if you’ve spent your entire programming career without them. It takes experience living in a sum types world to truly internalize their value. I tried to use compelling, realistic examples to show their utility and I hope I succeeded. :)

Endnotes

Terminology

In this article, I’ve used the name sum type, but tagged variant, tagged union, or discriminated union are fine names too. The phrase sum type originates in type theory and is a denotational description. The other names are operational in that they describe the implementation strategy.

Terminology is important, though. When Rust introduced sum types, they had to name them something. They happened to settle on enum, which is a bit confusing for people coming from languages where enums cannot carry payloads. There's a corresponding argument that they should have been called union, but that's confusing too, because sum types aren't just about sharing storage the way C unions are. Sum types are a combination of the two, so neither keyword fits exactly. Personally, I'm partial to Haskell's data keyword because it is used for both sum and product types, sidestepping the confusion entirely. :)

More Reading

If you’re convinced, or perhaps not yet, and you’d like to read more, some great articles have been written about the subject:

Wikipedia has two excellent articles, distinguishing between Tagged Unions and Algebraic Data Types.

Sum types and pattern matching go hand in hand. Wikipedia, again, describes pattern matching in general.

TypeScript's union types are similar to sum types, though they don't use named constructors.

FP Complete has another nice introduction to sum types.

For what it’s worth, Ada has had a pretty close approximation of sum types for decades, but it did not spread to other mainstream languages. Ada’s implementation isn’t quite type safe, as accessing the wrong case results in a runtime error, but it’s probably close enough to safe in practice.

Much thanks goes to Mark Laws for providing valuable feedback and corrections. Of course, any errors are my own.