JSON Never Dies – An Efficient, Queryable Binary Encoding

A familiar story: Team starts project. Team uses JSON for interchange; it’s easy and ubiquitous and debuggable. Project grows. Entity types multiply. Data sizes increase. Finally, team notices hundreds of kilobytes — or even megabytes — of JSON are being generated, transferred, and parsed.

“Just use a binary format!” the Internet cries.

But by the time a project gets to this stage, it’s often a nontrivial amount of work to switch to Protocol Buffers or Cap’n Proto or Thrift or whatever. There might be thousands of lines of code for mapping model objects to and from JSON (that is, arrays, objects, numbers, strings, and booleans). And if you’re talking about some hand-rolled binary format, it’s even worse: you need implementations for all of your languages and to make sure they’re fuzzed and secure.

The fact is, the activation energy required to switch from JSON to something else is high enough that it rarely happens. The JSON data model tends to stick. However, if we could insert a single function into the network layer to represent JSON more efficiently, then that could be an easy, meaningful win.

“Another binary JSON? We already have CBOR, MsgPack, BSON, and UBJSON!” the Internet cries.

True, but, at first glance, none of those formats actually seem all that great. JSON documents tend to repeat the same strings over and over again, but those formats don’t support reusing string values or object shapes.

This led me to perform some experiments. What might a tighter binary representation of JSON look like?

What’s in a format?

First, what are some interesting properties of a file format?

  • Flexibility and Evolvability: Well, we’re talking about representing JSON here, so they’re all going to be similar. However, some of the binary JSON replacements also have support for dates, binary blobs, and 64-bit integers.
  • Size: How efficiently is the information encoded in memory? Uncompressed size matters because it affects peak memory usage and it’s how much data the post-decompression parser has to touch.
  • Compressibility: Since you often get some sort of LZ compression “for free” in the network or storage interface, there’s value in the representation being amenable to those compression algorithms.
  • Code Size: How simple are the encoders and decoders? Beyond the code size benefits, simple encoders and decoders are easier to audit for correctness, resource consumption bounds, and security vulnerabilities.
  • Decoder Speed: How quickly can the entire file be scanned or processed? For comparison, JSON can be parsed at a rate of hundreds of MB/s.
  • Queryability: Often we only want a subset of the data given to us. Does the format allow O(1) or O(lg N) path queries? Can we read the format without first parsing it into memory?

Size and parse-less queryability were my primary goals with JND (JSON Never Dies). My hypothesis was that, since many JSON documents have repeating common structures (including string keys), storing strings and object shapes in a table would result in significant size wins.

Quickly glancing at the mainstream binary JSON encodings…

MsgPack

Each value starts with a tag byte followed by its payload. e.g. “0xdc indicates array, followed by a 16-bit length, followed by N values”. Big-endian integers.

Must be parsed before querying.

BSON

Per spec, does not actually seem to be a superset of JSON? Disallows NUL bytes in object keys, and does not support arrays as root elements, as far as I can tell.

Otherwise, similar encoding as MsgPack. Each value has a tag byte followed by a payload. At least it uses little-endian integers.

UBJSON

Same idea. One-byte tags for each value, followed by a payload. Notably, lists and objects have terminators, and may not have an explicit length. Kind of an odd decision IMO.

Big-endian again. Weird.

CBOR

IETF standard. Has a very intentional specification with documented rationales. Supports arrays of known sizes and arrays that terminate with a special “break” element. Smart 3-bit major tag followed by 5-bit length with special values for “length follows in next 4 bytes”, etc.
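To make that concrete, here is a minimal sketch of decoding a CBOR item header. It just follows the layout described above (3-bit major type, 5-bit additional info); indefinite lengths and error handling are omitted:

#include <cstdint>
#include <cstddef>

// Minimal sketch of reading a CBOR item header: a 3-bit major type plus a
// 5-bit "additional info" field that either holds the argument directly or
// says how many big-endian bytes follow.
struct CborHead {
    uint8_t major;   // 0=uint, 1=negint, 2=bytes, 3=text, 4=array, 5=map, 6=tag, 7=simple/float
    uint64_t arg;    // length, integer value, tag number, etc.
    size_t consumed; // bytes read, including the initial byte
};

CborHead read_cbor_head(const uint8_t* p) {
    uint8_t initial = p[0];
    uint8_t major = initial >> 5;
    uint8_t info = initial & 0x1f;
    if (info < 24) {
        return {major, info, 1};         // argument is the additional info itself
    }
    size_t extra = size_t(1) << (info - 24);   // 24→1, 25→2, 26→4, 27→8 bytes
    uint64_t arg = 0;
    for (size_t i = 0; i < extra; ++i) {
        arg = (arg << 8) | p[1 + i];           // big-endian argument
    }
    return {major, arg, 1 + extra};
}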

Big-endian again… Endianness doesn’t matter all that much, but it’s kind of weird to see formats using the endianness that’s less common these days.

CBOR does not support string or object shape tables, but at first glance it does not seem like CBOR sucks. I can imagine legitimate technical reasons to use it, though it is a quite complicated specification.

JND!

Okay! All of those formats have roughly the same shape: one-byte prefixes on every value, with payloads inline (and thus values are variable-width).

Now it’s time to look at the format I sketched up.

The file consists of a simple header marking the locations and sizes of three tables: values, strings, and object shapes.

The string table consists of raw UTF-8 string data.

In the value table, every value starts with a tag byte (sound familiar?). The high nibble encodes the type. The low nibble contains two 2-bit values, k and n.

Consider strings:

0011 kknn [1<<k bytes] [1<<n bytes] - string: offset, length into string table

0011 indicates this is a string value. k and n are “size tags” which indicate how many bytes encode the integers. The string offset is 1 << k bytes (little-endian) and the string length is 1 << n bytes. Once the size tags are decoded, the actual offset and length values are read following the tag byte, and the resulting indices are used to retrieve the UTF-8 text at the given offset and length from the string table.
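To make the layout concrete, here is roughly what decoding a string value would look like. This is my sketch of the scheme just described, not code from the JND repo, and the helper names are made up:

#include <cstdint>
#include <cstddef>
#include <string_view>

// Read a little-endian unsigned integer of 1 << size_tag bytes.
static uint64_t read_le(const uint8_t* p, unsigned size_tag) {
    size_t width = size_t(1) << size_tag;   // 1, 2, 4, or 8 bytes
    uint64_t value = 0;
    for (size_t i = 0; i < width; ++i) {
        value |= uint64_t(p[i]) << (8 * i);
    }
    return value;
}

// Decode a string value: tag byte 0011kknn, then a 1<<k-byte offset and a
// 1<<n-byte length into the string table.
std::string_view decode_string(const uint8_t* value, const char* string_table) {
    uint8_t tag = value[0];
    unsigned k = (tag >> 2) & 3;   // size tag for the offset
    unsigned n = tag & 3;          // size tag for the length
    uint64_t offset = read_le(value + 1, k);
    uint64_t length = read_le(value + 1 + (size_t(1) << k), n);
    return std::string_view(string_table + offset, size_t(length));
}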

Now let’s look at objects:

1000 kknn [1<<k bytes] - object: object shape index, then <length> value indices of width 1<<n bytes

The following 1 << k bytes encode the index into the object shape table, which holds the number and sorted list of object keys. Afterwards is a simple list of indices into the value table, each of size 1 << n bytes. The values are matched up with the keys in the object shape table.

Arrays are similar, except that instead of using an object index, they simply store their length.

This encoding has the property that lookups into an array are O(1) and lookups into an object are O(lg N), giving efficient support for path-based queries.
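As a rough sketch of how an object field lookup could work against such a layout — the structures here are hypothetical stand-ins for views over the JND buffer, not the actual implementation:

#include <cstdint>
#include <string>
#include <vector>
#include <optional>
#include <algorithm>

// Stand-ins for decoded views over the file; the real format would read the
// fixed-width value indices straight out of the buffer without copying.
struct ObjectShape {
    std::vector<std::string> sorted_keys;   // from the object shape table
};

struct ObjectValue {
    const ObjectShape* shape;
    std::vector<uint64_t> value_indices;    // 1<<n-byte indices into the value table
};

// O(lg N) field lookup: binary-search the sorted keys in the shape table,
// then return the matching value-table index. No parsing, no allocation.
std::optional<uint64_t> lookup(const ObjectValue& obj, const std::string& key) {
    const auto& keys = obj.shape->sorted_keys;
    auto it = std::lower_bound(keys.begin(), keys.end(), key);
    if (it == keys.end() || *it != key) return std::nullopt;
    return obj.value_indices[size_t(it - keys.begin())];
}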

But there’s a pretty big downside relative to MsgPack, CBOR, and the like. The cost of efficient random access is that the elements of arrays and objects must have a known size. Thus, the (variable-width) values themselves cannot be stored directly into the array’s element list. Instead, arrays and objects have a list of fixed-width numeric offsets into the value table. This adds a level of indirection, and thus overhead, to JND. The payoff is that once a particular value is written (like a string or double-precision float), its index can be reused and referenced multiple times.

So how does this play out in practice? Net win or loss?

Size Benchmarks

I used sajson’s standard corpus of test JSON documents.

  • apache_builds: Root object; largely consists of an array containing many three-element objects.
  • github_events: Mix of objects, arrays, and long URL strings.
  • getinitialstate: I can’t share the contents of this document as it’s from a project at work, but it’s 1.7 MB of various entity definitions, where each entity type is an object with maybe a dozen or two fields.
  • instruments: A huge number of repeated structures and data — very compressible.
  • mesh: 3D geometry. Basically a giant list of floating point and integer numbers.
  • svg_menu: Only 600 bytes – used to test startup and base overhead costs.
  • twitter: List of fairly large objects, many long strings.
  • update-center: From Jenkins. Mostly consists of an object representing a mapping from plugin name to plugin description.

Conclusions

We can draw a few conclusions from these results.

  • As a wire replacement for the JSON data model, there’s no apparent reason to use BSON or UBJSON. I’d probably default to MsgPack because it’s simple, but CBOR might be okay too.
  • The current crop of binary formats don’t compress any better than JSON itself, so if size over the wire is your primary concern, you might as well stick with JSON.
  • Except for small documents, where JND’s increased representation overhead dominates, JND is pretty much always smaller uncompressed. As predicted, reusing strings and object shapes is a big win in practice.
  • LZ-style compression algorithms don’t like JND much. Not a big surprise, since they don’t do a good job with sequences of numeric values. I expect delta coding value offsets in arrays and objects would help a lot (sketched below), at the cost of needing to do a linear pass from delta to absolute at encode and decode time.
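For reference, the delta-coding transform mentioned above is just this — a generic sketch, not something JND does today:

#include <cstdint>
#include <vector>

// Store each element offset as the difference from the previous one, which
// produces many small, repetitive values that LZ compressors handle better.
std::vector<uint64_t> delta_encode(const std::vector<uint64_t>& offsets) {
    std::vector<uint64_t> deltas(offsets.size());
    uint64_t prev = 0;
    for (size_t i = 0; i < offsets.size(); ++i) {
        deltas[i] = offsets[i] - prev;
        prev = offsets[i];
    }
    return deltas;
}

// Decoding is a linear prefix-sum pass to recover the absolute offsets.
std::vector<uint64_t> delta_decode(const std::vector<uint64_t>& deltas) {
    std::vector<uint64_t> offsets(deltas.size());
    uint64_t running = 0;
    for (size_t i = 0; i < deltas.size(); ++i) {
        running += deltas[i];
        offsets[i] = running;
    }
    return offsets;
}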

JND’s disadvantage is clear in the above graphs: while it’s smaller uncompressed, it does not compress as well as JSON or MsgPack. (Except in cases where its uncompressed size is dramatically smaller because of huge amounts of string or object shape reuse.)

Where would something like JND be useful? JND’s big advantage is that it can be read directly without allocating or parsing into another data structure. In addition, the uncompressed data is relatively compact. So I could imagine it being used if IO is relatively cheap but tight memory bounds are required.

Another advantage of JND is a bit less obvious. In my experience using sajson from other languages, the parse itself is cheaper than converting the parsed AST into values in the host language. For example, constructing Swift String objects from binary UTF-8 data was an order of magnitude slower than the parse itself. In JND, every unique string would naturally be translated into host language String objects once.

If you’d like to play with my experiments, they’re on GitHub at https://github.com/chadaustin/jnd.

Future Work?

It was more work than I could justify for this experiment, but I’d love to see how Thrift or Protocol Buffers compare to JND. JND is a self-describing format, so it’s going to lose on small messages, but when data sizes get big I wouldn’t be surprised if it at least ties protobufs.

Update: Forgot to mention decode times. I didn’t have time to set up a proper benchmark with high-performance implementations of each of these formats (rather than whatever pip gave me), but I think we can safely assume CBOR, MsgPack, BSON, and UBJSON all have similar parse times, since the main structure of the decoder loop would be the same. The biggest question would be: how does that compare to JSON? And how does JND compare to both?

Writing a Really, Really Fast JSON Parser

Several holidays ago, I got a bee in my bonnet and wrote a fast JSON parser whose parsed AST fits in a single contiguous block of memory. The code was small and simple and the performance was generally on-par with RapidJSON, so I stopped and moved on with my life.

Well, at the end of 2016, Rich Geldreich shared that he’d also written a high-performance JSON parser.

I dropped his pjson into my benchmarking harness and discovered it was twice as fast as both RapidJSON and sajson! Coupled with some major JSON parsing performance problems in a project at work, I was nerd-sniped again.

I started reading his code but nothing looked particularly extraordinary. In fact, several things looked like they ought to be slower than sajson… Oh wait, what’s this?

do {
    c = pStr[0]; if (globals::s_parse_flags[c] & 1)
        { ++pStr; break; }
    c = pStr[1]; if (globals::s_parse_flags[c] & 1)
        { pStr += 2; break; }
    c = pStr[2]; if (globals::s_parse_flags[c] & 1)
        { pStr += 3; break; }
    c = pStr[3];
    pStr += 4;
} while (!(globals::s_parse_flags[c] & 1));

Why is unrolling that loop a win? How is the parse flags look-up (even L1 is 3-4 cycle latency) better than some independent comparisons?

Guess it was time to do some instruction dependency analysis!

Fast String Parsing

The problem at hand: given a pointer to the first byte after the opening quote of a string, find the first special byte, where special bytes are “, \, <0x20, or >0x7f.

The loop:

for (;;) {
    if (*p < 0x20 || *p > 127 || *p == '"' || *p == '\\') {
        break;
    }
    ++p;
}

roughly breaks down into:

begin_loop:
    load c, [p]
    compare c, 0x20
    jump-if-less exit_loop
    compare c, 0x7f
    jump-if-greater exit_loop
    compare c, '"'
    jump-if-equal exit_loop
    compare c, '\\'
    jump-if-equal exit_loop
    increment p
    jump begin_loop
exit_loop:

How do we think about the performance of this loop? Remember that mainstream CPUs are out-of-order and can execute four (or more!) instructions in parallel. So an approximate mental model for reasoning about CPU performance is that they decode multiple instructions per cycle, then stick them all into an execution engine that can execute multiple instructions per cycle. Instructions will execute simultaneously if they’re independent of each other. If an instruction depends on the result of another, it must wait N cycles, where N is the first instruction’s latency.

So, assuming all branches are correctly predicted (branch predictors operate in the frontend and don’t count towards dependency chains), let’s do a rough estimate of the cost of the loop above:

load c, [p] # depends on value of p
# assuming L1, ~4 cycle latency
# all four comparisons can be done in parallel
# assumption: jumps are all predicted
increment p
# jump to beginning

increment p is the only instruction on the critical path – it carries a dependency across iterations of the loop. Is there enough work to satisfy the execution resources during the increment? Well, the comparisons are independent so they can all be done in parallel, and there are four of them, so we can probably keep the execution units busy. But it does mean that, at most, we can only check one byte per cycle. In reality, we need to issue the load and increment too, so we’re looking at a loop overhead of about 2-3 cycles per byte.

Now let’s look more closely at Rich’s code.

Replacing the comparisons with a lookup table increases comparison latency (an L1 load is 3-4 cycles) but also increases comparison throughput (multiple loads can be issued per cycle).

So let’s spell out the instructions for Rich’s code (reordered for clarity):

begin_loop:
    load c0, [p]
    load c1, [p+1]
    load c2, [p+2]
    load c3, [p+3]
    load t0, [is_plain_string+c0]
    load t1, [is_plain_string+c1]
    load t2, [is_plain_string+c2]
    load t3, [is_plain_string+c3]
    compare t0, 0
    jump-if-equal exit_loop
    compare t1, 0
    jump-if-equal increment_1_and_exit_loop
    compare t2, 0
    jump-if-equal increment_2_and_exit_loop
    compare t3, 0
    jump-if-equal increment_3_and_exit_loop
    add p, 4
    jump begin_loop
increment_1_and_exit_loop:
    add p, 1
    jump exit_loop
increment_2_and_exit_loop:
    add p, 2
    jump exit_loop
increment_3_and_exit_loop:
    add p, 3
exit_loop:
  • Load four bytes per iteration [not on critical execution path]
  • Load four LUT entries [not on critical execution path]
  • Issue four comparisons per cycle [not on critical execution path]
  • add p, 4

Again, the critical path is only add p, 4, but we still need to issue the other instructions. The difference is that now all of the loads happen in parallel and the comparisons for 4 bytes happen in parallel too, rather than doing four comparisons per byte.

It’s still hard to say if this is a win on paper — Haswell can only issue 2 loads per cycle, though the loads can overlap with the comparisons from previous bytes. However, we still have to issue all of these instructions. So maybe we’re looking at something closer to 2 cycles per byte?

Empirically, at least on my Broadwell, replacing four comparisons with a LUT was definitely a win. 55cc213
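For reference, the non-unrolled version of the LUT loop looks something like this (the table name is mine, not pjson’s):

#include <cstdint>

// 256-entry character-class table: 1 for bytes that terminate a plain string
// run ('"', '\\', control characters, and bytes >= 0x80), 0 otherwise.
static uint8_t s_string_special[256];

static void init_string_special() {
    for (int c = 0; c < 256; ++c) {
        s_string_special[c] = (c < 0x20 || c > 0x7f || c == '"' || c == '\\') ? 1 : 0;
    }
}

// Advance p to the first special byte: one table load per byte, no unrolling.
const unsigned char* scan_plain_string(const unsigned char* p) {
    while (!s_string_special[*p]) {
        ++p;
    }
    return p;
}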

But is unrolling the loop necessary? If I take out the unrolling but leave the LUT, clang gets slower but gcc stays the same. I checked – neither gcc nor clang does any automatic unrolling here. What’s going on? Branch predictor effects? Also, the Intel Architecture Code Analyzer says that the unrolled and non-unrolled loops are both frontend-bound and have the same loop throughput.

I’m not yet convinced that a tight dependency chain is why unrolling is a win. More investigation is required here. But the important thing to keep in mind is to pay attention to the critical path latency through a loop body as well as the number of independent operations that can soak up spare execution bandwidth.

Lead Bullets

After playing around with LUTs and unrolling, I started implementing a bunch of small optimizations that I’d long known were available but didn’t consider to be that important.

Well, as it often turns out in performance projects, a sequence of 2% gains adds up to significant wins over time! If you’ve read Ben Horowitz’s lead bullets story or how SQLite achieved a 50% speed up, this will sound familiar.

Here are the optimizations that mattered:

  • Moved the input and output pointers into locals instead of members, which helps VC++ and Clang understand that they can be placed in registers. (gcc was already performing that optimization.) 71078d3 4a07c77
  • Replaced the entire parse loop with a goto state machine. Surprisingly, not only was this a performance win, but it actually made the code clearer. 3828769 05b3ec8 c02cb31
  • Changed an 8-entry enum to a uint8_t instead of the pointer-sized value it was defaulting to. 799c55f
  • Duplicated a bit of code to avoid needing to call a function pointer. 44ec0df
  • Tiny microoptimizations like avoiding branches and unnecessary data flow. 235d330 002ba31 193b183 c23fa23
  • Stored the tag bits at the bottom of the element index instead of the top, which avoids a shift on 64-bit. e7f2351

Static Branch Prediction

I also spent a bit of time on static branch prediction. It’s a questionable optimization; in theory, you should just use profile-guided optimization (PGO) and/or rely on the CPU’s branch predictors, but in practice few people actually bother to set up PGO. Plus, even though the CPU will quickly learn which branches are taken, the compiler doesn’t know. Thus, by using static branch prediction hints, the compiler can line up the instructions so the hot path flows in a straight line and all the cold paths are off somewhere else, sometimes saving register motion in the hot path.
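As a sketch, such hints are usually wrapped in macros along these lines (sajson’s actual definitions may differ slightly; see the real uses linked below):

// Typical static branch prediction hint macros. On GCC and Clang these map
// to __builtin_expect; on other compilers they compile away to nothing.
#if defined(__GNUC__) || defined(__clang__)
#define SAJSON_LIKELY(x)   (__builtin_expect(!!(x), 1))
#define SAJSON_UNLIKELY(x) (__builtin_expect(!!(x), 0))
#else
#define SAJSON_LIKELY(x)   (x)
#define SAJSON_UNLIKELY(x) (x)
#endif

// Usage: keep the hot path as the fall-through and push cold error paths
// out of line, e.g. if (SAJSON_UNLIKELY(p == end)) { /* handle error */ }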

For some examples, look at these uses of SAJSON_LIKELY and SAJSON_UNLIKELY.

I can’t recommend spending a lot of time on annotating your branches, but it does show up as a small but measurable win in benchmarks, especially on smaller and simpler CPUs.

Things I Didn’t Do

  • Unlike Rich’s parser and RapidJSON, I chose not to optimize whitespace skipping. Why? Not worth the code size increase – the first thing someone who cares about JSON parsing performance does is minify the JSON. b05082b
  • I haven’t yet optimized number parsing. Both RapidJSON and Rich’s parser are measurably faster there, and it would be straightforward to apply the techniques. But the documents I regularly deal with are strings and objects and rarely contain numbers.

Benchmark Results

I tested on four devices: my desktop (high-clocked Haswell), laptop (low-clocked Broadwell), iPhone SE, and Atom-based home server.

Desktop

Dell XPS 13

iPhone SE

Atom D2700

The charts aren’t very attractive, but if you look closely, you’ll notice a few things:

  • Parsing JSON on modern CPUs can be done at a rate of hundreds of megabytes per second.
  • gcc does a much better job with RapidJSON than either clang or MSVC.
  • JSON parsing benefits from x64 – it’s not a memory-bound or cache-bound problem, and the extra registers help a lot.
  • The iPhone SE is not much slower than my laptop’s Broadwell. :)

The Remainder of the Delta

As you can see in the charts above, sajson is often faster than RapidJSON, but still not as fast as Rich’s pjson. Here are the reasons why:

  1. sajson does not require the input buffer to be null-terminated, meaning that every pointer increment requires a comparison with the end of the buffer (to detect eof) in addition to the byte comparison itself. I’ve thought about changing this policy (or even adding a compile-time option) but I really like the idea that I can take a buffer straight from a disk mmap or database and pass it straight to sajson without copying. On the other hand, I measured about a 10% performance boost from avoiding length checks.

  2. sajson sorts object keys so that object lookup takes logarithmic rather than linear time. The other high-performance parsers have linear-time object lookup by key. This is an optimization that, while not necessary for most use cases, avoids any accidental worst-case quadratic-time usage.

  3. sajson’s contiguous AST design requires, for every array, shifting N words in memory where N is the number of elements in the array. The alternative would be to use growable arrays in the AST (requiring that they get shifted as the array is realloc’d). Hard to say how much this matters.

Aside: The “You Can’t Beat the Compiler” Myth

There’s this persistent idea that compilers are smart and will magically turn your code into something that maps efficiently to the machine. That’s only approximately true. It’s really hard for compilers to prove the safety (or profitability) of certain transformations and, glancing through the produced code for sajson, I frequently noticed just plain dumb code generation. Like, instead of writing a constant into a known memory address, it would load a constant into a register, then jump to another location to OR it with another constant, and then jump somewhere else to write it to memory.

Also, just look at the charts – there are sometimes significant differences between the compilers on the same code. Compilers aren’t perfect and they appreciate all the help you can give!

Benchmarking

Measuring the effect of microoptimizations on modern computers can be tricky. With dynamic clock frequencies and all of today’s background tasks, the easiest way to get stable performance numbers is to take the fastest time from all the runs. Run your function a couple thousand times and record the minimum. Even tiny optimizations will show up this way.

(If a library that uses statistical techniques to automatically achieve precise results is readily available, consider using that.)

I also had my JSON benchmarks report MB/s, which normalizes the differences between test files of different sizes. It also helps understand parser startup cost (when testing small files) and the differences in parse speed between files with lots of numbers, strings, huge objects, etc.
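A minimal version of that measurement loop might look like this (a sketch; names are illustrative):

#include <chrono>
#include <cstddef>

// Run the workload many times and report the fastest run in MB/s. Taking the
// minimum filters out clock ramp-up and background-task noise.
template <typename F>
double best_mbps(F&& parse_once, size_t input_bytes, int runs = 2000) {
    using clock = std::chrono::steady_clock;
    double best_seconds = 1e300;
    for (int i = 0; i < runs; ++i) {
        auto start = clock::now();
        parse_once();
        std::chrono::duration<double> elapsed = clock::now() - start;
        if (elapsed.count() < best_seconds) best_seconds = elapsed.count();
    }
    return (input_bytes / best_seconds) / (1024.0 * 1024.0);
}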

Swift Bindings

Dropbox (primarily @aeidelson) contributed high-quality Swift bindings for sajson. The challenge in writing these bindings was finding a way to efficiently expose the sajson parse tree to Swift. It turns out that constructing Swift arrays and objects is REALLY expensive; we once benchmarked 10 ms in sajson’s parser and 400 ms of Swift data structure construction.

Fortunately, Swift has decent APIs for unsafely manipulating pointers and memory, and with those we implemented the ability to access sajson AST nodes through a close-to-natural ValueReader type.

Converting sajson strings (e.g. ranges of UTF-8 memory) into Swift Strings is still expensive, but Swift 4 might improve the situation.

An iOS team at Dropbox replaced JSONSerialization with sajson and cut their initial load times by two thirds!

Summary

I used to think JSON parsing was not something you ever wanted in your application’s critical path. It’s certainly not the kind of algorithm that modern computers love (read byte, branch, read byte, branch). That said, this problem has been beaten to death. We now have multiple parsers that can parse data at hundreds of megabytes per second — around the same rate as SHA-256! If we relaxed some of the constraints on sajson, it could even go faster.

So how fast was Rich’s parser, after all? When measured with Clang and MSVC, quite a lot faster, actually. But when measured with GCC, RapidJSON, sajson, and pjson were (and remain) very similar. Many of the differences come down to naive, manually-inlined code, which we know the compiler can reliably convert to a fast sequence of instructions. It’s annoying to eschew abstractions and even duplicate logic across several locations, but, in the real world, people use real compilers. For critical infrastructure, it’s worth taking on some manual optimization so your users don’t have to care.

Update

Arseny Kapoulkine pointed out an awesome trick to me: replace the last byte of the buffer with 0, which lets you avoid comparing against the end of the buffer. When you come across a 0 byte, if it’s the last one, treat it as if it’s the byte that got replaced. This would be a material performance win in sajson _and_ avoid the need for specifying null-terminated buffers.
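Here is a sketch of the trick — not something sajson currently does, and it assumes the input buffer is mutable and at least one byte long:

#include <cstddef>

// Overwrite the final byte of the input with 0 so scanning loops can stop on
// a 0 byte instead of checking p != end on every increment. When a 0 is hit,
// check whether it is the sentinel position; if so, process the saved
// original byte and stop. Any other 0 byte is invalid JSON anyway.
struct SentinelInput {
    char* data;
    size_t length;
    char saved_last;   // the byte that the sentinel replaced
};

SentinelInput install_sentinel(char* data, size_t length) {
    SentinelInput in{data, length, data[length - 1]};
    in.data[length - 1] = '\0';
    return in;
}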

Announcing BufferBuilder: Encode JSON in Haskell 2-5x faster than Aeson

Conclusion

This tale is a little long and detailed, so for those of you who want immediate payoff…

Andy and I wrote a Haskell library called Data.BufferBuilder (including Data.BufferBuilder.Utf8 and Data.BufferBuilder.Json) which makes it easy to safely and efficiently encode Haskell data structures into JSON. In our benchmarks, using Data.BufferBuilder.Json to encode JSON is 4-5x as fast as using Aeson.

Even if you’re already using Aeson, you can benefit somewhat from BufferBuilder’s improved encoding performance. The buffer-builder-aeson package adds a ToJson instance for Aeson’s Value type, which our benchmarks show is 50% to 100% faster than Aeson’s built-in encoder. All you need to do is call Data.BufferBuilder.Json.encodeJson instead of Data.Aeson.encode!

Why did we build BufferBuilder?

Some of IMVU’s backend services are written in Haskell. While Haskell is incredible for many use cases, we ran into an unexpected bottleneck: JSON encoding. Our service response structure produces quite a lot of JSON, and much of that JSON contains URLs encoded into JSON strings.

Amazingly, URL and JSON encoding was showing up as a significant cost center when generating JSON responses. Some services spent over a second encoding JSON!

When faced with a performance problem, my first instinct is to either pound the problem into the ground or unmake the problem so the code isn’t necessary in the first place.

So let’s look at the problem holistically:

  • A JSON response is represented in memory as a collection of varied, in-memory data structures. The response happens to contain many URLs — sometimes more than a hundred.
  • URLs are represented by a data structure consisting of the protocol, hostname, path segments, query string, and so on.
  • Each URL becomes a JSON string.
  • Each in-memory data structure is converted to a JSON object whose properties depend on the type of the corresponding data structure.

Using Aeson to encode all of this results in the following steps:

  • ToJSON instances convert Haskell data types to an AST of Aeson Values.
  • The keys of an Aeson object are Text values. In memory, Text is encoded in UTF-16. Thus, URLs must be translated from their in-memory representation (ours is ASCII) into UTF-16 before they fit into the Aeson AST.
  • Then, the entity bodies are converted into JSON objects, where the keys are known at compile-time, but must be encoded as UTF-16.
  • The entire AST is encoded into a Lazy Text Builder.
  • Then the Lazy Text is encoded into a Lazy ByteString containing UTF-8.
  • Then, to generate the response body, the Lazy ByteString is concatenated into a single strict ByteString.

That’s a lot of steps! To summarize:

(URLs and Custom Data Types) -> Aeson AST -> Lazy Text -> Lazy UTF-8 ByteString -> Strict ByteString

The Design of BufferBuilder

This is where Andy and I sat down to create an API to let us cleanly express JSON encoding without sacrificing type safety OR performance.

We know that a fast, if not the fastest, way to build up a buffer of bytes is to allocate a chunk of memory, stream writes to it, and either chunk or realloc() as needed. Obviously, this kind of code can be trivially expressed in C:

void buffer_append(buffer* b, const char* s, size_t length) {
    if (!b->has_room(length)) {
        b->grow(length);
    }
    memcpy(b->data + b->size, s, length);
    b->size += length;  /* advance the write cursor */
}

Because I’d been told that bouncing between Haskell and C with the foreign function interface can be slow, my first approach was to attempt to build a Haskell monad that grabbed the RealWorld token out of IO (IO a is basically a newtype around RealWorld -> (RealWorld, a)), augmented it with some extra “local variables” like the output ptr, capacity, and current size, and manually implemented allocation and memory writes with GHC.Prim APIs. GHC did not like this at all. The generated code ran 20 times slower than naive usage of Data.ByteString.Builder. Nonetheless, it was an interesting technique, so maybe I’ll write about it another time.

Surely it was possible to do better. So I tried the foreign function interface after all.

I wrote a tiny C API that allocated the underlying growable buffer. It provided APIs for appending bytes, buffers, UTF-16-to-UTF-8 transcoding, and so on. These FFI calls can only happen within IO actions, but building buffers is fundamentally a pure operation, and the provided Haskell API should be effectively pure. The solution is to offer a restricted state monad like ST which limits the operations within the monad to safe buffer-building operations.

This approach was by far the fastest of any that I tried. In fact, in a sequence of appendBS operations, if the arguments are known-strict ByteStrings, GHC will compile the appendBS sequence directly into a series of cheap C function calls. For example, the following code:

data BSTriple = BSTriple !ByteString !ByteString !ByteString

writeBSTriple :: BSTriple -> BufferBuilder ()
writeBSTriple !(BSTriple a b c) = do
    appendBS a
    appendBS b
    appendBS c

compiles into something like:


movq %rbx,%rdi
movq 120(%rsp),%rax
movq %rax,%rsi
movq 96(%rsp),%rax
movq %rax,%rdx
movq 112(%rsp),%rax
addq %rax,%rdx
subq $8,%rsp
movl $0,%eax
call bw_append_bs
addq $8,%rsp
movq %rbx,%rdi
movq 152(%rsp),%rax
movq %rax,%rsi
movq 128(%rsp),%rax
movq %rax,%rdx
movq 144(%rsp),%rax
addq %rax,%rdx
subq $8,%rsp
movl $0,%eax
call bw_append_bs
addq $8,%rsp
movq %rbx,%rdi
movq 216(%rsp),%rax
movq %rax,%rsi
movq 160(%rsp),%rax
movq %rax,%rdx
movq 208(%rsp),%rax
addq %rax,%rdx
subq $8,%rsp
movl $0,%eax
call bw_append_bs
addq $8,%rsp

Obviously GHC’s code generator has some room to improve, but the main thrust is exactly right. An equivalent C++ API would generate much the same kind of code, modulo the poor instruction and register selection, which overall doesn’t matter too much here.

In almost all situations, Haskell isn’t going to be as fast as straightforward C, but with some work and appropriate use of FFI, it’s possible to come close.

Data.BufferBuilder.Utf8

Once we had an API to safely and efficiently build up buffers of bytes, we wanted to build safe APIs on top for constructing valid UTF-8 buffers and valid JSON.

Utf8Builder is a newtype around BufferBuilder with a restricted API. If you only call safe functions in Data.BufferBuilder.Utf8, the result is guaranteed to be valid UTF-8. Unsafe functions are provided for when you know precisely what you’re doing.

Data.BufferBuilder.Json

Data.BufferBuilder.Json is built on top of Data.BufferBuilder.Utf8. Data.BufferBuilder.Json’s Value type is a newtype around Utf8Builder, meaning there’s no Aeson-style AST. Each Value simply knows how to write itself into an output buffer. Just like how the safe Utf8Builder functions guarantee the output is legal UTF-8, the safe JsonBuilder functions guarantee (almost) that the output is a legal JSON document. (There are a couple caveats, see the documentation for details.)

I suspect Data.BufferBuilder.Json is approaching the limit of how fast a JSON encoder can be. And thanks to the beauty of Haskell, it’s convenient and safe!

If you’re using Aeson and encoding performance matters to you, give BufferBuilder a shot!

Benchmarking JSON Parsing: Emscripten vs. Native

This post concludes my obsession with JSON parsing. In fact, the entire reason I wrote a JSON parser was to show these graphs. I wanted to see whether I could write a JSON parser faster than any other when run in Emscripten. As vjson is typically faster, I did not succeed unless I requalify my goal as writing the fastest-in-Emscripten JSON parser with a useful parse tree.

This benchmark’s code is on GitHub. I encourage you to reproduce my results and search for flaws.

All benchmarks were run on a 2010 MacBook Pro, 2.53 GHz Core i5, 8 GB 1067 MHz DDR3.

Native vs. Emscripten

First, let’s compare native JSON parsing performance (clang 3.1, -O2) with both stable and nightly versions of Firefox and Chrome.

Two things are immediately clear from this graph. First, native code is still 5-10x faster than Emscripten/JavaScript. Second, yajl and jansson are dramatically slower than rapidjson, sajson, and vjson. Native yajl and jansson are even slower than Emscripten’d sajson. Henceforth I will not include them.

Looking closely at the browser results, a third conclusion is clear. Firefox runs Emscripten’d code much faster than Chrome.

Finally, sajson consistently performs better than rapidjson in my Emscripten tests. And vjson always parses the fastest. I believe this is because Emscripten and browser JITs punish larger code sizes.

The previous graph only shows parse rates by browser and parser for a single file. Next let’s look at parse rates by file.

Yep, native code consistently wins. At this point I want to dig into differences between the browsers, so I will show the same graph but without native code.

Firefox vs. Chrome

Not only is Firefox consistently faster than Chrome, it’s faster by a factor of 2-5x!

Finally, here is the same graph but normalized against Firefox 18.

If I were a Firefox JS developer, I’d be feeling pretty proud right now, assuming this experiment holds up to scrutiny. Even so, these results match my experience with Bullet/Emscripten in Chrome: Chrome takes a very long time to stabilize its JIT, and I’ve even seen it fail to stabilize, constantly deopting and then reoptimizing code. In contrast, Firefox may take longer to JIT up front, but performance is smooth afterwards.

Further work is necessary to test the hypothesis that code size is the biggest predictor of Emscripten performance.

Preemptive answers to predicted questions follow:

Well, duh. You shouldn’t use an Emscripten’d JSON parser. You should use the browser’s built-in JSON.parse function.

This isn’t about parsing JSON. This is about seeing whether the web can compete with native code under common workloads. After all, Mozillians have been claiming JavaScript is or will be fast enough to replace native apps. If parsing JSON through a blessed browser API is the only way to do it quickly, then developers are disempowered. What if a new format comes along? Must developers wait on the browser vendors? The web has to be better than that.

Shouldn’t you have compiled the Emscripten code with -fno-exceptions?

Yes. Oops. That said, after generating the graphs, I did recompile with -fno-exceptions and it made no difference to the benchmark results in either Chrome or Firefox.

JSON Parser Benchmarking

With the caveat that each parser provides different functionality and access to the resulting parse tree, I benchmarked sajson, rapidjson, vjson, YAJL, and Jansson.  My methodology was simple: given large-ish real-world JSON files, parse them as many times as possible in one second.  To include the cost of reading the parse tree in the benchmark, I then iterated over the entire document and summed the number of strings, numbers, object values, array elements, etc.

The documents are:

  • apache_builds.json: Data from the Apache Jenkins installation.  Mostly it’s an array of three-element objects, with string keys and string values.
  • github_events.json: JSON data from GitHub’s events feature.  Nested objects several levels deep, mostly strings, but also contains booleans, nulls, and numbers.
  • instruments.json: A very long array of many-key objects.
  • mesh.json: 3D mesh data. Almost entirely consists of floating point numbers.
  • update-center.json: Also from Jenkins though I’m not sure where I found it.

apache_builds.json, github_events.json, and instruments.json are pretty-printed with a great deal of interelement whitespace.

Now for the results. The Y axis is parses per second. Thus, higher is faster.

Core 2 Duo E6850, Windows 7, Visual Studio 2010, x86

Core 2 Duo E6850, Windows 7, Visual Studio 2010, AMD64

Atom D2700, Ubuntu 12.04, clang 3.0, AMD64

Raspberry Pi

Conclusions

sajson compares favorably to rapidjson and vjson, all of which stomp the C-based YAJL and Jansson parsers. 64-bit builds are noticeably faster: presumably because the additional registers are helpful. Raspberry Pis are slow. :)

Comparing JSON Parse Trees

I will soon post performance benchmarks of sajson, rapidjson, vjson, YAJL, and Jansson.

However, before talking about performance, I want to compare the parse trees generated by each library.

sajson

  • Differentiates between integers and doubles
  • Array lengths are known.
  • Arrays are random-access.
  • Object lengths are known.
  • Object keys are sorted.  Thus, random access of object elements is O(lg N).
  • Stored contiguously in memory.

rapidjson

  • At its root, rapidjson is a SAX-style parser.  However, rapidjson does provide a default DOM implementation.
  • Array and Object length are not directly accessible in the API but are accessible via End() - Begin() or MemberEnd() - MemberBegin().
  • Arrays support random access.
  • Object lookup is O(N).
  • Differentiates between integers, unsigned integers, 64-bit integers, 64-bit unsigned integers, and doubles.
  • Parse tree is mutable.

vjson

  • Namespaces its symbols under the generic “json” prefix.
  • Differentiates between integers and floats.  No support for doubles.
  • Array and Object lengths are not stored.  Instead a linked list must be traversed to calculate the length.
  • Random array access is O(N).
  • Object element lookup is O(N), by walking a linked list.
  • Very simple interface.

YAJL

  • Differentiates between integers and doubles.
  • Array and Object lengths are available.
  • Object element lookup is O(N).
  • Parse tree is mutable.

Jansson

  • Differentiates between integers and doubles.
  • Array and Object lengths are available.
  • Array access is O(1).
  • Object element access is O(1) via a hash table.
  • The parse tree is mutable.

sajson 1.0 is ready to go

I’ve fixed all the known bugs and error cases in sajson. If you end up using it, I’d love to hear from you.

I’ll capture some scattered thoughts before my JSON parsing obsession dies down.

There’s not much opportunity to take advantage of a modern CPU’s superscalar execution pipeline when parsing JSON. Parsing JSON generally involves reading a character and branching. Searching for a string’s ending quotes could be implemented with vector instructions (see strchr in Agner Fog’s optimization manual or this tutorial) but the ROI doesn’t seem high to me.
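For what it’s worth, here is roughly what such a vectorized scan could look like with SSE2 — a sketch only; it assumes reading 16 bytes past the current position is always safe (which a real parser would have to guarantee) and uses a GCC/Clang builtin:

#include <emmintrin.h>   // SSE2

// Find the first "special" byte ('"', '\\', < 0x20, or > 0x7f), 16 bytes at
// a time. The signed compare against 0x20 conveniently catches both control
// characters and bytes >= 0x80 (which are negative as signed chars).
const char* find_special_sse2(const char* p) {
    const __m128i quote     = _mm_set1_epi8('"');
    const __m128i backslash = _mm_set1_epi8('\\');
    const __m128i space     = _mm_set1_epi8(0x20);
    for (;;) {
        __m128i chunk = _mm_loadu_si128(reinterpret_cast<const __m128i*>(p));
        __m128i special = _mm_or_si128(
            _mm_or_si128(_mm_cmpeq_epi8(chunk, quote),
                         _mm_cmpeq_epi8(chunk, backslash)),
            _mm_cmplt_epi8(chunk, space));
        int mask = _mm_movemask_epi8(special);
        if (mask != 0) {
            return p + __builtin_ctz(mask);   // index of the first special byte
        }
        p += 16;
    }
}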

Could we use multiple threads? Maybe, if we split the input text into chunks and had each thread discover the next { or [ and optimistically begin parsing future elements. Almost certainly not worth it.

Either way, sajson is approximately done. I will post benchmarks soon.

sajson: Building the Parse Tree

The least trivial algorithm for building sajson’s parse tree is
allocating (or should I say, reserving?) the space in the parse tree
for an array’s or object’s element list without knowing the length in
advance.

Let’s consider an eleven-character JSON text. Imagine we’ve parsed
three characters: [[[. At this point we know two things:
1) we can fit the parse tree in eleven words and 2) there are at least
three arrays.

We don’t know the length of the arrays, so we cannot begin writing the
parse tree records yet.

The file could be [[[0,0,0]]] or [[[[[0]]]]] or [[[0,[0]]]] all of
which have quite different parse tree representations.

My first attempt involved parsing in two passes. The first pass
scanned the JSON text for arrays and objects and temporarily stored
their lengths into safe locations in the parse tree array. The
second pass used that information to correctly lay out the parse tree.

Parsing in two passes worked but had two major disadvantages. First, it was
slow. The scanning phase was simpler than parsing, but not THAT
much simpler. Since scanning, like parsing, involves reading one
byte and branching on its value, parsing in two phases was
effectively half the speed. Second, the scanning phase duplicated a fair amount
of parsing code, making it harder to reason about and maintain.

Mike Ralston and I worked out a simpler approach at the cost
of two memory writes per array/object element record.

The gist is: given a parse tree array of size N, start one pointer at
the front and one at the back. Call the one at the front temp, for
temporary storage, and the one at the back out, for the actual parse
tree data.

When encountering the beginning of an array or object, remember the
current temp pointer.

When encountering a scalar element (numbers, strings, etc.), place its
payload in the parse tree at out and its location in temp.

When encountering the end of an array or object, compare the current
temp pointer to its value when beginning the array or object. The
difference is the length of the array or object. Now that we know the
length and each element reference, we can move the element references
out of temp and into out.

It may help to work through a simple example:

[[0],0]

The JSON text is 7 characters. Thus we have 7 words of parse tree to
work with:

[ ][ ][ ][ ][ ][ ][ ]
 ^                 ^
temp              out

Encountering the first [, we store the current value of temp (on the C stack).

Encountering the second [, we store the current value of temp (on the
C stack).

At this point, nothing has been written to the parse tree.

Then we see the first zero and place its payload at out and its
type+location reference at temp.

[<Integer>:6][ ][ ][ ][ ][ ][0]
              ^           ^
             temp        out

Encountering the first ], we calculate the array length, 1, and move
the references from temp to out. We write
the new array’s location to temp, giving:

[<Array>:4][ ][ ][ ][1][<Integer>:2][0]
            ^     ^
           temp  out

We were careful to adjust the element references so they remain
relative to the array record.

We then encounter another zero and place its payload in out and
location in temp:

[<Array>:4][<Integer>:3][ ][0][1][<Integer>:2][0]
                         ^
                     temp out

Closing the outer array, we calculate its length (2), again move
the references from temp to out, and write the final array record.

[2][<Array>:4][<Integer>:3][0][1][<Integer>:2][0]
 ^
out

out now gives us the root of the parse tree.
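Here is a compressed sketch of the whole scheme, handling only arrays
and single-digit integers and using an explicit vector as the
recursion stack rather than the C stack. It illustrates the algorithm
described above; it is not sajson's actual code:

#include <cstddef>
#include <cstring>
#include <vector>

// Each parse-tree word is either a raw payload or a tagged reference.
enum Tag : size_t { TAG_INTEGER = 1, TAG_ARRAY = 2 };
static size_t make_ref(Tag tag, size_t offset) { return (offset << 3) | tag; }
static Tag    ref_tag(size_t ref)              { return Tag(ref & 7); }
static size_t ref_offset(size_t ref)           { return ref >> 3; }

// Parses `text` (e.g. "[[0],0]") into `tree` and returns the index of the
// root array record. No error handling; well-formed input is assumed.
size_t parse_arrays_and_digits(const char* text, std::vector<size_t>& tree) {
    size_t n = std::strlen(text);
    tree.assign(n, 0);
    size_t temp = 0;          // next free slot in the temp region (grows up)
    size_t out = n - 1;       // next free slot in the out region (grows down)
    std::vector<size_t> open; // saved temp positions of unclosed arrays

    for (size_t i = 0; i < n; ++i) {
        char c = text[i];
        if (c == '[') {
            open.push_back(temp);                      // where this array's refs start
        } else if (c >= '0' && c <= '9') {
            tree[out] = size_t(c - '0');               // payload goes into out
            tree[temp++] = make_ref(TAG_INTEGER, out); // reference goes into temp
            --out;
        } else if (c == ']') {
            size_t start = open.back(); open.pop_back();
            size_t length = temp - start;              // element count
            size_t record = out - length;              // record occupies [record, record + length]
            // Move references from temp into out, back to front, rewriting
            // them to be relative to the array record.
            for (size_t j = length; j > 0; --j) {
                size_t ref = tree[start + j - 1];
                tree[record + j] = make_ref(ref_tag(ref), ref_offset(ref) - record);
            }
            tree[record] = length;
            if (open.empty()) {
                return record;                         // the root array's record
            }
            out = record - 1;
            temp = start;
            tree[temp++] = make_ref(TAG_ARRAY, record);
        }
        // commas need no work in this sketch
    }
    return 0; // unreachable for well-formed input
}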

Eliminating the Recursion

You may have noticed the previous implementation stores the start
address of each array or object on the C stack. This is liable to
overflow in the case of a JSON file consisting of N [s followed by N
]s for some large N. The JSON standard allows parsers to limit the
maximum depth they handle, but we can do better.

It’s possible to eliminate the recursion by storing the value of
temp into the temp side of the parse tree at the start of every
array or object. When reaching the end of an array or object, its
length is calculated, the record is written into out, and the
previous value of temp is restored. If the previous value of temp
is a special root marker, parsing is complete.

Does the parse tree, even during construction, have room for these
outer references?

First, let’s examine the same example but where we store a reference
to the outer ‘in-construction’ object in temp:

# seen [
[<Array>:<ROOT>][ ][ ][ ][ ][ ][ ]

# seen [[
[<Array>:<ROOT>][<Array>:0][ ][ ][ ][ ][ ]

# seen [[0
[<Array>:<ROOT>][<Array>:0][<Integer>:6][ ][ ][ ][0]

# seen [[0],
[<Array>:<ROOT>][<Array>:4][ ][ ][1][<Integer>:2][0]

# seen [[0],0
[<Array>:<ROOT>][<Array>:4][<Integer>:3][0][1][<Integer>:2][0]

# seen [[0],0]
[2][<Array>:4][<Integer>:3][0][1][<Integer>:2][0]

In this example, there is room. But will there always be?

An easy conceptualization is that the final size of an array
record will be 1+N words, including its length. The temporary array
storage is also 1+N, where we don’t yet know its length but we do have
a reference to the enclosing array or object. Thus, we have room for
outer references in the parse tree.

Actual Code

The result
is an implementation whose parsing loop is almost entirely inlined,
and on architectures with a reasonable number of registers (even
AMD64), very little spills to the stack.

sajson is available
under the MIT license, but at the time of this writing, it is
primarily a proof of concept. The API is not stable and it does not
support string escapes. It also needs a security review to guarantee
that no malformed inputs crash.

That said, if you give sajson a try, let me know how it goes.

sajson: Why the Parse Tree is Big Enough

Last week, I described a JSON parse tree data structure that, worst
case, requires N words for N characters of JSON text. I want to
explain the algorithm used to generate said parse tree, but first I
will describe the parse tree data structure in detail.
Simultaneously, I will show that the parse tree will fit in the worst
case.

Given that value types are stored in 3-bit tags, it’s intuitive that N
characters of JSON data requires N words in the parse tree. Let’s
consider the parse tree representation of each JSON type individually:

Strings

Strings are represented in the parse tree by two pointers: one to the
beginning of the string and one to the end. In the source text, these
correspond to the string’s quotes. The empty string, "",
is two characters and consumes two words in the parse tree.

struct String {
    size_t begin;
    size_t end;
};

Arrays

Arrays are represented by their length followed by a relative offset
to each element.

struct Array {
    size_t length;
    size_t element[length]; // type:offset
};

A zero-length array is 2 characters ([]) but consumes one word in the
parse tree.

An array of length 1, not including its element, is also 2 characters
([""]) but consumes two words in the parse tree.

An array of length N is N+1 characters, counting commas. The
representation described above consumes N+1 words in the parse tree,
so arrays fit.

Objects

Objects are represented by their length followed by an array of element records
each containing two pointers for the key string as well as a relative
offset to the element value.

struct ObjectElement {
    size_t key_begin;
    size_t key_end;
    size_t value; // type:offset
};

struct Object {
    size_t length;
    ObjectElement element[length];
};

The smallest object, {}, is 2 characters but its
representation in the parse tree is a single word.

Now consider an object of one element: {"":""}. Including the key
string (but not the value string!), the object is five characters in
the input text. Its parse tree representation is four words: the
length plus three for its element.

Each additional element in the input text adds four characters (a
comma, a colon, and two quotes) but requires only three words in the
parse tree.

Numbers: Integers

The only hat trick required to fit sajson’s parse tree
representation in input_text_length * sizeof(size_t)
bytes is representing integers differently than doubles.

struct Integer {
    intptr_t value;
};

That’s because, on 32-bit architectures, doubles are two words. If
single-digit numbers such as 0 consumed two words in the
parse tree, [0,0,0,0] would not fit.

Storing integers more compactly, we see that the smallest integers use
one character of input text and one word of parse tree structure.

It’s not worth the complexity, but the astute may notice that if we
limit integers to 29 bits, we don’t need to consume words in the parse
tree at all.

Numbers: Doubles

On 32-bit architectures, doubles are stored (unaligned) into the parse
tree.

struct Double {
    size_t first_half;
    size_t second_half;
};

On 64-bit architectures, a double consumes a single word.

The shortest JSON doubles are a single digit
followed by a decimal point followed by a single digit
(example: 0.0) or a single digit with a single-digit
exponent (example: 9e9). It’s clear that they fit into
the parse tree.

null, true, and false

null, true, and false contribute 4, 4, and 5 characters of input text,
respectively. They are represented simply by their type tags: no
parse tree data necessary.

We’ve now covered every JSON data type and shown that, in no case,
will the parse tree representation use more words than the JSON text
representation uses characters.

Single-Allocation JSON Parser

Over the holiday break, as mental exercise, I wrote a
single-allocation JSON parser, sajson. Why
single-allocation? To me, software that fits within a
precise resource budget, especially memory, is elegant. Most C or
C++ JSON parsers allocate memory per node and use hash tables to store
objects. Even if said parsers use efficient pool allocators or hash
table implementations, they miss the forest for the trees.

Dynamic memory allocation has disadvantages: fragmentation,
cache locality, and thread contention are the common arguments
against. But I focused on a different issue: what is the worst case
memory usage to parse, say, a 200 MB JSON file? With a JSON parser
that dynamically allocates, it’s challenging to prove the worst case
consumption.

Before we calculate the worst case memory consumption of a JSON
parser, let’s cover some basics.

Parsers convert input text, a stream of characters, into a data
structure or event stream suitable for reading or processing in some
way. In this instance, sajson is a non-streaming DOM-style parser in
that it translates a complete buffer of characters into a contiguous
parse tree that supports both enumeration and random access.

JSON has seven data types. Three are unit types: null, true, and
false. Two are scalars: numbers and strings. Finally, arrays and
objects are composites: they contain references to other values. The
root element of a JSON document can only be an array or object.

sajson’s goal is to convert a stream of JSON text into a contiguous data
structure containing an enumerable and randomly-accessible parse tree.

My first attempt defined the parsed representation of each value as a type
enumeration followed by the type’s payload.

For example, the JSON text…

[null, 0, ["foo"]]

… would parse into…

<Array>
3 # length
5 # offset to first element
6 # offset to second element
9 # offset to third element
<Null>
<Number>
0 # first 32 bits of IEEE double value
0 # second 32 bits of value
<Array>
1 # length
3 # offset to first element
<String>
12 # offset into source document of string start
15 # offset into source document of string end

… where each line is a pointer-sized (aka size_t) value and <> represents named type constants.

For the above representation, the parse tree’s worst-case size is
sizeof(size_t) * input_length * 2. I won’t derive that
here, but the worst-case document is a list of single-digit numbers:

[0,0,0,0] # 9 characters

# 9*2 = 18 'slots' of structure
<Array>
4
6 # relative offset to first element
9
12
15
<Number>
0
0
<Number>
0
0
<Number>
0
0
<Number>
0
0

But we can do better!

Using a full size_t to store a 3-bit type constant is rather wasteful.
(Remember there are seven JSON types.) Because sajson only targets
32-bit and 64-bit architectures, each array or object element offset
has three bits to spare and thus can include the element’s type. The
document needs one bit to determine the type of the root element.
(Remember the root element must be an array or an object.)
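A sketch of that packing (names and exact bit placement are
illustrative, not sajson's actual definitions):

#include <cstddef>

// Pack a 3-bit type tag into an element reference. Offsets are word indices
// into the parse tree, so they have a few bits to spare on 32-bit and 64-bit
// targets.
enum Tag : size_t {
    TAG_NULL, TAG_FALSE, TAG_TRUE, TAG_INTEGER,
    TAG_DOUBLE, TAG_STRING, TAG_ARRAY, TAG_OBJECT
};

inline size_t make_ref(Tag tag, size_t offset) { return (offset << 3) | tag; }
inline Tag    ref_tag(size_t ref)              { return Tag(ref & 7); }
inline size_t ref_offset(size_t ref)           { return ref >> 3; }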

A further optimization exists: rather than storing
all numbers as IEEE 64-bit doubles, we can add an extra type tag:
<Integer>. Single-digit JSON numbers must be integers, and thus
consume less structural storage.

Let’s consider the same example above with tagged element references,
where <tag>:offset delimits the tag from the offset.

[0,0,0,0] # 9 characters

# root bit determines root is array
4 # length of array
<Integer>:5
<Integer>:6
<Integer>:7
<Integer>:8
0
0
0
0

Let’s quickly check another example:

[[[[]]]] # 8 characters

# root bit determines root is array
1
<Array>:2
1
<Array>:2
1
<Array>:2
0

With the above changes, the parse tree size is cut in half! It now
fits in sizeof(size_t) * input_length.

Next time I’ll describe the challenges in building said parse tree
without a-priori knowledge of array length. Here’s a hint: imagine
you know the input text is 20 characters long. The first three
characters are "[[[".