CppCon 2014: Embind and Emscripten: Blending C++11, JavaScript, and the Web Browser

On September 9th, at CppCon 2014, I am presenting on the design of Embind, a JavaScript-to-C++ binding API for Emscripten. Embind was developed by me and my colleagues at IMVU Inc.

See the session abstract at the conference site.

In this talk, you’ll learn where Embind fits in the overall space of solutions for connecting C++ and JavaScript, why generated code size is so important, how Embind works hard to keep code size small, and several of the C++11 techniques I learned for this project.
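
To give a flavor of the API the talk covers: Embind exposes C++ to JavaScript through EMSCRIPTEN_BINDINGS blocks. Here is a minimal sketch mirroring the canonical free-function example from the Embind documentation:

#include <emscripten/bind.h>

using namespace emscripten;

float lerp(float a, float b, float t) {
    return (1 - t) * a + t * b;
}

EMSCRIPTEN_BINDINGS(my_module) {
    // expose lerp to JavaScript under the same name
    function("lerp", &lerp);
}

Compile with emcc's --bind flag, and JavaScript can call Module.lerp(1, 2, 0.5) directly.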

Emscripten, Callbacks, and C++11 Lambdas

The web is an asynchronous world built on asynchronous APIs. Thus, typical web applications are full of callbacks. onload and onerror for XMLHttpRequest, the callback argument to setTimeout, and messages from Web Workers are common examples.

Using asynchronous APIs is relatively natural in JavaScript. JavaScript is garbage-collected and supports anonymous functions that close over their scope.

However, when writing Emscripten-compiled C++ to interact with asynchronous web APIs, callbacks are less natural. First, JavaScript must call into a C++ interface. Embind works well for this purpose. Then, the C++ must respond to the callback appropriately. This post is about the latter component: writing efficient and clean asynchronous code in C++11.

I won’t go into detail here about how that works, but imagine you have an interface for fetching URLs with XMLHttpRequest.

class XHRCallback {
public:
    virtual void onLoaded(const Response& response) = 0;
};

void fetch(const std::string& url, XHRCallback* callback);

Imagine that Response is a struct embodying the HTTP response, and that onLoaded runs when the XMLHttpRequest ‘load’ event fires.
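
The exact shape of Response doesn't matter for this discussion; for concreteness, a minimal sketch might look like this (the fields are my invention for illustration):

struct Response {
    int status;              // e.g. 200
    std::string statusText;  // e.g. "OK"
    std::string body;        // the response payload
};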

To fetch data from the network, you would instantiate an implementation of the XHRCallback interface and pass it into the XHR object. I’m not going to cover how to connect these interfaces up to JavaScript in this article; instead, we will look at various implementations of XHRCallback on the C++ side.

For the purposes of this example, let’s imagine we want to fetch some JSON, parse it, and store the result in a model.

Approach 1

A simple approach is to write an implementation of the interface that knows about the destination Model and updates it after parsing the body. Something like:

class MyXHRCallback : public XHRCallback {
public:
    Model* model;

    MyXHRCallback(Model* model) : model(model) {}

    virtual void onLoaded(const Response& response) override {
        // parseJSON and Model::update are illustrative placeholders
        model->update(parseJSON(response.body));
    }
};

void updateModelFromURL(const std::string& url, Model* model) {
    fetch(url, new MyXHRCallback(model));
}

This is quite doable, but it’s a real pain to write a new class definition, field list, and constructor for every callback.

What if we tried to simplify the API with C++11 lambdas?

Approach 2

class LambdaXHRCallback : public XHRCallback {
public:
    std::function<void(const Response&)> onLoadedFunction;

    virtual void onLoaded(const Response& response) override {
        if (onLoadedFunction) {
            onLoadedFunction(response);
        }
    }
};
Above is boilerplate per interface. Below is use.

void updateModelFromURL(const std::string& url, Model* model) {
    auto callback = new LambdaXHRCallback;
    callback->onLoadedFunction = [model](const Response& response) {
        model->update(parseJSON(response.body));
    };
    fetch(url, callback);
}

Ignoring the implementation of LambdaXHRCallback, the API’s a little cleaner to use. This approach requires backing the callback interface with an implementation that delegates to a std::function. The std::function can be bound to a local lambda, keeping the callback logic lexically near the code issuing the request.

From a clarity perspective, this is an improvement. However, because Emscripten requires that your customers download and parse the entire program during page load (in some browsers, parsing happens on every page load!), code size is a huge deal. Even code on rarely-used paths is worth paying attention to.

std::function is implemented with its own type-erasing abstract interface, an instance of which is allocated upon initialization or assignment, so it tends to produce rather fat code. The default 16-byte backing storage in 32-bit libc++ doesn’t help either.
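
To see where the cost comes from, consider what a type-erasing callable wrapper has to do under the hood. This is a simplified sketch of the technique, not libc++’s actual implementation:

#include <utility>

// A simplified std::function-like wrapper for void(const Response&).
// Real implementations add small-buffer optimization, copying, etc.
class ErasedCallback {
    struct Concept {
        virtual ~Concept() {}
        virtual void call(const Response& r) = 0;
    };

    template<typename F>
    struct Holder : Concept {
        F f;
        Holder(F f) : f(std::move(f)) {}
        virtual void call(const Response& r) override { f(r); }
    };

    Concept* impl_;  // heap-allocated at construction time

public:
    template<typename F>
    ErasedCallback(F f) : impl_(new Holder<F>(std::move(f))) {}
    ~ErasedCallback() { delete impl_; }

    // non-copyable to keep the sketch simple
    ErasedCallback(const ErasedCallback&) = delete;
    ErasedCallback& operator=(const ErasedCallback&) = delete;

    void operator()(const Response& r) { impl_->call(r); }
};

Every distinct callable type instantiates another Holder<F>, every construction allocates, and every invocation is a virtual call. All of that is code the compiler must emit and your users must download.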

Can we achieve clear asynchronous code without paying the std::function penalty? Yes, in fact!

Approach 3

template<typename OnLoad>
class LambdaXHRCallback : public XHRCallback {
public:
    // perfect forwarding: move the lambda into the member
    explicit LambdaXHRCallback(OnLoad&& onLoad)
        : onLoad_(std::forward<OnLoad>(onLoad))
    {}

    virtual void onLoaded(const Response& response) override {
        onLoad_(response);
    }

private:
    OnLoad onLoad_;
};

// this function exists so OnLoad’s type can be inferred,
// as lambdas have anonymous types
template<typename OnLoad>
LambdaXHRCallback<OnLoad>* makeXHRCallback(OnLoad&& onLoad) {
    return new LambdaXHRCallback<OnLoad>(std::forward<OnLoad>(onLoad));
}

Above is boilerplate per interface. Below is use.

void updateModelFromURL(const std::string& url, Model* model) {
    fetch(url, makeXHRCallback(
        [model](const Response& response) {
            model->update(parseJSON(response.body));
        }));
}

But… but… there are templates here, how is that any better than std::function? Well, first of all, now we only have one virtual call: the XHRCallback interface itself. Previously, we would have a virtual call into LambdaXHRCallback and then again through the std::function.

Second, in C++11, lambdas are syntax sugar for an anonymous class type with an operator(). Since the lambda is immediately given to the LambdaXHRCallback template and stored directly as a member variable, the compiler knows its concrete type inside onLoaded and can inline its operator() with no further indirection.
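
For illustration, the lambda above compiles down to something morally equivalent to this hand-written class (the name __ModelClosure is invented; real closure types are anonymous, and parseJSON and Model::update remain the placeholders from earlier):

// Approximately what [model](const Response& response) { ... } becomes.
class __ModelClosure {
    Model* model;  // the captured variable, stored by value

public:
    explicit __ModelClosure(Model* model) : model(model) {}

    void operator()(const Response& response) const {
        model->update(parseJSON(response.body));
    }
};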

I ported a dozen or so network callbacks from std::function to the template lambda implementation and saw a 39 KB reduction in the size of the resulting minified JavaScript.

I won’t go so far as to recommend avoiding std::function in Emscripten projects, but I would suggest asking whether there are better ways to accomplish your goals.

Multiplatform C++ on the Web with Emscripten

[This post is also available at the IMVU engineering blog.]

At GDC 2013, IMVU shared the analysis that led us to using Emscripten as a key component of our technology strategy to bring our rich 3D virtual goods catalog to new platforms, including web browsers.

Here are the slides from said presentation.

Benchmarking JSON Parsing: Emscripten vs. Native

This post concludes my obsession with JSON parsing. In fact, the entire reason I wrote a JSON parser was to show these graphs. I wanted to see whether I could write a JSON parser faster than any other when run in Emscripten. Since vjson is typically faster, I did not succeed, unless I requalify my goal as writing the fastest-in-Emscripten JSON parser that produces a useful parse tree.

This benchmark’s code is on GitHub. I encourage you to reproduce my results and search for flaws.

All benchmarks were run on a 2010 MacBook Pro, 2.53 GHz Core i5, 8 GB 1067 MHz DDR3.

Native vs. Emscripten

First, let’s compare native JSON parsing performance (clang 3.1, -O2) with both stable and nightly versions of Firefox and Chrome.

Two things are immediately clear from this graph. First, native code is still 5-10x faster than Emscripten/JavaScript. Second, yajl and jansson are dramatically slower than rapidjson, sajson, and vjson. Native yajl and jansson are even slower than Emscripten’d sajson. Henceforth I will not include them.

Looking closely at the browser results, a third conclusion is clear. Firefox runs Emscripten’d code much faster than Chrome.

Finally, sajson consistently performs better than rapidjson in my Emscripten tests. And vjson always parses the fastest. I believe this is because Emscripten and browser JITs punish larger code sizes.

The previous graph only shows parse rates by browser and parser for a single file. Next let’s look at parse rates by file.

Yep, native code consistently wins. At this point I want to dig into differences between the browsers, so I will show the same graph but without native code.

Firefox vs. Chrome

Not only is Firefox consistently faster than Chrome, it’s faster by a factor of 2-5x!

Finally, here is the same graph, but normalized against Firefox 18.

If I were a Firefox JS developer, I’d be feeling pretty proud right now, assuming this experiment holds up to scrutiny. Even so, these results match my experience with Bullet/Emscripten in Chrome: Chrome takes a very long time to stabilize its JIT, and I’ve even seen it fail to stabilize, constantly deopting and then reoptimizing code. In contrast, Firefox may take longer to JIT up front, but performance is smooth afterwards.

Further work is necessary to test the hypothesis that code size is the biggest predictor of Emscripten performance.

Preemptive answers to predicted questions follow:

Well, duh. You shouldn’t use an Emscripten’d JSON parser. You should use the browser’s built-in JSON.parse function.

This isn’t about parsing JSON. This is about seeing whether the web can compete with native code under common workloads. After all, Mozillians have been claiming JavaScript is or will be fast enough to replace native apps. If parsing JSON through a blessed browser API is the only way to do it quickly, then developers are disempowered. What if a new format comes along? Must developers wait on the browser vendors? The web has to be better than that.

Shouldn’t you have compiled the Emscripten code with -fno-exceptions?

Yes. Oops. That said, after generating the graphs, I did recompile with -fno-exceptions and it made no difference to the benchmark results in either Chrome or Firefox.

Emscripten Results: Firefox 19 shows dramatic improvement

Last time, we looked at Emscripten’s performance with current JS JITs on an in-order Atom core and found a penalty relative to out-of-order cores.

However, I told @js_dev I’d give updated numbers on a more typical out-of-order x86 core like my 2010 MacBook Pro’s i5.

There are a couple of interesting things here. Firefox 19 shows substantial Emscripten performance improvements over Firefox 17, whose Emscripten output was already on par with hand-written JavaScript. While JavaScript JITs are still an order of magnitude away from native code performance, Emscripten’s output now meets or exceeds hand-written JavaScript. Progress!

The machine is a 2010 MacBook Pro, Core i5 2.53 GHz, OS X 10.6.

For each compiler, I compiled with -O0, -O1, -O2, -O3, and picked the best result.

| Language | Compiler | Variant | Vertex Rate | Slowdown |
|---|---|---|---|---|
| C++ | clang -O2 | SSE | 100142197 | 1 |
| C++ | gcc -O3 | SSE | 93109180 | 1.08 |
| C++ | gcc -O3 | scalar | 60398333 | 1.66 |
| C++ | clang -O2 | scalar | 58324308 | 1.72 |
| JavaScript | Chrome 23 | untyped | 9510489 | 10.5 |
| Emscripten -O3 | Aurora 19.0a2 | scalar | 7666000 | 13.1 |
| Emscripten -O3 | Firefox 17 | scalar | 6044000 | 16.6 |
| JavaScript | Chrome 23 | typed arrays | 5890000 | 17 |
| Emscripten -O3 | Chrome 25.0 canary | scalar | 5733706 | 17.5 |
| JavaScript | Firefox 17 | untyped | 5264735 | 19 |
| JavaScript | Firefox 17 | typed arrays | 5240000 | 19.1 |
| Emscripten -O2 | Chrome 23 | scalar | 4586165 | 21.8 |
| Emscripten -O1 | nodejs 0.8.10 | scalar | 4453109 | 22.5 |
| Emscripten -O2 | nodejs 0.8.10 | scalar | 1483406 | 67.5 |
| Emscripten -O3 | nodejs 0.8.10 | scalar | 668796 | 150 |

Here are the results for various Emscripten optimization levels:

| Browser | Compilation Level | Vertex Rate |
|---|---|---|
| Firefox 17 | emscripten -O0 | 2451509 |
| Firefox 17 | emscripten -O1 | 4080000 |
| Firefox 17 | emscripten -O2 | 5146000 |
| Firefox 17 | emscripten -O3 | 6044000 |
| Chrome 23 | emscripten -O0 | 1229754 |
| Chrome 23 | emscripten -O1 | 4152339 |
| Chrome 23 | emscripten -O2 | 4586165 |
| Chrome 23 | emscripten -O3 | 465162 |
| Aurora 19.0a2 | emscripten -O0 | 2062762 |
| Aurora 19.0a2 | emscripten -O1 | 4900000 |
| Aurora 19.0a2 | emscripten -O2 | 6214757 |
| Aurora 19.0a2 | emscripten -O3 | 7666000 |
| Chrome 25.0 canary | emscripten -O0 | 3001399 |
| Chrome 25.0 canary | emscripten -O1 | 4410235 |
| Chrome 25.0 canary | emscripten -O2 | 5482000 |
| Chrome 25.0 canary | emscripten -O3 | 5733706 |

I updated the benchmark to automate compiling and running the native and node.js builds.

JavaScript, Emscripten, and the Atom D2700

Lately I’ve been doing some work with Emscripten. As predicted, the quality of Emscripten’s generated code is improving and JITs are learning to understand its generated code. I have high hopes for asm.js, a formalization of high-performance, low-level JavaScript. I now believe it’s conceivable that Emscripten could approach the same level of performance as PNaCl, though whether that happens remains to be seen.

However, having a rough understanding of how today’s JavaScript JITs work, I’ve always wondered whether Emscripten-generated code would be especially penalized relative to native code on an in-order core like Intel Atom. Having recently built an Intel Atom home server, I figured I’d update my recent Emscripten skinning benchmark results and find out.

First I’ll describe the hardware. The CPU is an Atom D2700 on the Intel D2700DC board. 1066 MHz DDR3 memory. Two cores hyperthreaded. Running Ubuntu 12.04 Server. Firefox and Chromium packages are stock. Node.js and clang 3.1 are x64 Linux binaries downloaded from their respective websites. Emscripten is commit 26250471b46a68204711f037f33790bfb4ba37c7 in the master branch.

Now the results. Remember there are three JavaScript implementations: hand-written JS with untyped arrays and objects (“untyped”), hand-written JS with typed arrays (“typed arrays”), and Emscripten-compiled C++ (“scalar”). Emscripten’s compiler was invoked with -O1. I saw significant performance drop-offs with -O2 and -O3.

| Language | Compiler | Variant | Vertex Rate | Slowdown |
|---|---|---|---|---|
| C++ | gcc 4.6.3 -O3 | SSE | 24040000 | 1 |
| C++ | clang 3.1 -O3 | SSE | 22530000 | 1.07 |
| C++ | gcc 4.6.3 -O3 | scalar | 18730000 | 1.28 |
| C++ | clang 3.1 -O3 | scalar | 13040000 | 1.84 |
| JavaScript | Chromium 20.0 | untyped | 3150000 | 7.63 |
| JavaScript | Firefox 17 | typed arrays | 2437562 | 9.86 |
| JavaScript | Firefox 17 | untyped | 1084577 | 22.2 |
| Emscripten | Firefox 17 | scalar | 944333 | 25.5 |
| JavaScript | Chromium 20.0 | typed arrays | 807577 | 29.8 |
| Emscripten | node 0.8.14 | scalar | 679802 | 35.4 |
| Emscripten | Chromium 20.0 | scalar | 677966 | 35.5 |

Based on the previous benchmark results and my recent experience with Emscripten, it appears that JITted JavaScript code does pay a penalty relative to native code on in-order cores, or at least on the Atom D2700.

Next time I hope to update these benchmarks on a high-end desktop CPU.

As always, if you’d like to reproduce these results or question them, the code is available on my github.

Digging into JavaScript Performance, Part 2

UPDATE. After I posted these numbers, Alon Zakai, Emscripten’s author, pointed out options for generating optimized JavaScript. I reran my benchmarks; check out the updated table below and the script used to generate the new results.

At the beginning of the year, I tried to justify my claim that JavaScript has a long way to go before it can compete with the performance of native code.

Well, 10 months have passed. WebGL is catching on, Native Client has been launched, Unreal Engine 3 targets Flash 11, and Crytek has announced they might target Flash 11 too. Exciting times!

On the GPU front, we’re in a good place. With WebGL, iOS, and Flash 11 all roughly exposing shader model 2.0, it’s not a ton of work to target all of the above. Even on the desktop you can’t assume higher than shader model 2.0: the Intel GMA 950 still sits at the top of the market-share charts.

However, shader model 2.0 isn’t general enough to offload all of your compute-intensive workloads to the GPU. With only 16 vertex attributes and no vertex texture fetch, you simply can’t get enough data into your vertex shaders to do everything you need, e.g. blending morph targets.

Thus, for the foreseeable future, we’ll need to write fast CPU code that can run on the web, mobile devices, and the desktop. Today, that means at least JavaScript and a native language like C++. And because Microsoft has not implemented WebGL, because the Firefox and Chrome WebGL blacklists are so strict, and because no major browser falls back to software rendering, you probably care about targeting Flash 11 too. (It does have a software fallback!) If you care about Flash 11, then your code had better target ActionScript 3 / AVM2 as well.

How can we target native platforms, the web, and Flash at the same time?

Native platforms are easy: C++ is well-supported on Windows, Mac, iOS, and Android. SSE2 is ubiquitous on x86, ARM NEON is widely available, and both have high-quality intrinsics-based implementations.
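
For a taste of the intrinsics style, here is a minimal sketch of a four-wide multiply-add using SSE (an illustrative function of my own, not code from any particular project):

#include <xmmintrin.h>

// Computes out = a * b + c across four floats at once.
// Pointers are assumed to be 16-byte aligned.
void madd4(const float* a, const float* b, const float* c, float* out) {
    __m128 va = _mm_load_ps(a);
    __m128 vb = _mm_load_ps(b);
    __m128 vc = _mm_load_ps(c);
    _mm_store_ps(out, _mm_add_ps(_mm_mul_ps(va, vb), vc));
}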

As for Flash… I’m just counting on Adobe Alchemy to ship.

On the web, you have two choices: write your code in C++ and cross-compile it to JavaScript with Emscripten, or write it in JavaScript and run it on the browser’s JavaScript engine. Ideally, cross-compiling C++ to JS via Emscripten would be as fast as writing your code in JavaScript directly. If it is, then targeting all platforms is easy: just use C++, and the browsers will do as well as they would with native JavaScript.

Over the last two evenings, while weathering a dust storm, I set about updating my skeletal animation benchmark results: for math-heavy code, how does JavaScript compare to C++ today? And how does Emscripten compare to hand-written JavaScript?
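
For context, the benchmark’s workload is vertex skinning: each output vertex is a weighted blend of bone-transformed positions. A simplified scalar version of that inner loop might look like the following (the names and data layout are mine for illustration, not the benchmark’s actual code):

struct BoneMatrix { float m[12]; };  // 3x4 affine transform, row-major

// Blend each vertex position by its four bone influences.
void skinVertices(
    const BoneMatrix* bones,
    const float* positions,   // x,y,z per vertex
    const int* boneIndices,   // 4 per vertex
    const float* weights,     // 4 per vertex
    float* out,               // x,y,z per vertex
    int vertexCount)
{
    for (int i = 0; i < vertexCount; ++i) {
        const float* p = positions + i * 3;
        float ox = 0, oy = 0, oz = 0;
        for (int j = 0; j < 4; ++j) {
            const BoneMatrix& b = bones[boneIndices[i * 4 + j]];
            float w = weights[i * 4 + j];
            ox += w * (b.m[0]*p[0] + b.m[1]*p[1] + b.m[2]*p[2]  + b.m[3]);
            oy += w * (b.m[4]*p[0] + b.m[5]*p[1] + b.m[6]*p[2]  + b.m[7]);
            oz += w * (b.m[8]*p[0] + b.m[9]*p[1] + b.m[10]*p[2] + b.m[11]);
        }
        out[i * 3 + 0] = ox;
        out[i * 3 + 1] = oy;
        out[i * 3 + 2] = oz;
    }
}

The “vertex rate” in the tables below is how many such vertices each implementation can process per second.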

If you’d like, take a look at the raw results.

| Language | Compiler | Variant | Vertex Rate | Slowdown |
|---|---|---|---|---|
| C++ | clang 2.9 | SSE | 101580000 | 1 |
| C++ | gcc 4.2 | SSE | 96420454 | 1.05 |
| C++ | gcc 4.2 | scalar | 63355501 | 1.6 |
| C++ | clang 2.9 | scalar | 62928175 | 1.61 |
| JavaScript | Chrome 15 | untyped | 10210000 | 9.95 |
| JavaScript | Firefox 7 | typed arrays | 8401598 | 12.1 |
| JavaScript | Chrome 15 | typed arrays | 5790000 | 17.5 |
| Emscripten | Chrome 15 | scalar | 5184815 | 19.6 |
| JavaScript | Firefox 7 | untyped | 5104895 | 19.9 |
| JavaScript | Firefox 9a2 | untyped | 2005988 | 50.6 |
| JavaScript | Firefox 9a2 | typed arrays | 1932271 | 52.6 |
| Emscripten | Firefox 9a2 | scalar | 734126 | 138 |
| Emscripten | Firefox 7 | scalar | 729270 | 139 |


  • JavaScript is still a factor of 10-20 away from well-written native code. Adding SIMD support to JavaScript will help, but obviously that’s not the whole story…
  • It’s bizarre that Chrome and Firefox disagree on whether typed arrays are faster than untyped objects.
  • Firefox 9 clearly has performance issues that need to be worked out. I wanted to benchmark its type inference capabilities.
  • Emscripten… ouch :( I wish it were even comparable to hand-written JavaScript, but it’s another factor of 10-20 slower…
  • Emscripten on Chrome 15 is within a factor of two of hand-written JavaScript. I think that means you can target all platforms with C++, because hand-written JavaScript won’t be that much faster than cross-compiled C++.
  • Emscripten on Firefox 7 and 9 still has issues, but Alon Zakai informs me that the trunk version of SpiderMonkey is much faster.

In the future, I’d love to run the same test on Flash 11 / Alchemy and Native Client but the former hasn’t shipped and the latter remains a small market.

One final note: it’s very possible my test methodology is screwed up, my benchmarks are wrong, or I suck at copy/pasting numbers. Science should be reproducible: please try to reproduce these results yourself!

In Defense of Language Democracy (Or: Why the Browser Needs a Virtual Machine)

Years ago, Mark Hammond did a bunch of work to get Python running inside Mozilla’s script tags. Parts of Mozilla are even ostensibly designed to be language-independent. Unfortunately, even if Mozilla had succeeded at shipping multiple language implementations, it’s unlikely other browser vendors would have followed suit. It’s just not logistically feasible for every browser to host and maintain the full set of interesting languages on the client.

I can hear you asking “Why do I care about Python in the browser? Or C++? Or OCaml? JavaScript is a great language.” I agree! JavaScript is a great language. Given the extremely short timeframe and immense political pressure, I’m thrilled we ended up with something as capable as JavaScript.

Nonetheless, fair competition benefits everyone. Take a look at what’s happened in the web server space in the last few years: Ruby on Rails. Django. Node.js. nginx. Tornado. Twisted. AppEngine. MochiWeb. HipHop-PHP. ASP.NET MVC. A proliferation of interesting datastores: memcache, redis, riak, etc. That’s an incredible amount of innovation in a short period of time.

Now let’s go through the same exercise, but on the client. jQuery, YUI, fast JavaScript JITs, CSS3, CoffeeScript, proliferation of standards-compliant browsers, some amount of HTML5… Maybe ubiquitous usage of Flash video? These advancements are significant, but it’s clear the front-end stack is changing much more slowly than the back-end.

Why is the back-end evolving faster than the front-end?

When building an application backend, even atop a virtualized hosting provider such as EC2, you are given approximately raw access to a machine: x86 instruction set, sockets, virtual memory, operating system APIs, and all. Any software that runs on that machine competes at the same level. You can use Python or Ruby or C++ or some combination thereof. If Redis wants to innovate with new memory management schemes, nothing is stopping it. This ecosystem democratized – nay, meritocratized – innovation.

On the front-end, the problem boils down to the fact that JavaScript is built atop the underlying hardware but does not expose that hardware’s capabilities, meaning browsers and JavaScript implementations are inherently more capable than anything built atop them.

Of course, any client-side technology is going to rev slower simply because it’s hard to get people to update their bits. Also, users decide which client bits they like best, whether they be Internet Explorer, Chrome, or Firefox. Now the technology-takes-time-to-gain-ubiquity problem has a new dimension: each browser vendor must also decide to implement this technology in a compatible way. It took years for even JavaScript to standardize across browsers.

However, if we could instead standardize the web on a performant and safe VM such as CLR, JVM, or LLVM, including explicit memory layout and allocation and access to extra cores and the GPU, JavaScript becomes a choice rather than a mandate.

This point of view depends on my prediction that JavaScript will not become competitive with native code, but not everyone agrees. If JavaScript does eventually match native code, then I’d expect the browser itself to be written in it. It’s impossible for me to claim that JavaScript will never match native code, but the sustained success of C++ in systems programming, games, and high-performance computing is a testament to the value of systems languages.

Native Client, however, gives web developers the opportunity to write code within 5-10% of native code performance, in whatever language they want, without losing the safety and convenience of the web. You can write web applications that leverage multiple cores, and with WebGL, you can harness dedicated graphics hardware as well. Native Client does restrict access to operating system APIs, but I expect APIs to evolve reasonably quickly.

Let’s take a particular example: the HTML5 video tag. Native Client could have sidestepped the entire which-video-codec-should-we-standardize spat between Mozilla, Google, Apple, and Microsoft by allowing each site to choose the codec it prefers. YouTube could safely deploy whatever codecs it wanted, and even evolve them over time.

With Native Client, we could share Python code between the front-end and the back-end. We could use languages that support weak references. We could implement IMVU’s asynchronous task system. We could embed new JavaScript interpreters in old browsers.

Native Client is not the only option here. The JVM and CLR are other portable and performant VMs that have seen considerable language innovation while approximating native code performance.

A standardized, performant, and safe VM in the browser would increase the strength of the open web versus native app stores and their arbitrary technology limitations.

Finally, I’d like to thank Alon Zakai (author of Emscripten), Mike Shaver, and Chris Saari for engaging in open, honest discussion. I hope this public discourse leads to a better web. Either way, I hope this is my last post on this topic. :)

Native Client is Widely Misunderstood (And What Google Should Do About It)

Wow. My recent post about why Mozilla should adopt Native Client stirred up quite a storm. Some folks don’t believe the web needs high-performance applications. Some are happy with whatever APIs browsers expose. I disagree with these points, but I can respect them.

Most surprisingly, several respondents had simply untrue objections to Native Client, so I’d like to clear up their misconceptions. Then I will make recommendations to the Native Client team on how to fix their perception problems.

If you want to spend a few minutes learning about Native Client and LLVM from the horse’s mouth, watch this video.

Misconceptions about Native Client

Native Client implies x86

False. Originally, Native Client was positioned as an x86 sandbox technology, but now it has a clear LLVM story, with x86-32, x86-64, and partially-implemented ARM backends. Portability is a key benefit of the web, and Google understands this.

Native Client is complicated

True, it’s certainly not a trivial amount of code. But compare the amount of code in NativeClient vs. Mozilla’s JavaScript engine:

$ wc -l native_client/src/**/*.{c,h,cc}
155082 total
$ find mozilla-central/js/src -path '*tests*' -prune -o \( -iname '*.c' -o -iname '*.cc' -o -iname '*.h' -o -iname '*.cpp' \) -print0 | wc -l --files0-from=-
363471 total

NativeClient is at least on the same order of complexity as a modern JavaScript engine, and since it already provides performance within 5% of native code, I’d guess it’s less susceptible to change.

Native Client / LLVM is not an open standard

I empathize with this concern, but Flash isn’t an open standard and it sees wide adoption. The difference between Flash and Native Client is that Native Client / LLVM is open source and could easily become an open standard.

Native Client is insecure

Native Client was designed to be a secure x86 sandbox. Under the assumption that its basic security model is sound, the question then becomes “how large is the attack surface and how likely is it to be broken?” Given the amount of code in a modern web browser and JavaScript JIT, I don’t see how Native Client is any worse.

With a little more work, JavaScript will perform at the same level as native code

I’m not informed or involved enough to claim JavaScript can never be as fast as native code. However, I have my doubts. A friend was working on a Monte Carlo Go AI, and he initially wrote his algorithm in JavaScript. Monte Carlo requires simulating a large number of game states, and a naïve port of his JavaScript to C++ gave a 100x performance improvement.

Check out my skeletal animation benchmark, where the JavaScript JITs need another 10x to compete with native code.

Even if JITs can match native code in some benchmarks (and I hope they do), performance across browsers will depend on the particulars of the JIT implementation. Native Client, at least for pure computation, would perform the same in every browser.

We can simply compile languages like Haskell, Python, and C to optimized JavaScript and let the JIT sort it out.

There are some attempts to use JavaScript as a backend for other language implementations, but they rarely perform well. For example, CPython compiled to JavaScript via LLVM/Emscripten runs about 30x slower than a native build in Chrome, and 200x slower in Firefox 4 beta 8.

I’ve heard the argument for an RPython-like, statically-analyzable subset of JavaScript that browsers could run very efficiently. This subset could operate as a de facto bytecode, and Emscripten could compile LLVM to it with minimal performance loss. It’s possible this could work, but directly exposing LLVM seems more fruitful.

Red Herring Arguments

JavaScript is easier to develop with than native languages

Sure, but that doesn’t mean native languages don’t have a purpose. My hypothesis is that there are problems for which JavaScript is not and will not be suited, and that exposing the native power of the machine is better for application developers, and thus the web.

Binaries are obscure

Minified JS isn’t human-readable either, but machines can reconstruct both. Drdaemon nails it in his comment.


“If you want native performance, just download software or install a plug-in!”

While this sentiment reflects today’s reality, it doesn’t reflect trends on the web. Web applications continue to supplant desktop applications. Google Docs, Creately, Pivotal Tracker, Gmail, Mockingbird, and all of the games on Facebook are examples where I would have used a desktop application in the past. It seems that, whenever browsers provide new capabilities, applications consume them. Why would that trend stop now?

Recommendations to the Native Client team

  1. Get a move on! Enable it by default! More flashy demos!
  2. Reposition Native Client as a portable technology and make sure it’s clear that LLVM is key to its strategy.

Finally, NativeClient is still new. I expect it will be some time before it’s solid enough to rely on for production use. That said, it has the potential to disrupt the desktop operating system and I’m excited for a future where all software is web-based.