Digging into JavaScript Performance, Part 2

UPDATE. After I posted these numbers, Alon Zakai, Emscripten’s author, pointed out options for generating optimized JavaScript. I reran my benchmarks; check out the updated table below and the script used to generate the new results.

At the beginning of the year, I tried to justify my claim that JavaScript has a long way to go before it can compete with the performance of native code.

Well, 10 months have passed. WebGL is catching on, Native Client has been launched, Unreal Engine 3 targets Flash 11, and Crytek has announced they might target Flash 11 too. Exciting times!

On the GPU front, we’re in a good place. With WebGL, iOS, and Flash 11 all roughly exposing shader model 2.0, it’s not a ton of work to target all of the above. Even on the desktop you can’t assume higher than shader model 2.0: the Intel GMA 950 is still at the top.

However, shader model 2.0 isn’t general enough to offload all of your compute-intensive workloads to the GPU. With 16 vertex attributes and no vertex texture fetch, you simply can’t get enough data into your vertex shaders to do everything you need, e.g. blending morph targets.

Thus, for the foreseeable future, we’ll need to write fast CPU code that can run on the web, mobile devices, and the desktop. Today, that means at least JavaScript and a native language like C++. And, because Microsoft has not implemented WebGL, the Firefox and Chrome WebGL blacklists are so strict, and no major browsers fall back on software, you probably care about targeting Flash 11 too. (It does have a software fallback!) If you care about Flash 11, then your code had better target ActionScript 3 / AVM2 too.

How can we target native platforms, the web, and Flash at the same time?

Native platforms are easy: C++ is well-supported on Windows, Mac, iOS, and Android. SSE2 is ubiquitous on x86, ARM NEON is widely available, and both have high-quality intrinsics-based implementations.

As for Flash… I’m just counting on Adobe Alchemy to ship.

On the web, you have two choices: write your code in C++ and cross-compile it to JavaScript with Emscripten, or write it in JavaScript and run it directly on the browser’s JavaScript engine. Ideally, cross-compiling C++ to JS via Emscripten would be as fast as writing your code in JavaScript. If it is, then targeting all platforms is easy: just use C++ and the browsers will do as well as they would with native JavaScript.

Over the last two evenings, while weathering a dust storm, I set about updating my skeletal animation benchmark results: for math-heavy code, how does JavaScript compare to C++ today? And how does Emscripten compare to hand-written JavaScript?

If you’d like, take a look at the raw results.

Language     Compiler      Variant        Vertex Rate (vertices/sec)   Slowdown (x)
C++          clang 2.9     SSE            101,580,000                  1
C++          gcc 4.2       SSE            96,420,454                   1.05
C++          gcc 4.2       scalar         63,355,501                   1.6
C++          clang 2.9     scalar         62,928,175                   1.61
JavaScript   Chrome 15     untyped        10,210,000                   9.95
JavaScript   Firefox 7     typed arrays   8,401,598                    12.1
JavaScript   Chrome 15     typed arrays   5,790,000                    17.5
Emscripten   Chrome 15     scalar         5,184,815                    19.6
JavaScript   Firefox 7     untyped        5,104,895                    19.9
JavaScript   Firefox 9a2   untyped        2,005,988                    50.6
JavaScript   Firefox 9a2   typed arrays   1,932,271                    52.6
Emscripten   Firefox 9a2   scalar         734,126                      138
Emscripten   Firefox 7     scalar         729,270                      139


  • JavaScript is still a factor of 10-20 away from well-written native code. Adding SIMD support to JavaScript will help, but obviously that’s not the whole story…
  • It’s bizarre that Chrome and Firefox disagree on whether typed arrays are faster than untyped arrays.
  • I wanted to benchmark Firefox 9’s type inference capabilities, but it clearly has performance issues that need to be worked out first.
  • Emscripten… ouch :( I wish it were even comparable to hand-written JavaScript, but it’s another factor of 10-20 slower…
  • Emscripten on Chrome 15 is within a factor of two of hand-written JavaScript. I think that means you can target all platforms with C++, because hand-written JavaScript won’t be that much faster than cross-compiled C++.
  • Emscripten on Firefox 7 and 9 still has issues, but Alon Zakai informs me that the trunk version of SpiderMonkey is much faster.

In the future, I’d love to run the same test on Flash 11 / Alchemy and Native Client but the former hasn’t shipped and the latter remains a small market.

One final note: it’s very possible my test methodology is screwed up, my benchmarks are wrong, or I suck at copy/pasting numbers. Science should be reproducible: please try to reproduce these results yourself!

Reporting Crashes in IMVU: Who threw that C++ exception?

It’s not often that I get to write about recent work. Most of the techniques in this series were implemented at IMVU years ago. A few weeks ago, however, a common C++ exception (tr1::bad_weak_ptr) started intermittently causing crashes in the wild. This exception can be thrown in a variety of circumstances, so we had no clue which code was problematic.

We could have modified tr1::bad_weak_ptr so its constructor fetched a CallStack and returned it from tr1::bad_weak_ptr::what(), but fetching a CallStack is not terribly cheap, especially in such a frequently-thrown-and-caught exception. Ideally, we’d only grab a stack after we’ve determined it’s a crash (in the top-level crash handler).

Allow me to illustrate:

void main_function(/*arguments*/) {
    try {
        try {
            // We don't want to grab the call stack here, because
            // we'll catch the exception soon.
            normal_algorithm(); // illustrative placeholder for the usual code path
        }
        catch (const std::exception& e) {
            // Yup, exception is fine.  Just swallow and
            // do something else.
            fallback_algorithm();
        }
    }
    catch (const std::exception& e) {
        // Oh no! fallback_algorithm() failed.
        // Grab a stack trace now.
    }
}

Almost! Unfortunately, the call stack generated in the catch clause doesn’t contain fallback_algorithm. It starts with main_function, because the stack has already been unwound by the time the catch clause runs.

Remember the structure of the stack:

Example Stack

We can use the ebp register, which points to the current stack frame, to walk and record the current call stack. [ebp+4] is the caller’s address, [[ebp]+4] is the caller’s caller, [[[ebp]]+4] is the caller’s caller’s caller, and so on.
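
In C++, that walk looks something like the following sketch (the type and names here are illustrative; real code must also validate each frame pointer before dereferencing it):

struct StackFrame {
    StackFrame* previous;       // [ebp]:   the caller's saved ebp
    void*       return_address; // [ebp+4]: the caller's address
};

// Follow the chain of saved frame pointers, recording each return address.
void recordCallStack(StackFrame* frame) {
    while (frame) {
        // record frame->return_address (e.g. append it to a fixed-size buffer)
        frame = frame->previous;
    }
}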

What can we do with this information? Slava Oks at Microsoft gives the clues we need. When you type throw MyException(), a temporary MyException object is constructed at the bottom of the stack and passed into the catch clause by reference or by value (as a copy deeper on the stack).

Before the catch clause runs, objects on the stack between the thrower and the catcher are destructed, and ebp is pointed at the catcher’s stack frame (so the catch clause can access parameters and local variables).

From within the outer catch block, here is the stack, ebp, and esp:

Stack From Catch Clause

Notice that every time an exception is caught, the linked list of stack frames is truncated: ebp is reset to the stack frame of the catcher, destroying our link to the thrower’s stack.

But there’s useful information between ebp and esp! We just need to search for it. We can find who threw the exception with this simple algorithm:

	For every possible pointer between ebp and esp,
	find the deepest pointer p,
	where p might be a frame pointer.
	(That is, where walking p eventually leads to ebp.)

Or you can just use our implementation.
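
If you’re curious what that search looks like in code, here’s a minimal sketch (it treats ebp and esp as raw addresses and skips the guards against unreadable memory that a real implementation needs):

void* findDeepestFrame(void* ebp, void* esp) {
    typedef unsigned long Address;
    const Address catcher = reinterpret_cast<Address>(ebp);

    // Scan from esp (the deepest address) up toward ebp. The first address
    // whose chain of saved frame pointers reaches ebp is the deepest
    // plausible frame: the one live when the exception was thrown.
    for (Address p = reinterpret_cast<Address>(esp); p < catcher; p += sizeof(void*)) {
        Address frame = p;
        // Saved frame pointers grow toward the top of the stack, so a valid
        // chain strictly increases until it hits the catcher's frame.
        while (frame < catcher) {
            Address next = *reinterpret_cast<Address*>(frame);
            if (next <= frame) {
                break; // not a plausible frame-pointer chain
            }
            frame = next;
        }
        if (frame == catcher) {
            return reinterpret_cast<void*>(p);
        }
    }
    return ebp; // nothing plausible found; fall back to the catcher's frame
}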

With this in mind, let’s rewrite our example’s error handling:

void main_function(/*arguments*/) {
    try {
        try {
            normal_algorithm(); // illustrative placeholder, as before
        }
        catch (const std::exception& e) {
            // that's okay, just swallow and
            // do something else.
            fallback_algorithm();
        }
    }
    catch (const std::exception& e) {
        // oh no! fallback_algorithm() failed.
        // grab a stack trace - including thrower!
        Context ctx; // captures the current ebp and esp
        ctx.ebp = findDeepestFrame(ctx.ebp, ctx.esp);
        // ...record the call stack starting at ctx.ebp...
    }
}

Bingo, fallback_algorithm appears in the stack:


Now we’ll have no problems finding the source of C++ exceptions!

You Won’t Learn This in School: Disabling Kernel Functions in Your Process

Detecting and reporting unhandled exceptions with SetUnhandledExceptionFilter seemed logical, and, in fact, it worked… for a while. Eventually, we started to notice failures that should have been reported as a last-chance exception but weren’t. After much investigation, we discovered that both Direct3D and Flash were installing their own unhandled exception filters! Worse, they were fighting over it, installing their handlers several times per second! In practice, this meant our last-chance crash reports were rarely generated, convincing us our crash metrics were better than they were. (Bad, bad libraries!)

It’s pretty ridiculous that we had to solve this problem, but, as Avery Lee says, “Just because it is not your fault does not mean it is not your problem.”

The obvious solution is to join the fray and call SetUnhandledExceptionFilter every frame, right? I hate implementing solutions with such obvious flaws, so we tried something a bit more reliable: we disable (with code modification) the SetUnhandledExceptionFilter function immediately after installing our own handler. When Direct3D and Flash try to call it, their requests are ignored, leaving our exception handler installed.

Code modification… isn’t that scary? With a bit of knowledge and defensive programming, it’s not that bad. In fact, I’ll show you the code up front:

// If this doesn't make sense, skip the code and come back!

void lockUnhandledExceptionFilter() {
    HMODULE kernel32 = LoadLibraryA("kernel32.dll");

    if (FARPROC gpaSetUnhandledExceptionFilter = GetProcAddress(kernel32, "SetUnhandledExceptionFilter")) {
        unsigned char expected_code[] = {
            0x8B, 0xFF, // mov edi,edi
            0x55,       // push ebp
            0x8B, 0xEC, // mov ebp,esp
        };

        // only replace code we expect
        if (memcmp(expected_code, gpaSetUnhandledExceptionFilter, sizeof(expected_code)) == 0) {
            unsigned char new_code[] = {
                0x33, 0xC0,       // xor eax,eax
                0xC2, 0x04, 0x00, // ret 4
            };

            BOOST_STATIC_ASSERT(sizeof(expected_code) == sizeof(new_code));

            DWORD old_protect;
            if (VirtualProtect(gpaSetUnhandledExceptionFilter, sizeof(new_code), PAGE_EXECUTE_READWRITE, &old_protect)) {
                CopyMemory(gpaSetUnhandledExceptionFilter, new_code, sizeof(new_code));

                DWORD dummy;
                VirtualProtect(gpaSetUnhandledExceptionFilter, sizeof(new_code), old_protect, &dummy);

                FlushInstructionCache(GetCurrentProcess(), gpaSetUnhandledExceptionFilter, sizeof(new_code));
            }
        }
    }
}

If that’s obvious to you, then great: We’re hiring!

Otherwise, here is an overview:

Use GetProcAddress to grab the real address of SetUnhandledExceptionFilter. (If you just type &SetUnhandledExceptionFilter you’ll get the relocatable import thunk, not the actual SetUnhandledExceptionFilter function.)

Most Windows functions begin with five bytes of prologue:

mov edi, edi ; 2 bytes for hotpatching support
push ebp     ; stack frame
mov ebp, esp ; stack frame (con't)

We want to replace those five bytes with return 0;. Remember that __stdcall functions return values in the eax register. We want to replace the above code with:

xor eax, eax ; eax = 0
ret 4        ; pops 4 bytes (arg) and returns

Also five bytes! How convenient! Before we replace the prologue, we verify that the first five bytes match our expectations. (If not, we can’t feel comfortable about the effects of the code replacement.) The VirtualProtect and FlushInstructionCache calls are standard fare for code modification.

After implementing this, it’s worth stepping through the assembly in a debugger to verify that SetUnhandledExceptionFilter no longer has any effect. (If you really enjoy writing unit tests, it’s definitely possible to unit test the desired behavior. I’ll leave that as an exercise for the reader.)
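
For the curious, such a test might look roughly like this (a sketch on my part with made-up helper names, not our actual test): after locking, SetUnhandledExceptionFilter becomes a no-op that always returns NULL, and that is observable.

#include <cassert>
#include <windows.h>

void lockUnhandledExceptionFilter(); // defined above

static LONG WINAPI dummyFilter(EXCEPTION_POINTERS*) {
    return EXCEPTION_CONTINUE_SEARCH;
}

void testSetUnhandledExceptionFilterIsDisabled() {
    lockUnhandledExceptionFilter();

    // The patched function ignores its argument and returns NULL...
    assert(NULL == SetUnhandledExceptionFilter(&dummyFilter));

    // ...whereas an unpatched kernel32 would return &dummyFilter here,
    // since SetUnhandledExceptionFilter normally returns the previous filter.
    assert(NULL == SetUnhandledExceptionFilter(NULL));
}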

Finally, our last-chance exception reporting actually works!

Reporting Crashes in IMVU: C++ Call Stacks

Last time, we talked about including contextual information to help us
actually fix crashes that happen in the field. Minidumps are a great
way to easily save a snapshot of the most important parts of a running
(or crashed) process, but it’s often useful to understand the
low-level mechanics of a C++ call stack (on x86). Given some basic
principles about function calls, we will derive the implementation
of code to walk a call stack.

C++ function call stack entries are stored on the x86 stack, which
grows downward in memory. That is, pushing on the stack subtracts
from the stack pointer. The ESP register points to the
most-recently-written item on the stack; thus, push eax
is equivalent to:

sub esp, 4
mov [esp], eax

Let’s say we’re calling a function:

int __stdcall foo(int x, int y)

The __stdcall
calling convention pushes arguments onto the stack from right to left
and returns the result in the EAX register, so calling
foo(1, 2) generates this code:

push 2
push 1
call foo
; result in eax

If you aren’t familiar with assembly, I know this is a lot to absorb,
but bear with me; we’re almost there. We haven’t seen the
call instruction before. It pushes the current EIP, which is
the address of the instruction after the call (that is, the return
address), onto the stack and then jumps to the target function.
If the return address weren't stored, the called function would
not know where to return when it was done.

The final piece of information we need to construct a C++ call stack is
that functions live in memory, functions have names, and thus sections
of memory have names. If we can get access to a mapping of memory
addresses to function names (say, with the /MAP linker option), and we
can read instruction pointers up the call stack, we can generate a
symbolic stack trace.
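
For example, once the .map file has been parsed into a table of function start addresses and names, symbolizing a return address is just a search for the nearest function that starts at or below it. A rough sketch (illustrative, not IMVU's code):

#include <map>
#include <string>

// Function start address -> function name, parsed from the linker's .map file.
typedef std::map<unsigned long, std::string> SymbolTable;

std::string symbolize(const SymbolTable& symbols, unsigned long return_address) {
    // Find the first function that starts *after* the return address...
    SymbolTable::const_iterator it = symbols.upper_bound(return_address);
    if (it == symbols.begin()) {
        return "<unknown>"; // the address is below every known function
    }
    --it; // ...and step back to the function containing the address.
    return it->second;
}

Because std::map keeps the table sorted by start address, each lookup is logarithmic in the number of functions.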

How do we read the instruction pointers up the call stack?
Unfortunately, just knowing the return address from the current
function is not enough. How do you know the location of the caller’s
caller? Without extra information, you don’t. Fortunately, most
functions have that information in the form of a function prologue:

push ebp
mov ebp, esp

and epilogue:

mov esp, ebp
pop ebp

These bits of code appear at the beginning and end of every function, allowing you
to use the EBP register as the “current stack frame”.
Function arguments are always accessed at positive offsets from EBP,
and locals at negative offsets:

; int foo(int x, int y)
; ...
[EBP+12] = y argument
[EBP+8]  = x argument
[EBP+4]  = return address (set by call instruction)
[EBP]    = previous stack frame
[EBP-4]  = local variable 1
[EBP-8]  = local variable 2
; ...

Look! For any stack frame EBP, the caller’s address is
at [EBP+4] and the previous stack frame is at [EBP].
By dereferencing EBP, we can walk
the call stack, all the way to the top!

#include <vector>
#include <windows.h> // IsBadReadPtr

struct stack_frame {
    stack_frame*  previous;
    unsigned long return_address;
};

std::vector<unsigned long> get_call_stack() {
    std::vector<unsigned long> call_stack;

    stack_frame* current_frame;
    __asm mov current_frame, ebp

    while (!IsBadReadPtr(current_frame, sizeof(stack_frame))) {
        call_stack.push_back(current_frame->return_address);
        current_frame = current_frame->previous;
    }
    return call_stack;
}

// Convert the array of addresses to names with the aforementioned MAP file.

Yay, now we know how to grab a stack trace from any location in the
code. This implementation is not robust, but the concepts are
correct: functions have names, functions live in memory, and we can
determine which memory addresses are on the call stack. Now that you
know how to manually grab a call stack, let Microsoft do the heavy
lifting with the StackWalk64 function.

Next time, we’ll talk about setting up your very own Microsoft Symbol Server so you can
grab accurate function names from every version of your software.

Reporting Crashes in IMVU: Call Stacks and Minidumps

So far, we’ve implemented reporting for Python exceptions that bubble
out of the main loop, C++ exceptions that bubble into Python (and then
out of the main loop), and structured exceptions that bubble into (and
then out of) the main loop. This is a fairly
comprehensive set of failure conditions, but there’s still a big piece
missing from our reporting.

Imagine that you implement this error reporting and have customers try
the new version of your software. You’ll soon have a collection of
crash reports, and one thing will stand out clearly. Without the
context in which crashes happened (call stacks, variable values,
perhaps log files), it’s very hard to determine their cause(s). And
without determining their cause(s), it’s very hard to fix them.

Reporting log files is easy enough: just attach them to the error
report. You may need to deal with privacy concerns or limit the size
of the log files that get uploaded, but those are straightforward
problems to solve.

Because Python has batteries included, grabbing the call stack from a
Python exception is trivial: just take a quick look at the traceback
module.

Structured exceptions are a little harder. The structure of a call
stack on x86 is machine- and sometimes compiler-dependent.
Fortunately, Microsoft provides an API to dump the relevant process
state to a file such that it can be opened in Visual Studio or WinDbg,
which will let you view the stack trace and inspect other data. These
files are called minidumps, and they're pretty small. Just call MiniDumpWriteDump
with the context of the exception and submit the generated file with your crash
report.
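
A call to it might look roughly like this hypothetical writeMiniDumpTo sketch (the dump path parameter is my own invention; error handling and the actual upload are elided, and our real code differs):

#include <windows.h>
#include <dbghelp.h> // link against dbghelp.lib

// Write a minidump describing the given exception to 'path'.
bool writeMiniDumpTo(const wchar_t* path, EXCEPTION_POINTERS* ep) {
    HANDLE file = CreateFileW(path, GENERIC_WRITE, 0, NULL,
                              CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file == INVALID_HANDLE_VALUE) {
        return false;
    }

    MINIDUMP_EXCEPTION_INFORMATION mei;
    mei.ThreadId = GetCurrentThreadId();
    mei.ExceptionPointers = ep;
    mei.ClientPointers = FALSE; // the pointers are valid in this process

    BOOL ok = MiniDumpWriteDump(
        GetCurrentProcess(), GetCurrentProcessId(), file,
        MiniDumpNormal, // a small dump: thread stacks, module list, and so on
        ep ? &mei : NULL,
        NULL, NULL);

    CloseHandle(file);
    return ok != FALSE;
}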

Grabbing a call stack from C++ exceptions is even harder, and maybe
not desired. If you regularly use C++ exceptions for communicating
errors from C++ to Python, it’s probably too expensive to grab a call
stack or write a minidump every single time. However, if you want to
do it anyway, here’s one way.

C++ exceptions are implemented on top of the Windows kernel’s
structured exception machinery. Using the try and
catch statements in your C++ code causes the compiler to
generate SEH code behind the scenes. However, by the time your C++
catch clauses run, the stack has already been unwound. Remember
that SEH has three passes: first it runs filter expressions until it
finds one that can handle the exception; then it unwinds the stack
(destroying any objects allocated on the stack); finally it runs the
actual exception handler. Your C++ exception handler runs as the last stage,
which means the stack has already been unwound, which means you can’t
get an accurate call stack from the exception handler. However, we
can use SEH to grab a call stack at the point where the exception was
thrown, before we handle it…

First, let’s determine the SEH exception code of C++ exceptions
(WARNING, this code is compiler-dependent):

int main() {
    DWORD code;
    __try {
        throw std::exception();
    }
    __except (code = GetExceptionCode(), EXCEPTION_EXECUTE_HANDLER) {
        printf("%X\n", code);
    }
}

Once we have that, we can write our exception-catching function like this:

// CPP_EXCEPTION_CODE is the SEH exception code printed by the program above.

void throw_cpp_exception() {
    throw std::runtime_error("hi");
}

bool writeMiniDump(const EXCEPTION_POINTERS* ep) {
    // ...
    return true;
}

void catch_seh_exception() {
    __try {
        throw_cpp_exception();
    }
    __except (
        (CPP_EXCEPTION_CODE == GetExceptionCode()) && writeMiniDump(GetExceptionInformation()),
        EXCEPTION_CONTINUE_SEARCH // decline, so the C++ catch in main() still handles it
    ) {
    }
}

int main() {
    try {
        catch_seh_exception();
    }
    catch (const std::exception& e) {
        printf("%s\n", e.what());
    }
}

Now we’ve got call stacks and program state for C++, SEH, and Python
exceptions, which makes fixing reported crashes dramatically easier.

Next time I’ll go into more detail about how C++ stack traces work,
and we’ll see if we can grab them more efficiently.

A brief introduction to modern x86 assembly language

Several people have personally requested that I give a brief
introduction to modern x86 (sometimes called IA32) assembly language.
For simplicity’s sake, I’ll stick with the 32-bit version with a flat
memory model. AMD64 (sometimes called x64) just isn't as popular as
x86 yet, so this seems safe.

For some reason, there’s a mythos around assembly language. People associate it with bearded gurus, assuming only ninjas can program in it, when, in principle, assembly language
is one of the simplest programming languages there is. Any complexity
stems from a particular architecture’s oddities, and even though x86 is one of the
oddest of them all, I’ll show you that it can be easy to read and write.

First, I’ll describe the basic architecture. When programming in assembly,
there are three main concepts:

Instructions are the individual commands that tell the
computer to perform an operation. These include instructions for
adding, multiplying, comparing, copying, performing bit-wise operations,
accessing memory, and communicating with external devices. The
computer executes instructions sequentially.

Registers are where temporary values go. There is a
small, fixed set of registers available for use. Since there aren’t many registers, nothing stays in
them for very long, as they are soon needed for other purposes.

Memory is where longer-lived data goes. It’s a
giant, flat array of bytes (8-bit quantities). It’s much slower to
access than registers, but there’s a lot of it.

Before I get into some examples, let me describe the registers
available on x86. There are only 8 general-purpose registers, each of
which is 32 bits wide. They are:

  • EAX
  • EBX
  • ECX
  • EDX
  • ESI
  • EDI
  • EBP – used when accessing local variables or function arguments
  • ESP – used when calling functions

On x86, most instructions have two operands, a destination and a
source. For example, let’s add two and three:

mov eax, 2   ; eax = 2
mov ebx, 3   ; ebx = 3
add eax, ebx ; eax = 2 + 3 = 5

add eax, ebx adds the values in registers eax and ebx, and stores
the result back in eax. (BTW, this is one of the oddities of x86.
Other modern architectures differentiate between destination and
source operands, which would look like add eax, ebx, ecx
meaning eax = ebx + ecx. On x86, the first operand is read and written in the same instruction.)

mov is the data movement instruction. It copies values
from one register to another, or from a constant to a register, or
from memory to a register, or from a register to memory.

Speaking of memory, let’s say we want to add 2 and 3, storing the
result at address 32. Since the result of the addition is 32 bits, the result will
actually use addresses 32, 33, 34, and 35. Remember, memory is
indexed in bytes.

mov eax, 2
mov ebx, 3
add eax, ebx
mov edi, 32
mov [edi], eax ; copies 5 to address 32 in memory

What about loading data from memory? (Reads from memory are called
loads. Writes are called stores.) Let’s write a program that copies
1000 4-byte quantities (4000 bytes) from address 10000 to address 20000:

mov esi, 10000 ; by convention, esi is often used as the 'source' pointer
mov edi, 20000 ; similarly, edi often means 'destination' pointer
mov ecx, 1000 ; let's copy 1000 32-bit items

begin_loop:
mov eax, [esi] ; load from source
mov [edi], eax ; store to destination
add esi, 4
add edi, 4

sub ecx, 1 ; ecx -= 1
cmp ecx, 0 ; is ecx 0?

; if ecx does not equal 0, jump to the beginning of the loop
jne begin_loop
; otherwise, we're done

This is how the C memcpy function works. Not so bad, is
it? For reference, this is what our x86 code would look like in C:

int* src = (int*)10000;
int* dest = (int*)20000;
int count = 1000;
while (count--) {
    *dest++ = *src++;
}

From here, all it takes is a good instruction reference, some
memorization, and a bit of practice. x86 is full
of arcane details (it’s 30 years old!), but once you’ve got the basic
concepts down, you can mostly ignore them. I hope I’ve shown you that writing x86
is easy. Perhaps more importantly, I hope you won’t be intimidated the next time Visual Studio
shows you the assembly for your program. Understanding how the machine is executing your code
can be invaluable when debugging.

Latency vs. Throughput

This is my last post about processors and performance, I swear! Plus,
my wrists are starting to hurt from this bloodpact thing (I've been
diagnosed with RSI), so I think this will be a light one.

As I’ve discussed previously,
modern desktop processors work really hard to exploit the inherent
parallelism in your programs. This is called instruction-level
parallelism, and it is one of the techniques processors use to get
more performance out of slower clock rates (along with data-level
parallelism (SIMD) and multiple cores (MIMD)*). Previously, I waved my
hands a bit and said “The processor makes independent operations run
in parallel.” Now I’m going to teach you how to count cycles in the presence of latency and parallelism.

Traditionally, when analyzing the cost of an algorithm, you would
simply count the operations involved, sum their costs in cycles, and
call it a day. These days, it’s not that easy. Instructions have two
costs: dependency chain latency and reciprocal throughput.

Reciprocal throughput is simply the reciprocal of the maximum
throughput of a particular instruction. Throughput is measured in
instructions/cycle, so reciprocal throughput is cycles/instruction.

OK, that sounds like the way we’ve always measured performance. So
what’s dependency chain latency? When the results of a previous
calculation are needed for another calculation, you have a dependency
chain. In a dependency chain, you measure the cost of an instruction
by its latency, not its reciprocal throughput. Remember that our
processors are working really hard to exploit parallelism in our code.
When there is no instruction-level parallelism available, we pay each instruction's full latency rather than its reciprocal throughput.

Let’s go back to our sum 10000 numbers example, but unroll it a bit:

float array[10000];
float sum = 0.0f;
for (int i = 0; i < 10000; i += 8) {
    sum += array[i+0];
    sum += array[i+1];
    sum += array[i+2];
    sum += array[i+3];
    sum += array[i+4];
    sum += array[i+5];
    sum += array[i+6];
    sum += array[i+7];
}
return sum;

In x86:

xor ecx, ecx     ; ecx  = i   = 0
mov esi, array
xorps xmm0, xmm0 ; xmm0 = sum = 0.0

begin:
addss xmm0, [esi+0]
addss xmm0, [esi+4]
addss xmm0, [esi+8]
addss xmm0, [esi+12]
addss xmm0, [esi+16]
addss xmm0, [esi+20]
addss xmm0, [esi+24]
addss xmm0, [esi+28]

add esi, 32
add ecx, 8 ; i += 8
cmp ecx, 10000
jl begin ; if ecx < 10000, goto begin

; xmm0 = total sum

Since each addition to sum in the loop depends on the previous
addition, these instructions are a dependency chain. On a modern
processor, let’s say the reciprocal throughput of addss is 1 cycle.
However, the minimum latency is 4 cycles. Since every addition
depends on the previous one, each addition costs 4 cycles, and the
loop runs at a quarter of the adder's peak rate.

As before, let’s try summing with four temporary sums:

xor ecx, ecx     ; ecx  = i    = 0
mov esi, array
xorps xmm0, xmm0 ; xmm0 = sum1 = 0.0
xorps xmm1, xmm1 ; xmm1 = sum2 = 0.0
xorps xmm2, xmm2 ; xmm2 = sum3 = 0.0
xorps xmm3, xmm3 ; xmm3 = sum4 = 0.0

begin: ; top of the loop

addss xmm0, [esi+0]
addss xmm1, [esi+4]
addss xmm2, [esi+8]
addss xmm3, [esi+12]
addss xmm0, [esi+16]
addss xmm1, [esi+20]
addss xmm2, [esi+24]
addss xmm3, [esi+28]

add esi, 32
add ecx, 8 ; i += 8
cmp ecx, 10000
jl begin ; if ecx < 10000, goto begin

; accumulate sums
addss xmm0, xmm1
addss xmm2, xmm3 ; this instruction happens in parallel with the one above
addss xmm0, xmm2

Here, the additions in the loop that depend on each other are 4 cycles apart,
meaning the minimum latency is no longer a problem. This lets us hit
the maximum addition rate of one per cycle.

Removing dependency chains is a critical part of optimizing on today’s
processors. The Core 2 processor has six execution units,
three of which are fully 128-bit SIMD ALUs. If you can restructure
your algorithm so calculations happen independently, you can take
advantage of all of them. (And if you can pull off making full use of
the Core 2’s ALU capacity, you win.)

* BTW, it’s sort of unrelated, but I couldn’t help but link this article.
Greg Pfister has an interesting comparison and history of SIMD
vs. MIMD here. Ignore the terminology blathering and focus on the history of and influences on SIMD and MIMD over time.

A Simple Introduction to Superscalar, Out-of-Order Processors

Since the Pentium Pro/Pentium 2, we have all been using heavily superscalar, out-of-order processors. I’d heard these terms a million times, but didn’t know what they meant until I read The Pentium Chronicles: The People, Passion, and Politics Behind Intel’s Landmark Chips (Practitioners). (BTW, if you love processors, the history of technology, and the fascinating dynamics at a company like Intel, that book is fantastic.)

Superscalar basically means “more than one instruction per clock cycle”, implying that a superscalar processor can run code faster than its clock speed would suggest. Indeed, a 3 GHz Pentium 4 can retire 4 independent integer additions per clock cycle, which is 12 billion integer additions per second!

Out-of-order means just that – the processor looks at your code at runtime and executes it in parallel if it can. For example, imagine this code:

// three independent, non-null pointers
int* p; int* q; int* r;
const int flag1, flag2, flag3;

if (*p & flag1) {
    if (*q & flag2) {
        if (*r & flag3) {
            do_work();
        }
    }
}

The processor can’t assume that q is a valid pointer until it checks p, and the same for r and q. Accessing main memory costs ~200 cycles, so if none of the pointers point to cached memory, you just spent 600 cycles determining whether to do_work(). This is called a “dependency chain”, where the result of a later calculation depends on the previous. But what if you know that p, q, and r will all be valid pointers? You can rewrite as:

const int x = *p;
const int y = *q;
const int z = *r;
if ((x & flag1) && (y & flag2) && (z & flag3)) {
    do_work();
}

Now, the processor knows that all of those memory fetches are independent, so it runs them in parallel. Then, it runs the ANDs in parallel too, since they’re independent. Your 600-cycle check just became 200 cycles.

Similarly, let’s say you want to add 10,000 numbers.

int sum = 0;
for (int i = 0; i < 10000; ++i) {
    sum += array[i];
}
return sum;

Let's assume the loop overhead and memory access are free, and each addition takes one cycle. Since each addition depends on the previous value of sum, the additions must be executed serially, taking 10000 cycles. However, since you know that addition is associative, you can sum with two variables:

int sum1 = 0;
int sum2 = 0;
for (int i = 0; i < 10000; i += 2) {
    sum1 += array[i];
    sum2 += array[i+1];
}
return sum1 + sum2;

Now you have two independent additions, which can be executed in parallel! The loop takes 5000 cycles now. If you independently sum in sum1, sum2, sum3, and sum4, the loop will take 2500 cycles. And so on, until you’ve hit the IPC (instructions per cycle) limit on your processor. If you’re making effective use of your SIMD units, you’d be surprised at how much work you can do in parallel…
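
For illustration, the four-accumulator version might look like this (a sketch, assuming the element count is a multiple of four):

int sum_four_way(const int* array, int count) {
    int sum1 = 0, sum2 = 0, sum3 = 0, sum4 = 0;
    for (int i = 0; i < count; i += 4) {
        sum1 += array[i + 0];
        sum2 += array[i + 1];
        sum3 += array[i + 2];
        sum4 += array[i + 3];
    }
    // The four chains are independent, so the processor can run them in
    // parallel; they are only combined at the very end.
    return sum1 + sum2 + sum3 + sum4;
}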

And that’s what an out-of-order, superscalar processor can do for you!

The Real Benefit of Inlining Functions (or: Floating Point Calling Conventions)

My mental model for the performance benefit of inlining a function call was:

  1. code size increases
  2. the overhead of the call, including argument and return value marshalling, is eliminated
  3. the compiler knows more information, so it can generate better code

I had dramatically underestimated the value of #3, so this entry is an attempt to give a concrete example of how inlining can help.

As alluded to in my previous entry, you can’t just leave the floating point state willy-nilly across function calls. Every function should be able to make full use of the floating point register stack, which doesn’t work if somebody has left stale values on it. In general, these rules are called calling conventions. Agner Fog has excellent coverage of the topic, as usual.

Anyway, back to inlining. The specifics aren’t that important, but we had a really simple function in the IMVU client which continued to show up in the profiles. It looked something like this:

std::vector<float> array;

float function() {
    float sum = 0.0f;
    for (size_t i = 0; i < array.size(); ++i) {
        sum += array[i];
    }
    return sum;
}

This function never operated on very large lists, and it also wasn’t called very often, so why was it consistently in the profiles? A peek at the assembly showed (again, something like):

fstp dword ptr [sum] ; sum = 0.0

xor ecx, ecx ; i = 0
jmp cmp

loop:
push ecx
call array.operator[]

fadd [sum] ; return value of operator[] in ST(0)
fstp [sum] ; why the load and the store??

add ecx, 1

cmp:
call array.size()
cmp ecx, eax
jb loop ; continue if i < return value

fld [sum] ; return value

First of all, why all of the function calls? Shouldn't std::vector be inlined? But more importantly, why does the compiler keep spilling sum out to the stack? Surely it could keep the sum in a floating point register for the entire calculation.

This is when I realized: due to the calling convention requirements on function calls, the floating point stack must be empty upon entry into the function. The stack is in L1 cache, but still, that's three cycles per access, plus a bunch of pointless load and store uops.

Now, I actually know why std::vector isn't inlined. For faster bug detection, we compile and ship with bounds checking enabled on STL containers and iterators. But in this particular situation, the bounds checking isn't helpful, since we're iterating over the entire container. I rewrote the function as:

std::vector<float> array;

float function() {
    const float* p = &array[0];
    size_t count = array.size();
    float sum = 0.0f;
    while (count--) {
        sum += *p++;
    }
    return sum;
}

And the compiler generated the much more reasonable:

call array.size()
mov ecx, eax ; ecx = count

push 0
call array.operator[]
mov esi, eax ; esi = p

fldz ; ST(0) = sum

jmp cmp

loop:
fadd [esi] ; sum += *p

add esi, 4 ; p++
sub ecx, 1 ; count--

cmp:
cmp ecx, 0
jne loop

; return ST(0)

This is the real benefit of inlining. Modern compilers are awesome at making nearly-optimal use of the CPU, but only when they have enough information. Inlining functions gives them that information.

p.s. I apologize if my pseudo-assembly had mistakes. I wrote it from memory.

#IND and #QNaN with /fp:fast

The other day Timothy and I were optimizing some floating-point-intensive lighting code. Looking at the generated code, I realized we weren’t compiling with /fp:fast. Due to the wonky state of floating point on 32-bit x86, Visual C++ frequently stores temporary results of floating point calculations to the stack and then reloads them, for the sake of consistent results.

See, the problem is that the floating point registers on x86 are 80 bits wide, so if you compile “float x, y, z, w; w = (x + y) * z” as…

fld [x]  ; ST0 = x
fadd [y] ; ST0 = x + y
fmul [z] ; ST0 = (x + y) * z
fstp [w] ; w = (x + y) * z

… the temporary results are always stored in ST0 with 80 bits of precision. However, since floats only have 32 bits of precision, you can wind up with different results depending on compilers, optimization settings, register allocation, etc. We often had problems like this at VRAC. Some poor engineering student would send out a panicked e-mail at 9:00 p.m. asking why his program started producing different results in release mode than it did in debug mode.

Thus, Visual C++ takes a more cautious approach. By default, it stores float intermediates back to memory to truncate them to 32 bits of precision:

fld [x]
fadd [y]
fstp [temp] ; truncate precision
fld [temp]
fmul [z]
fstp [w]

Tiny differences in precision don’t matter in IMVU, so enabling /fp:fast saved 50-100 CPU cycles per vertex in our vertex lighting loop. However, with this option turned on, our automated tests started failing with crazy #IND and #QNAN errors!

After some investigation, we discovered that our 4×4 matrix inversion routine (which calculates several 2×2 and 3×3 determinants) was using all 8 floating point registers with /fp:fast enabled. The x87 registers are stored in a “stack”, where ST0 is the top of the stack and STi is the i’th entry. Load operations like fld, fld1, and fldz push entries on the stack. Arithmetic operations like fadd and fmul operate on the top of the stack with the value in memory, storing the result back on the stack.

But what if the x87 register stack overflows? In this case, an “indefinite” NAN is loaded instead of the value you requested, indicating that you have lost information. (The data at the bottom of the stack was lost.) Here’s an example:

fldz  ; ST0 = 0
fld1  ; ST0 = 1, ST1 = 0
fldpi ; ST0 = pi, ST1 = 1, ST2 = 0
fldz
fldz
fldz
fldz
fldz  ; ST0-ST4 = 0, ST5 = pi, ST6 = 1, ST7 = 0 (all eight registers in use)
fldz  ; ST0 = IND! (the ninth push overflows the stack)

Woops, there’s a bug in your code! You shouldn’t overflow the x87 register stack, so the processor has given you IND.

Indeed, this is what happened in our matrix inversion routine. But why?

Using a debugger, we determined that the x87 stack contained one value at the start of the function. Moreover, it contained a value at the start of the test! Something was fishy. Somebody was leaving the x87 stack dirty, and we needed to find out who.

void verify_x87_stack_empty() {
    unsigned z[8];
    __asm {
        // Push eight zeros; this only succeeds if the stack was empty.
        // Any overflowed load produces the indefinite QNaN instead of 0.0.
        fldz
        fldz
        fldz
        fldz
        fldz
        fldz
        fldz
        fldz
        // Pop them all back off into z.
        fstp dword ptr [z+0x00]
        fstp dword ptr [z+0x04]
        fstp dword ptr [z+0x08]
        fstp dword ptr [z+0x0c]
        fstp dword ptr [z+0x10]
        fstp dword ptr [z+0x14]
        fstp dword ptr [z+0x18]
        fstp dword ptr [z+0x1c]
    }

    // Verify bit patterns. 0 = 0.0
    for (unsigned i = 0; i < 8; ++i) {
        CHECK_EQUAL(z[i], 0);
    }
}

The previous function, called before and after every test, discovered the culprit: we had a test that intentionally called printf() and frexp() with NaN values, which had the side effect of leaving the floating point stack in an unpredictable state.

Adding __asm emms to the end of the test fixed our problem: thereafter, /fp:fast worked wonderfully. Case closed.