IMVU Crash Reporting: Stalls and Deadlocks

By mid-2006, we’d primarily focused on access violations and unhandled exceptions, the explosive application failures. After extensive effort, we got our client’s crash rate down to about 2%; that is, 2% of all sessions ended in a crash.* Still the customers cried “Fix the crashes!”

It turns out that when a customer says “crash” they mean “it stopped doing what I wanted”, but engineers hear “the program threw an exception or caused an access violation”. Thus, to the customer, crash can mean:

  • the application was unresponsive for a period of time
  • the UI failed to load, making the client unusable
  • the application has been disconnected from the server

In short, any time the customer cannot make progress and it’s not (perceived to be) their fault, the application has crashed.

OK, we’ve got our work cut out for us… Let’s start by considering deadlocks and stalls.

First, some terminology: in computer science, a deadlock is a situation where two threads or processes are waiting for each other, so neither makes progress. That definition is a bit academic for our purposes. Let’s redefine deadlock as any situation where the program becomes unresponsive for an unreasonable length of time. This definition includes livelock, slow operations without progress indication, and network (or disk!) I/O that blocks the program from responding to input.

It actually doesn’t matter whether the program will eventually respond to input. People get impatient quickly. You’ve only got a few seconds to respond to the customer’s commands.

Detecting Deadlocks in C++

The embedded programming world has a “watchdog timer” concept. Your program is responsible for periodically pinging the watchdog, and if for several seconds you don’t, the watchdog restarts your program and reports debugging information.

Implementing this in C++ is straightforward:

  • Start a watchdog thread that wakes up every few seconds to check that the program is still responding to events.
  • Add a heartbeat to your main event loop that frequently pings the watchdog.
  • If the watchdog timer detects the program is unresponsive, record stack traces and log files, then report the failure.

IMVU’s CallStack API allows us to grab the C++ call stack of an arbitrary thread, so, if the main thread is unresponsive, we report its current stack every couple of seconds. This is often all that’s needed to find and fix the deadlock.
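
Here is a minimal sketch of that pattern, using plain Win32 (<windows.h>). The names (pingWatchdog, reportHang, startWatchdog) and the five-second interval are illustrative placeholders rather than IMVU’s actual implementation, which reports the unresponsive thread’s stack via the CallStack API:

// Incremented by the main thread, watched by the watchdog thread.
volatile LONG g_heartbeat = 0;

// Call once per iteration of the main event loop.
void pingWatchdog() {
    InterlockedIncrement(&g_heartbeat);
}

// Placeholder for the real reporting code (stack traces, log files, ...).
void reportHang() {
    OutputDebugStringA("main thread unresponsive\n");
}

DWORD WINAPI watchdogThread(void*) {
    LONG lastSeen = g_heartbeat;
    for (;;) {
        Sleep(5000); // wake up every few seconds
        LONG current = g_heartbeat;
        if (current == lastSeen) {
            // No heartbeat since the last check: the main loop is stuck.
            reportHang();
        }
        lastSeen = current;
    }
}

void startWatchdog() {
    CreateThread(0, 0, watchdogThread, 0, 0, 0);
}

Anything heavier than bumping a counter in pingWatchdog defeats the purpose; the expensive work happens on the watchdog thread, and only after a stall is detected.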

Detecting Deadlocks in Python

In Python, we can take the same approach as above:

  1. Start a watchdog thread.
  2. Ping the Python watchdog thread in your main loop.
  3. If the watchdog detects that you’re unresponsive, record the main thread’s Python stack (this time with sys._current_frames) and report it.

Python’s global interpreter lock (GIL) can throw a wrench in this plan. If one thread enters an infinite loop while holding the GIL (say, in a native extension), the watchdog thread will never wake and so cannot report a deadlock. In practice, this isn’t a problem, because the C++ deadlock detector will notice and report the deadlock. Plus, most common deadlocks are caused by calls that release the GIL: threading.Lock.acquire and so on.

It might help to think of the Python deadlock detector as a fallback that, if successful, adds richer information to your deadlock reports. If it failed, whatever. The C++ deadlock detector is probably enough to diagnose and fix the problem.

What did we learn?

It turned out the IMVU client had several bugs where we blocked the main thread on the network, sometimes for up to 30 seconds. By that point, most users just clicked the close box [X] and terminated the process. Oops.

In addition, the deadlock detectors pointed out places where we were doing too much work in between message pumps. For example, loading some assets into the 3D scene might nominally take 200ms. On a computer with 256 MB of RAM, though, the system might start thrashing, the same load might take 5 seconds, and it would be reported as a “deadlock”. The solution was to reduce the program’s working set and to bite off smaller chunks of work between pumps.

I don’t recall seeing many “computer science” deadlocks, but these watchdogs were invaluable in tracking down important failure conditions in the IMVU client.

Next time, we’ll improve the accuracy of our crash metrics and answer the question “How do you know your metrics are valid?”

* Median session length is a more useful reliability metric. It’s possible to fix crashes and see no change in your percentage of failed sessions, if fixing crashes simply causes sessions to become longer.

Reporting Crashes in IMVU: Who threw that C++ exception?

It’s not often that I get to write about recent work. Most of the techniques in this series were implemented at IMVU years ago. A few weeks ago, however, a common C++ exception (tr1::bad_weak_ptr) started intermittently causing crashes in the wild. This exception can be thrown in a variety of circumstances, so we had no clue which code was problematic.

We could have modified tr1::bad_weak_ptr so its constructor fetched a CallStack and returned it from tr1::bad_weak_ptr::what(), but fetching a CallStack is not terribly cheap, especially in such a frequently-thrown-and-caught exception. Ideally, we’d only grab a stack after we’ve determined it’s a crash (in the top-level crash handler).

Allow me to illustrate:

void main_function(/*arguments*/) {
    try {
        try {
            // We don't want to grab the call stack here, because
            // we'll catch the exception soon.
            attempt_algorithm(/*arguments*/); // placeholder for the real work
        }
        catch (const std::exception& e) {
            // Yup, exception is fine.  Just swallow and
            // do something else.
            fallback_algorithm(/*arguments*/);
        }
    }
    catch (const std::exception& e) {
        // Oh no! fallback_algorithm() failed.
        // Grab a stack trace now.
    }
}

Almost! Unfortunately, the call stack generated in the catch clause doesn’t contain fallback_algorithm. It starts with main_function, because the stack has already been unwound by the time the catch clause runs.

Remember the structure of the stack:

Example Stack

We can use the ebp register, which points to the current stack frame, to walk and record the current call stack. [ebp+4] is the caller’s address, [[ebp]+4] is the caller’s caller, [[[ebp]]+4] is the caller’s caller’s caller, and so on.

What can we do with this information? Slava Oks at Microsoft gives the clues we need. When you type throw MyException(), a temporary MyException object is constructed at the bottom of the stack and passed into the catch clause by reference or by value (as a copy deeper on the stack).

Before the catch clause runs, objects on the stack between the thrower and the catcher are destructed, and ebp is pointed at the catcher’s stack frame (so the catch clause can access parameters and local variables).

From within the outer catch block, here is the stack, ebp, and esp:

Stack From Catch Clause

Notice that every time an exception is caught, the linked list of stack frames is truncated: ebp is reset to the stack frame of the catcher, destroying our link to the thrower’s stack.

But there’s useful information between ebp and esp! We just need to search for it. We can find who threw the exception with this simple algorithm:

	For every possible pointer between ebp and esp,
	find the deepest pointer p,
	where p might be a frame pointer.
	(That is, where walking p eventually leads to ebp.)

Or you can just use our implementation.
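
For illustration, here is a rough sketch of that search on 32-bit x86, using plain Win32 (<windows.h>) for the memory check. StackFrame, leadsTo, and the depth limit are my own names and simplifications, not the real implementation’s:

struct StackFrame {
    StackFrame* previous;
    void*       return_address;
};

// Does walking candidate's frame chain eventually reach the catcher's ebp?
bool leadsTo(StackFrame* candidate, StackFrame* ebp, int maxDepth = 64) {
    while (candidate && maxDepth--) {
        if (candidate == ebp) {
            return true;
        }
        if (IsBadReadPtr(candidate, sizeof(StackFrame))) {
            return false;
        }
        candidate = candidate->previous;
    }
    return false;
}

// Scan [esp, ebp) for the deepest value that behaves like a frame pointer.
StackFrame* findDeepestFrame(StackFrame* ebp, void** esp) {
    for (void** p = esp; p < reinterpret_cast<void**>(ebp); ++p) {
        StackFrame* candidate = static_cast<StackFrame*>(*p);
        // Frames live above the slot that points at them and must chain to ebp.
        if (candidate > reinterpret_cast<StackFrame*>(p) && leadsTo(candidate, ebp)) {
            return candidate; // the first hit is the deepest (closest to esp)
        }
    }
    return ebp; // nothing plausible found; fall back to the catcher's frame
}

In the rewritten handler below, ctx.ebp and ctx.esp would be passed (suitably cast) as the ebp and esp arguments.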

With this in mind, let’s rewrite our example’s error handling:

void main_function(/*arguments*/) {
    try {
        try {
            attempt_algorithm(/*arguments*/); // placeholder for the real work
        }
        catch (const std::exception& e) {
            // that's okay, just swallow and
            // do something else.
            fallback_algorithm(/*arguments*/);
        }
    }
    catch (const std::exception& e) {
        // oh no! fallback_algorithm() failed.
        // grab a stack trace - including thrower!
        Context ctx; // assumed to capture the current ebp and esp
        ctx.ebp = findDeepestFrame(ctx.ebp, ctx.esp);
        // ... build and report the CallStack from ctx ...
    }
}

Bingo: fallback_algorithm now appears in the reported stack.


Now we’ll have no problems finding the source of C++ exceptions!

IMVU’s CallStack API Now Open Source!

I’m proud to announce that IMVU has open-sourced its C++ CallStack API! It’s available under the MIT license at our SourceForge project, where you can browse the code.

CallStack is a simple API for recording and displaying C++ call stacks on 32-bit Windows. To display the call stack at the current location:

printf("%s\n", CallStack::here().asString().c_str());

To grab a CallStack from an arbitrary thread:

HANDLE other_thread_handle = ...;
CallStack other_thread(other_thread_handle);

From a structured exception:

Context ctx;
CallStack cs;
__try {
	// broken code
}
__except (
	ctx = *(GetExceptionInformation()->ContextRecord),
	cs = CallStack(ctx), // assumes CallStack can be built from a captured Context
	EXCEPTION_EXECUTE_HANDLER
) {
	// display cs.asString()
}

At first, the format of CallStack.asString() is a bit confusing, but with your symbol server it contains everything necessary to generate a symbolic call stack, including file names and line numbers.

Each line of the CallStack.asString() output is formatted module_name#module_hash!offset. module_name is the name of the DLL or EXE in which the function lives. module_hash is a unique hash that identifies a build of a particular module. offset is the offset of the line of code in bytes from the start of the module. With this information, you can look up a function name and line number for each entry in a call stack.

Fortunately, we have a tool that automates this process! Running it with a call stack on the clipboard produces output like this:

	...t\python-2.5.1-src\objects\abstract.c (1860): PyObject_Call
	...0\libs\python\src\object\function.cpp ( 614): function_call
	...\boost\function\function_template.hpp ( 132): boost::detail::function::function_obj_invoker2<boost::_bi::bind_t<bool,boost::python::detail::translate_exception<IMVUError,void (__cdecl*)(IMVUError const &)>,boost::_bi::list3<boost::arg<1>,boost::arg<2>,boost::_bi::value<void (__cdecl*)(IMVUError const

That last function name is pretty epic (as are most Boost or C++ function names), but notice that the call stack has accurate file names and line numbers.

The astute reader might ask “Don’t minidumps contain stack traces too?” The answer is yes, but minidumps are often inconvenient. Consider the common case:

  1. Open crash report
  2. Download mini.dmp to the desktop
  3. Open mini.dmp in Visual Studio
  4. Press F11
  5. Open the call stack debug window if it’s not open

With CallStack, we can shorten that to:

  1. Open crash report
  2. Copy the call stack
  3. Run the lookup tool

Also, for reasons I don’t understand, sometimes Visual Studio fails to produce an informative stack when CallStack succeeds.

CallStack is a handy tool for debugging crashes from the wild, and I’m happy that we were able to make it available.

IMVU Crash Reporting: Plugging the VC++ Runtime’s Escape Hatches

Modern software is built on top of piles of third-party components. The holy grail of reusable code has manifested in the form of an open source ecosystem. That said, in a startup, you rarely have time to audit the third-party components you use. These third-party components might be ridiculous enough to call abort() on error. It may sound scary, but with a fixed amount of work, you can turn these calls into reported structured exceptions.

Unfortunately, the Visual C++ Runtime provides several functions that abnormally terminate the process without running our crash handlers. As a bonus, they usually include user-incomprehensible dialog boxes. Let’s see:

  • abort()
  • “pure virtual” method calls
  • throwing an exception from a destructor during unwind
  • stack buffer overflow (with /GS enabled)
  • standard C++ library index/iterator error with checked iterators enabled

Since you can never prove that you’ve implemented crash reporting for every way a third-party component can bypass it, I’m just going to cover the ones we’ve implemented:

abort()

Result of calling abort()

Turning abort() into a structured exception is pretty straightforward. A quick read of the CRT source shows that abort() runs SIGABRT’s installed signal handler. It’s easy enough to install a custom handler that raises a structured exception:

void __cdecl onSignalAbort(int code) {
    // It's possible that this signal handler will get called twice
    // in a single execution of the application.  (On multiple threads,
    // for example.)  Since raise() resets the signal handler, put it back.
    signal(SIGABRT, onSignalAbort);

    RaiseException(EXC_SIGNAL_ABORT, 0, 0, 0);
}

// at program start:
signal(SIGABRT, onSignalAbort);

“Pure Virtual” Method Calls

Pure virtual function call

Ever see a program fail with that useless “pure virtual function call” error message? This happens when a base class’s constructor calls (usually indirectly) a pure virtual method that the derived class implements. Since base class constructors run before derived class constructors, the object’s vtable still points at the base class’s vtable, whose pure virtual slots reference _purecall, a function normally defined by the CRT. _purecall() aborts the process, sidestepping our crash reporting. Code might better elucidate this situation:

struct Base;
void foo(Base* b);

struct Base {
    Base() {
        foo(this); // indirectly calls pure() before Derived's constructor runs
    }
    virtual void pure() = 0;
};

struct Derived : public Base {
    void pure() { }
};

void foo(Base* b) {
    b->pure();
}

Derived d; // boom

The fix is simple: just define a _purecall that shadows the CRT implementation:

int __cdecl _purecall() {
    RaiseException(EXC_PURE_CALL, 0, 0, 0);
    return 0;
}

Throwing an Exception from a Destructor During Unwind

C++ is aggressive about making sure you don’t throw an exception while another exception is in the air. If you do, its default behavior is to terminate your process. From MSDN: “If a matching handler is still not found, or if an exception occurs while unwinding, but before the handler gets control, the predefined run-time function terminate is called. If an exception occurs after throwing the exception, but before the unwind begins, terminate is called.”

To convert calls to terminate() (and unexpected(), for completeness) into structured exceptions, override the terminate handler with set_terminate (and set_unexpected):

void onTerminate() {
    RaiseException(EXC_TERMINATE, 0, 0, 0);
}
void onUnexpected() {
    RaiseException(EXC_UNEXPECTED, 0, 0, 0);
}

// at program start:
set_terminate(onTerminate);
set_unexpected(onUnexpected);

Standard C++ Library Index/Iterator Error with Checked Iterators Enabled

The IMVU client is compiled with _SECURE_SCL enabled. Increased reliability from knowing exactly where failures occur is more important than the very minor performance hit of validating all iterator accesses.

There are two ways to convert invalid iterator uses into reported exceptions. The easiest is compiling with _SECURE_SCL_THROWS=1. Otherwise, just install your own invalid_parameter handler with _set_invalid_parameter_handler.
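
As a sketch of the second option: the handler signature below is the CRT’s (declared in <stdlib.h>); EXC_INVALID_PARAMETER is a made-up exception code in the spirit of the other EXC_* codes in this article, and the parameters are ignored because they’re typically only populated in debug builds.

void __cdecl onInvalidParameter(
    const wchar_t* /*expression*/,
    const wchar_t* /*function*/,
    const wchar_t* /*file*/,
    unsigned int   /*line*/,
    uintptr_t      /*reserved*/
) {
    // EXC_INVALID_PARAMETER is our own code, like EXC_SIGNAL_ABORT above.
    RaiseException(EXC_INVALID_PARAMETER, 0, 0, 0);
}

// at program start:
_set_invalid_parameter_handler(onInvalidParameter);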

Stack Buffer Overflow (with /GS enabled)

By default, Visual C++ generates code that detects and reports stack buffer overruns. This prevents a common class of application security holes. Unfortunately, the stock implementation of this feature does not allow you to install your own handler, which means you can’t report any buffer overruns.

Again, we can shadow a CRT function to handle these failures. From C:\Program Files\Microsoft Visual Studio 8\VC\crt\src\gs_report.c, copy the __report_gsfailure function into your application. (You did install the CRT source, didn’t you?) Instead of calling UnhandledExceptionFilter at the bottom of __report_gsfailure, call your own last-chance handler or write a minidump.

Testing Crash Reporting

Writing tests for the above reporting mechanisms is super fun. Take everything you’re told not to do and implement it. I recommend adding these crashes to your UI somewhere so you can directly observe what happens when they occur in your application. Here is our crash menu:

Crash Menu

These types of failures are rare, but when we hit them, I’m glad we implemented this reporting.

You Won’t Learn This in School: Disabling Kernel Functions in Your Process

Detecting and reporting unhandled exceptions with SetUnhandledExceptionFilter seemed logical, and, in fact, it worked… for a while. Eventually, we started to notice failures that should have been reported as a last-chance exception but weren’t. After much investigation, we discovered that both Direct3D and Flash were installing their own unhandled exception filters! Worse, they were fighting over it, installing their handlers several times per second! In practice, this meant our last-chance crash reports were rarely generated, convincing us our crash metrics were better than they were. (Bad, bad libraries!)

It’s pretty ridiculous that we had to solve this problem, but, as Avery Lee says, “Just because it is not your fault does not mean it is not your problem.”

The obvious solution is to join the fray, calling SetUnhandledExceptionFilter every frame, right? How about we try something a bit more reliable… I hate implementing solutions that have obvious flaws. Thus, we chose to disable (with code modification) the SetUnhandledExceptionFilter function immediately after installing our own handler. When Direct3D and Flash try to call it, their requests will be ignored, leaving our exception handler installed.

Code modification… isn’t that scary? With a bit of knowledge and defensive programming, it’s not that bad. In fact, I’ll show you the code up front:

// If this doesn't make sense, skip the code and come back!

void lockUnhandledExceptionFilter() {
    HMODULE kernel32 = LoadLibraryA("kernel32.dll");

    if (FARPROC gpaSetUnhandledExceptionFilter = GetProcAddress(kernel32, "SetUnhandledExceptionFilter")) {
        unsigned char expected_code[] = {
            0x8B, 0xFF, // mov edi,edi
            0x55,       // push ebp
            0x8B, 0xEC, // mov ebp,esp
        };

        // only replace code we expect
        if (memcmp(expected_code, gpaSetUnhandledExceptionFilter, sizeof(expected_code)) == 0) {
            unsigned char new_code[] = {
                0x33, 0xC0,       // xor eax,eax
                0xC2, 0x04, 0x00, // ret 4
            };

            BOOST_STATIC_ASSERT(sizeof(expected_code) == sizeof(new_code));

            DWORD old_protect;
            if (VirtualProtect(gpaSetUnhandledExceptionFilter, sizeof(new_code), PAGE_EXECUTE_READWRITE, &old_protect)) {
                CopyMemory(gpaSetUnhandledExceptionFilter, new_code, sizeof(new_code));

                DWORD dummy;
                VirtualProtect(gpaSetUnhandledExceptionFilter, sizeof(new_code), old_protect, &dummy);

                FlushInstructionCache(GetCurrentProcess(), gpaSetUnhandledExceptionFilter, sizeof(new_code));
            }
        }
    }
}

If that’s obvious to you, then great: We’re hiring!

Otherwise, here is an overview:

Use GetProcAddress to grab the real address of SetUnhandledExceptionFilter. (If you just type &SetUnhandledExceptionFilter you’ll get the relocatable import thunk, not the actual SetUnhandledExceptionFilter function.)

Most Windows functions begin with five bytes of prologue:

mov edi, edi ; 2 bytes for hotpatching support
push ebp     ; stack frame
mov ebp, esp ; stack frame (con't)

We want to replace those five bytes with return 0;. Remember that __stdcall functions return values in the eax register. We want to replace the above code with:

xor eax, eax ; eax = 0
ret 4        ; pops 4 bytes (arg) and returns

Also five bytes! How convenient! Before we replace the prologue, we verify that the first five bytes match our expectations. (If not, we can’t feel comfortable about the effects of the code replacement.) The VirtualProtect and FlushInstructionCache calls are standard fare for code modification.

After implementing this, it’s worth stepping through the assembly in a debugger to verify that SetUnhandledExceptionFilter no longer has any effect. (If you really enjoy writing unit tests, it’s definitely possible to unit test the desired behavior. I’ll leave that as an exercise for the reader.)

Finally, our last-chance exception reporting actually works!

Reporting Crashes in IMVU: Last-Chance Exceptions

So far, our crash reporting is looking pretty comprehensive. But what if there is a crash in the crash reporting itself? Or perhaps a crash on another thread (outside of any __try ... __except blocks)? Or, heaven forbid, we somehow cause Python itself to crash? In all of these situations, we can’t count on our Python crash reporting code to handle the error.

There are a couple ways to report these failures, and IMVU chose to implement a last-chance exception handler with SetUnhandledExceptionFilter. This handler runs whenever a structured exception bubbles out of any thread in the process. Unfortunately, you can’t run arbitrary code in the handler – maybe your process’s heap is corrupted and further attempts to allocate will cause another access violation.

By the way, the last-chance handler is your opportunity to run code just before the Windows “this program has performed an illegal operation” dialog.

Access Violation

In IMVU’s last-chance handler, we do what we can to save the state of the failure to disk. The next time the client starts, if failure data exists, the client submits it to the server. (Assuming the customer tries to run the client again.) Here is the last-chance handler’s implementation:

LONG WINAPI handleLastChanceException(
	EXCEPTION_POINTERS* ExceptionInfo
) {
	DWORD error = writeMiniDump(ExceptionInfo);
	if (error) {
		CRASHLOG("Failed to write minidump");
	}

	// Try to record some additional system
	// info if we can.

	// Show the application error dialog by
	// letting the default handler run.
	return EXCEPTION_CONTINUE_SEARCH;
}

// at program start:
SetUnhandledExceptionFilter(handleLastChanceException);

Pretty easy! Again, try to restrict yourself to system calls in your last-chance handler.

When we first implemented this, we found out that the IMVU client was crashing a ton and we didn’t even know! Were I to start over from scratch, I’d implement the broadest crash detection possible first, and then implement narrower, more-detailed detection as necessary.

Next time I’ll talk about an unexpected shortcoming of this implementation. (Can you guess what it is?)

Reporting Crashes in IMVU: Creating Your Very Own Symbol Server

With minidumps or a bit of hand-rolled code, it’s pretty easy to report symbolic C++ stack traces whenever your application crashes. But reporting is just one side of the coin. Once you begin collecting crash reports, you’ll need a way to read them. As I mentioned before, you can generate some MAP files and look up your functions manually, but Microsoft provides some lesser-known tools that take care of this for you.

Why should I run my own symbol server?

  • Once you create a symbol server, you can easily debug a release build of your program from any computer with network access to the symbol server.
  • Existing tools such as WinDbg and Visual Studio play nicely with symbol servers.
  • Creating a symbol server is easy: it requires a single download and a directory on a network share.
  • Symbol servers can be cascaded. Microsoft runs a symbol server for their operating system DLLs, so entries from both your code and system DLLs will be displayed in a single call stack.

An example of a call stack containing IMVU code and Windows code.

What is a symbol server?

If you compile your program with the correct options, the compiler and linker will generate symbol files (PDBs). PDBs contain information required to parse stack traces and locate identifiers in your program. A PDB also contains the signature of the DLL or EXE associated with it. A symbol server is a directory of PDBs, organized by signature. Thus, given a DLL or EXE, you can find its PDB signature (the PdbSig70 and PdbAge fields in the corresponding debug PE header) and look up the PDB in the symbol server directory.

Creating the symbol server

Download and install the Debugging Tools for Windows. Make sure symstore.exe is in your path.

Create a directory on an accessible network share. Note the full path; you’ll need it later. Ours is \\hydra\devel\SymbolServer.

Every time you release your software, add the generated PDBs to the symbol server. Our release script runs this command:

symstore.exe add /s \\hydra\devel\SymbolServer /compress /r /f *.pdb /t IMVUClient

This command takes a while to run, but it simply searches for PDBs and indexes them in the SymbolServer directory by their signature so Visual Studio and WinDbg can find them later.
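
The resulting layout is roughly one directory per PDB name with a subdirectory per build signature underneath it; something like this, where the module name and signature are made up and /compress stores the file with a trailing underscore:

\\hydra\devel\SymbolServer\
    IMVUClient.pdb\
        3FAD176C5F094A9DB3D9D3A0C5E9B4122\
            IMVUClient.pd_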

Using the symbol server

There are generally two ways to coerce your debugger to resolve call stacks with your symbol server. You can either set the _NT_SYMBOL_PATH environment variable or configure the debugger with the paths directly.

The syntax for _NT_SYMBOL_PATH is a bit wonky, but it should look something like this:
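
(The local cache directory c:\symbols in the example below is an arbitrary choice.)

SRV*c:\symbols*\\hydra\devel\SymbolServer;SRV*c:\symbols*http://msdl.microsoft.com/download/symbols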


_NT_SYMBOL_PATH behaves just like any other PATH: paths are searched in order until the appropriate PDB is found. To make future lookups faster, if a PDB is found on the network, it will be copied to the local path.

The last entry in the list is Microsoft’s public symbol server. This allows the debugger to find function names for Microsoft operating system DLLs.

Each debugger generally has its own mechanism for configuring symbol servers. Here’s Visual Studio 2005’s symbol path editor:

MSDev Symbol Path Configuration

To save time, you may prefer using _NT_SYMBOL_PATH over per-debugger configuration, since it will work with all debuggers.

I hope this information was useful. I certainly wish we’d discovered it years ago. Happy debugging!

Handling exceptions from XULRunner callbacks

(I was composing an e-mail to IMVU’s engineering team when I realized this information was generally applicable to anyone embedding XULRunner into their application. Hope it’s useful.)

XULRunner is written in a subset of C++ that we’ll call XPCOM. An embedded XULRunner window communicates back to the host application through XPCOM interfaces that the host implements. In IMVU, we generally use C++ exceptions to signal failure. On the other hand, XPCOM uses nsresult error codes. Specifically, XULRunner is not written to support C++ exceptions, nor is it compiled with them enabled. (Note that compiling with exceptions enabled is not sufficient to guarantee defined behavior when they’re thrown. You must use techniques like RAII to properly unwind and restore state if an exception is thrown.)

If XULRunner is calling into our code, and our code uses exceptions to signal failure, and throwing an exception through a XULRunner call stack results in undefined behavior, what do we do? This is the strategy I took:

In every method of every IMVU implementation of an XPCOM interface, I bracketed the function body with IMVU_BEGIN_DISALLOW_EXCEPTIONS_XPCOM and IMVU_END_DISALLOW_EXCEPTIONS_XPCOM. For example:

nsresult xpcom_method_runtime_error_with_error_info() {
    IMVU_BEGIN_DISALLOW_EXCEPTIONS_XPCOM {
        // ... method body that may throw C++ or Python exceptions ...
    }
    IMVU_END_DISALLOW_EXCEPTIONS_XPCOM
    return NS_OK;
}

These two macros generate a try ... catch clause that handles every C++ exception thrown from the body, returning NS_ERROR_UNEXPECTED to the caller.

If the exception thrown is a Python error (boost::python::error_already_set), then the Python exception is pulled (PyErr_Fetch) and scheduled to be reraised (PyErr_Restore) in the next iteration through the IMVU client’s message loop.

If the exception thrown is a C++ exception, we’d like to take the same approach. However, C++0x has not shipped, so there’s no built-in mechanism for transferring exceptions across contexts. Thus, we take advantage of the boost::exception framework to copy and rethrow the exception from the main message loop. Unfortunately, you can’t just “throw X()”. You have to use boost::throw_exception, which enables the machinery for current_exception() and rethrow_exception(). To enforce this requirement, I have modified our C++ exception hierarchy so that you must use X::throw_(arguments) instead of throw X(arguments).
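
As an illustration of that convention (MyError below is a hypothetical exception type, not IMVU’s actual hierarchy):

#include <boost/throw_exception.hpp>
#include <stdexcept>
#include <string>

struct MyError : std::runtime_error {
    explicit MyError(const std::string& what)
        : std::runtime_error(what) {
    }

    // The only sanctioned way to throw: boost::throw_exception decorates
    // the object so boost::current_exception() can copy it later.
    static void throw_(const std::string& what) {
        boost::throw_exception(MyError(what));
    }
};

// usage: MyError::throw_("asset failed to load");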

If the exception thrown is a C++ exception but not a subclass of std::exception, then we catch it with catch (...) or std::uncaught_exception() in a sentry object’s destructor, and raise a structured exception to at least indicate that this is occurring in the field.

For reference, here is the implementation:

void handlePythonError();
void handleStandardException(const std::exception& e);

#define IMVU_BEGIN_DISALLOW_EXCEPTIONS_XPCOM                            \
    DisallowExceptionsSentry PP_UNIQUE_NAME();                          \
    try

#define IMVU_END_DISALLOW_EXCEPTIONS(block)                             \
    catch (const boost::python::error_already_set&) {                   \
        handlePythonError();                                            \
        block ;                                                         \
    }                                                                   \
    catch (const std::exception& e) {                                   \
        handleStandardException(e);                                     \
        block ;                                                         \
    }

#define IMVU_END_DISALLOW_EXCEPTIONS_XPCOM                              \
    IMVU_END_DISALLOW_EXCEPTIONS(return NS_ERROR_UNEXPECTED)

#define IMVU_DISALLOW_EXCEPTIONS_XPCOM(block)                           \
    IMVU_BEGIN_DISALLOW_EXCEPTIONS_XPCOM {                              \
        block ;                                                         \
    }                                                                   \
    IMVU_END_DISALLOW_EXCEPTIONS_XPCOM

And the source file:

DisallowExceptionsSentry::~DisallowExceptionsSentry() {
    if (std::uncaught_exception()) {
        RaiseException(EXC_EXCEPTIONS_NOT_ALLOWED, 0, 0, 0);
    }
}

/*
 * On error handling and the GIL.  Gecko's event handlers run during
 * the message loop, which means the GIL is not held.  Calls back into
 * Python require that the GIL be reacquired.  If the Python call
 * fails, the GIL is released (while error_already_set is unwinding
 * the stack).  The GIL must be reacquired to grab the exception
 * information and marshal it to the main thread.
 *
 * However, PumpWaitingMessages releases the GIL too!  Thus, reraising
 * the error on the main thread requires GIL reacquisition.
 *
 * If another thread acquires the GIL and blocks on the main thread's
 * message pump, a deadlock will occur.  Thus, secondary threads
 * should never block on the main thread's message pump.
 */

void reraisePythonError(PyObject* type, PyObject* value, PyObject* traceback) {
    HoldPythonGIL PP_UNIQUE_NAME();
    PyErr_Restore(type, value, traceback);
}

void error_but_no_error() {
    throw std::runtime_error("error_already_set but no Python error?");
}

void handlePythonError() {
    PyObject* type;
    PyObject* value;
    PyObject* traceback;
    {
        HoldPythonGIL PP_UNIQUE_NAME();
        PyErr_Fetch(&type, &value, &traceback);
    }
    if (type) {
        boost::function<void()> fn(boost::bind(reraisePythonError, type, value, traceback));
        queueMainThreadJob(fn);
    } else {
        queueMainThreadJob(error_but_no_error);
    }
}

void rethrowStandardException(const std::string& s) {
    std::string prefix("Unknown std::exception: ");
    throw std::runtime_error(prefix + s);
}

void handleStandardException(const std::exception& e) {
    if (boost::exception_detail::get_boost_exception(&e)) {
        // thrown via boost::throw_exception: copy it and rethrow it on the main thread
        queueMainThreadJob(boost::bind(boost::rethrow_exception, boost::current_exception()));
    } else {
        queueMainThreadJob(boost::bind(rethrowStandardException, std::string(e.what())));
    }
}

Reporting Crashes in IMVU: C++ Call Stacks

Last time, we talked about including contextual information to help us
actually fix crashes that happen in the field. Minidumps are a great
way to easily save a snapshot of the most important parts of a running
(or crashed) process, but it’s often useful to understand the
low-level mechanics of a C++ call stack (on x86). Given some basic
principles about function calls, we will derive the implementation
of code to walk a call stack.

C++ function call stack entries are stored on the x86 stack, which
grows downward in memory. That is, pushing on the stack subtracts
from the stack pointer. The ESP register points to the
most-recently-written item on the stack; thus, push eax
is equivalent to:

sub esp, 4
mov [esp], eax

Let’s say we’re calling a function:

int __stdcall foo(int x, int y)

The __stdcall
calling convention pushes arguments onto the stack from right to left
and returns the result in the EAX register, so calling
foo(1, 2) generates this code:

push 2
push 1
call foo
; result in eax

If you aren’t familiar with assembly, I know this is a lot to absorb,
but bear with me; we’re almost there. We haven’t seen the
call instruction before. It pushes the EIP register (that is, the
address of the instruction following the call, where the called
function should return) onto the stack and then jumps to the target function.
If we didn’t store the instruction pointer, the called function would
not know where to return when it was done.

The final piece of information we need to construct a C++ call stack is
that functions live in memory, functions have names, and thus sections
of memory have names. If we can get access to a mapping of memory
addresses to function names (say, with the /MAP linker option), and we
can read instruction pointers up the call
stack, we can generate a symbolic stack trace.

How do we read the instruction pointers up the call stack?
Unfortunately, just knowing the return address from the current
function is not enough. How do you know the location of the caller’s
caller? Without extra information, you don’t. Fortunately, most
functions have that information in the form of a function prologue:

push ebp
mov ebp, esp

and epilogue:

mov esp, ebp
pop ebp

These bits of code appear at the beginning and end of every function, allowing you
to use the EBP register as the “current stack frame”.
Function arguments are always accessed at positive offsets from EBP,
and locals at negative offsets:

; int foo(int x, int y)
; ...
[EBP+12] = y argument
[EBP+8]  = x argument
[EBP+4]  = return address (set by call instruction)
[EBP]    = previous stack frame
[EBP-4]  = local variable 1
[EBP-8]  = local variable 2
; ...

Look! For any stack frame EBP, the caller’s address is
at [EBP+4] and the previous stack frame is at [EBP].
By dereferencing EBP, we can walk
the call stack, all the way to the top!

struct stack_frame {
    stack_frame*  previous;
    unsigned long return_address;
};

std::vector<unsigned long> get_call_stack() {
    std::vector<unsigned long> call_stack;

    stack_frame* current_frame;
    __asm mov current_frame, ebp

    while (!IsBadReadPtr(current_frame, sizeof(stack_frame))) {
        call_stack.push_back(current_frame->return_address);
        current_frame = current_frame->previous;
    }
    return call_stack;
}

// Convert the array of addresses to names with the aforementioned MAP file.

Yay, now we know how to grab a stack trace from any location in the
code. This implementation is not robust, but the concepts are
correct: functions have names, functions live in memory, and we can
determine which memory addresses are on the call stack. Now that you
know how to manually grab a call stack, let Microsoft do the heavy
lifting with the StackWalk64 function in dbghelp.dll.
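
For reference, here is a sketch of what a dbghelp-based walk might look like on x86. It assumes SymInitialize has already been called for the process, and that the CONTEXT came from wherever you captured it (an exception record, GetThreadContext, and so on):

#include <windows.h>
#include <dbghelp.h>   // link with dbghelp.lib
#include <vector>

std::vector<DWORD64> get_call_stack_dbghelp(CONTEXT context) {
    std::vector<DWORD64> call_stack;

    STACKFRAME64 frame = {};
    frame.AddrPC.Offset    = context.Eip;
    frame.AddrPC.Mode      = AddrModeFlat;
    frame.AddrFrame.Offset = context.Ebp;
    frame.AddrFrame.Mode   = AddrModeFlat;
    frame.AddrStack.Offset = context.Esp;
    frame.AddrStack.Mode   = AddrModeFlat;

    while (StackWalk64(
               IMAGE_FILE_MACHINE_I386,
               GetCurrentProcess(), GetCurrentThread(),
               &frame, &context,
               0,                           // default memory reader
               SymFunctionTableAccess64,
               SymGetModuleBase64,
               0)) {
        call_stack.push_back(frame.AddrPC.Offset);
    }
    return call_stack;
}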

Next time, we’ll talk about setting up your very own Microsoft Symbol Server so you can
grab accurate function names from every version of your software.

Reporting Crashes in IMVU: Call Stacks and Minidumps

So far, we’ve implemented reporting for Python exceptions that bubble
out of the main loop, C++ exceptions that bubble into Python (and then
out of the main loop), and structured exceptions that bubble into C++
(and then out of the main loop). This is a fairly comprehensive set of
failure conditions, but there’s still a big piece missing from our
reporting.

Imagine that you implement this error reporting and have customers try
the new version of your software. You’ll soon have a collection of
crash reports, and one thing will stand out clearly. Without the
context in which crashes happened (call stacks, variable values,
perhaps log files), it’s very hard to determine their cause(s). And
without determining their cause(s), it’s very hard to fix them.

Reporting log files is easy enough: just attach them to the error
report. You may need to deal with privacy concerns or limit the size
of the log files that get uploaded, but those are straightforward
problems.
Because Python comes with batteries included, grabbing the call stack
from a Python exception is trivial. Just take a quick look at the
traceback module.
Structured exceptions are a little harder. The structure of a call
stack on x86 is machine- and sometimes compiler-dependent.
Fortunately, Microsoft provides an API to dump the relevant process
state to a file such that it can be opened in Visual Studio or WinDbg,
which will let you view the stack trace and inspect other data. These
files are called minidumps, and they’re pretty small. Just call
MiniDumpWriteDump with the context of the exception and submit the
generated file with your crash report.
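
Here is roughly what that call looks like. The file name and the MiniDumpNormal dump type are arbitrary choices; a real implementation would pick a proper path and report failures:

#include <windows.h>
#include <dbghelp.h>   // link with dbghelp.lib

bool writeMiniDump(const EXCEPTION_POINTERS* ep) {
    HANDLE file = CreateFileA(
        "crash.dmp", GENERIC_WRITE, 0, 0,
        CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, 0);
    if (file == INVALID_HANDLE_VALUE) {
        return false;
    }

    MINIDUMP_EXCEPTION_INFORMATION mei;
    mei.ThreadId = GetCurrentThreadId();
    mei.ExceptionPointers = const_cast<EXCEPTION_POINTERS*>(ep);
    mei.ClientPointers = FALSE;

    BOOL ok = MiniDumpWriteDump(
        GetCurrentProcess(), GetCurrentProcessId(), file,
        MiniDumpNormal,
        ep ? &mei : 0,   // the exception context to record
        0,               // no user streams
        0);              // no callbacks

    CloseHandle(file);
    return ok == TRUE;
}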

Grabbing a call stack from C++ exceptions is even harder, and maybe
not desired. If you regularly use C++ exceptions for communicating
errors from C++ to Python, it’s probably too expensive to grab a call
stack or write a minidump every single time. However, if you want to
do it anyway, here’s one way.

C++ exceptions are implemented on top of the Windows kernel’s
structured exception machinery. Using the try and
catch statements in your C++ code causes the compiler to
generate SEH code behind the scenes. However, by the time your C++
catch clauses run, the stack has already been unwound. Remember
that SEH has three passes: first it runs filter expressions until it
finds one that can handle the exception; then it unwinds the stack
(destroying any objects allocated on the stack); finally it runs the
actual exception handler. Your C++ exception handler runs as the last stage,
which means the stack has already been unwound, which means you can’t
get an accurate call stack from the exception handler. However, we
can use SEH to grab a call stack at the point where the exception was
thrown, before we handle it…

First, let’s determine the SEH exception code of C++ exceptions
(WARNING, this code is compiler-dependent):

int main() {
    DWORD code;
    __try {
        throw std::exception();
    }
    __except (code = GetExceptionCode(), EXCEPTION_EXECUTE_HANDLER) {
        printf("%X\n", code);
    }
}

Once we have that, we can write our exception-catching function like this:

void throw_cpp_exception() {
    throw std::runtime_error("hi");
}

bool writeMiniDump(const EXCEPTION_POINTERS* ep) {
    // ...
    return true;
}

void catch_seh_exception() {
    __try {
        throw_cpp_exception();
    }
    __except (
        // CPP_EXCEPTION_CODE is the value printed by the previous program
        // (0xE06D7363 for Microsoft's C++ runtime).
        (CPP_EXCEPTION_CODE == GetExceptionCode()) && writeMiniDump(GetExceptionInformation()),
        EXCEPTION_CONTINUE_SEARCH
    ) {
    }
}

int main() {
    try {
        catch_seh_exception();
    }
    catch (const std::exception& e) {
        printf("%s\n", e.what());
    }
}

Now we’ve got call stacks and program state for C++, SEH, and Python
exceptions, which makes fixing reported crashes dramatically easier.

Next time I’ll go into more detail about how C++ stack traces work,
and we’ll see if we can grab them more efficiently.