IMVU Crash Reporting: Stalls and Deadlocks

By mid-2006, we’d primarily focused on access violations and unhandled exceptions, the explosive application failures. After extensive effort, we got our client’s crash rate down to around 2%, meaning that 2% of all sessions ended in a crash.* Still the customers cried “Fix the crashes!”

It turns out that when a customer says “crash” they mean “it stopped doing what I wanted”, but engineers hear “the program threw an exception or caused an access violation”. Thus, to the customer, crash can mean:

  • the application was unresponsive for a period of time
  • the UI failed to load, making the client unusable
  • the application has been disconnected from the server

In short, any time the customer cannot make progress and it’s not (perceived to be) their fault, the application has crashed.

OK, we’ve got our work cut out for us… Let’s start by considering deadlocks and stalls.

First, some terminology: in computer science, a deadlock is a situation where two threads or processes are waiting for each other, so neither makes progress. That definition is a bit academic for our purposes. Let’s redefine deadlock as any situation where the program becomes unresponsive for an unreasonable length of time. This definition includes livelock, slow operations without progress indication, and network (or disk!) I/O that blocks the program from responding to input.

It actually doesn’t matter whether the program will eventually respond to input. People get impatient quickly. You’ve only got a few seconds to respond to the customer’s commands.

Detecting Deadlocks in C++

The embedded programming world has a “watchdog timer” concept. Your program is responsible for periodically pinging the watchdog, and if for several seconds you don’t, the watchdog restarts your program and reports debugging information.

Implementing this in C++ is straightforward:

  • Start a watchdog thread that wakes up every few seconds to check that the program is still responding to events.
  • Add a heartbeat to your main event loop that frequently pings the watchdog.
  • If the watchdog timer detects the program is unresponsive, record stack traces and log files, then report the failure.

IMVU’s CallStack API allows us to grab the C++ call stack of an arbitrary thread, so, if the main thread is unresponsive, we report its current stack every couple of seconds. This is often all that’s needed to find and fix the deadlock.

Detecting Deadlocks in Python

In Python, we can take the same approach as above:

  1. Start a watchdog thread.
  2. Ping the Python watchdog thread in your main loop.
  3. If the watchdog detects that you’re unresponsive, record the main thread’s Python stack (this time with sys._current_frames) and report it.
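The three steps above can be sketched in pure Python. This is a minimal illustration, not IMVU’s actual watchdog; the class and method names are invented for the example:

```python
import sys
import threading
import time
import traceback

class Watchdog:
    """Minimal main-loop watchdog sketch (names are illustrative)."""
    def __init__(self, timeout=5.0, interval=0.1):
        self.timeout = timeout          # how long before we call it a stall
        self.interval = interval        # how often the watchdog wakes up
        self.last_ping = time.time()
        self.main_thread_id = threading.get_ident()  # construct on the main thread
        self.stalls = []                # recorded stack traces of detected stalls
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def start(self):
        self._thread.start()

    def ping(self):
        # Call this frequently from the main event loop.
        self.last_ping = time.time()

    def _run(self):
        while not self._stop.wait(self.interval):
            if time.time() - self.last_ping > self.timeout:
                # Grab the main thread's current Python stack.
                frame = sys._current_frames().get(self.main_thread_id)
                if frame is not None:
                    self.stalls.append(''.join(traceback.format_stack(frame)))
                self.last_ping = time.time()  # avoid duplicate reports of one stall

    def stop(self):
        self._stop.set()
        self._thread.join()
```

A real implementation would report the recorded stack rather than just accumulate it, but the shape is the same: ping from the main loop, and record `sys._current_frames()` output when the pings stop.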

Python’s global interpreter lock (GIL) can throw a wrench in this plan. If one thread enters an infinite loop while keeping the GIL held (say, in a native extension), the watchdog thread will never wake and so cannot report a deadlock. In practice, this isn’t a problem, because the C++ deadlock detector will notice and report a deadlock. Plus, the most common deadlocks are caused by calls that release the GIL: threading.Lock.acquire and so on.

It might help to think of the Python deadlock detector as a fallback that, if successful, adds richer information to your deadlock reports. If it failed, whatever. The C++ deadlock detector is probably enough to diagnose and fix the problem.

What did we learn?

It turned out the IMVU client had several bugs where we blocked the main thread on the network, sometimes for up to 30 seconds. By that point, most users just clicked the close box [X] and terminated the process. Oops.

In addition, the deadlock detectors pointed out places where we were doing too much work in between message pumps. For example, loading some assets into the 3D scene might nominally take 200ms. On a computer with 256 MB of RAM, though, the system might start thrashing, the same load would take 5s, and it would be reported as a “deadlock”. The solution was to reduce the program’s working set and bite off smaller chunks of work in between pumps.

I don’t recall seeing many “computer science” deadlocks, but these watchdogs were invaluable in tracking down important failure conditions in the IMVU client.

Next time, we’ll improve the accuracy of our crash metrics and answer the question “How do you know your metrics are valid?”

* Median session length is a more useful reliability metric. It’s possible to fix crashes and see no change in your percentage of failed sessions, if fixing crashes simply causes sessions to become longer.

Fast Builds: Incremental Linking and Embedded SxS Manifests

As I’ve said before, fast builds are crucial for efficient development. But for those of us who use C++ regularly, link times are killer. It’s not uncommon to spend minutes linking your compiled objects into a single binary. Incremental linking helps a great deal, but, as you’ll see, incremental linking has become a lot harder in the last few versions of Visual Studio…

Linking an EXE or DLL is a very expensive operation — it’s roughly O(N) where N is the amount of code being linked. Worse, several optimizing linkers defer code generation to link time, exacerbating the problem! When you’re trying to practice TDD, even a couple seconds in your red-green-refactor iteration loop is brutal. And it’s not uncommon for large projects to spend minutes linking.

Luckily, Visual C++ supports an /INCREMENTAL flag, instructing relinks to modify the DLL or EXE in-place, reducing link time to O(changed code) rather than O(all code). In the olden days of Visual C++ 6, all you had to do was enable /INCREMENTAL, and bam, fast builds.

These days, it’s not so simple. Let’s take an excursion into how modern Windows finds DLL dependencies…

Side-by-Side (SxS) Manifests

Let’s say you’re writing a DLL foo.dll that depends on the CRT by using, say, printf or std::string. When you link foo.dll, the linker will also produce foo.dll.manifest. Windows XP and Vista use .manifest files to load the correct CRT version. (This prevents DLL hell: two programs can depend on different versions of the same DLL.)

Since remembering to carry around .manifest files is annoying and error-prone, Microsoft and others recommend that you embed them into your EXE or DLL as a resource:

mt.exe -manifest foo.dll.manifest -outputresource:foo.dll;2

Convenient, but it modifies the DLL in place, breaking incremental links! This is a known problem, and the “solutions” others suggest are INSANE. My favorite is the 300-line makefile with a note from the author “[If this does not work], please let me know ASAP. I will try fixing it for you.” Why doesn’t Visual Studio just provide an /EMBEDMANIFESTRESOURCE flag that would automatically solve the problem?!

I just want incremental linking and embedded manifests. Is that so much to ask? I tried a bunch of approaches. Most didn’t work. I’ll show them, and then give my current (working) approach. If you don’t care about the sordid journey, skip to the end.

What Didn’t Work

  • Not embedding manifests at all.

What went wrong: I could never figure out the rules by which manifest dependencies are discovered. If python.exe depends on the release CRT and your module DLL depends on the debug CRT, and they live in different directories (??), loading the module DLL would fail. Gave up.

  • Linking a temporary file (foo.pre.dll), making a copy (foo.pre.dll -> foo.dll), and embedding foo.pre.dll.manifest into foo.dll with mt.exe.

What went wrong: As far as I can tell, mt.exe is a terrible piece of code. In procmon I’ve watched it close file handles it didn’t open, causing permissions violations down the line. (?!) Sometimes it silently corrupts your EXEs and DLLs too. This may be a known weakness in UpdateResource. Yay! (Thanks to Kevin Gadd; he was instrumental in diagnosing these bugs.) mt.exe may or may not be fixed in recent Visual Studios. Either way, I’m convinced mt.exe has caused us several intermittent build failures in the past. Avoiding it is a good thing.

  • Linking to a temporary file (foo.pre.dll), generating a resource script (foo.pre.rc) from (foo.pre.dll.manifest), compiling said resource script (foo.pre.res), and including the compiled resource into the final link (foo.dll).

What went wrong: This approach is reliable but slow. Linking each DLL and EXE twice, even if both links are incremental, is often slower than just doing a full link to begin with.

  • Linking foo.dll with foo.dll.manifest (via a resource script, as above) if it exists. If foo.dll.manifest changed as a result of the link, relink.

I didn’t actually try this one because non-DAG builds scare me. I like the simplicity and reliability of the “inputs -> command -> outputs” build model. It’s weird if foo.dll.manifest is an input and an output of the link. Yes, technically, that’s how incremental linking works at all, but the non-DAG machinery is hidden in link.exe. From SCons’s perspective, it’s still a DAG.

Finally, a working solution:

For every build configuration {debug,release} and dependency {CRT,MFC,…}, link a tiny program to generate said dependency manifest. Compile manifest into a resource script (.rc -> .res) and link the compiled manifest resources into your other DLLs and EXEs.

This approach has several advantages:

  • These pre-generated manifest resources are created once and reused in future builds, with no impact to build time.
  • The build is a DAG.
  • We avoid letting mt.exe wreak havoc on our build by sidestepping it entirely.

I can think of one disadvantage: you need to know up-front which SxS DLLs you depend on. For most programs, the CRT is the only one. Hopefully understanding your dependencies isn’t a bad thing, though. ;)

After several evenings of investigation, we’re back to the same link times we had with Visual C++ 6! Yay!

The Code

If you care, here’s our SCons implementation of embedded manifests:

# manifest_resource(env, is_dll) returns a manifest resource suitable for inclusion into
# the sources list of a Program or SharedLibrary.
manifest_resources = {}
def manifest_resource(env, is_dll):
    if is_dll:
        resource_type = 2 #define ISOLATIONAWARE_MANIFEST_RESOURCE_ID 2
    else:
        resource_type = 1 #define CREATEPROCESS_MANIFEST_RESOURCE_ID  1

    is_debug = env['DEBUG'] # could use a 'build_config' key if we had more than debug/release
    del env

    def build_manifest_resource():
        if is_debug:
            env = baseEnv.Clone(tools=[Debug])
        else:
            env = baseEnv.Clone(tools=[Release])

        if is_dll:
            linker = env.SharedLibrary
            target_name = 'crt_manifest.dll'
            source = env.File('#/MSVC/crt_manifest_dll.cpp')
        else:
            linker = env.Program
            target_name = 'crt_manifest.exe'
            source = env.File('#/MSVC/crt_manifest_exe.cpp')

        env['OUTPUT_PATH'] = '#/${BUILDDIR}/${IMVU_BUILDDIR_NAME}/%s' % (target_name,)

        obj = env.SharedObject('${OUTPUT_PATH}.obj', source)
        result = linker([env.File('${OUTPUT_PATH}'), '${OUTPUT_PATH}.manifest'], obj)
        manifest = result[1]

        def genrc(env, target, source):
            [target] = target
            [source] = source
            # 24 = RT_MANIFEST
            file(target.abspath, 'w').write('%d 24 "%s"' % (resource_type, source.abspath))

        rc = env.Command('${OUTPUT_PATH}.rc', manifest, genrc)
        res = env.RES('${OUTPUT_PATH}.res', rc)
        env.Depends(res, manifest)
        return res

    key = (is_debug, resource_type)
    try:
        return manifest_resources[key]
    except KeyError:
        res = build_manifest_resource()
        manifest_resources[key] = res
        return res

Fast Builds: Unintrusive Precompiled Headers (PCH)

Fast builds are critical to the C++ programmer’s productivity and happiness. One common technique for reducing build times is precompiled headers (PCH). There’s plenty of literature out there; I won’t describe PCH in detail here.

But one thing that’s always bothered me about PCH is that it affects your code. #pragma hdrstop and #include "StdAfx.h" everywhere. Gross.

I’m a strong believer in clean code without boilerplate, so can’t we do better? Ideally we could make a simple tweak to the build system and see build times magically improve. Enno enticed me with mentions of his fast builds, so I took a look…

Using PCH in Visual C++ requires a header (call it Precompiled.h) that includes all of the expensive dependencies:

#include <vector>
#include <map>
#include <iostream>
#include <fstream>
#include <boost/python.hpp>
#include <windows.h>
#include <mmsystem.h>

Additionally, we need a source file (let’s get creative and call it Precompiled.cpp), which is empty except for #include "Precompiled.h".

Compile Precompiled.cpp with /Yc Precompiled.h to generate Precompiled.pch, the actual precompiled header. Then, use the precompiled header on the rest of your files with /Yu Precompiled.h.
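In batch form, the two-step build might look like the following (a sketch with made-up file names, not a complete build script):

```shell
rem Build Precompiled.pch (and Precompiled.obj) from the stub source file:
cl /c /YcPrecompiled.h Precompiled.cpp

rem Compile the rest of the project against the precompiled header:
cl /c /YuPrecompiled.h Foo.cpp Bar.cpp
```
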

OK, here’s the step that prevented me from using PCH for so long: every single source file in your project must #include "Precompiled.h" on its first line.

That’s ridiculous! I don’t want to touch every file!

It turns out our savior is the /FI option. From the documentation:

This option has the same effect as specifying the file with double quotation marks in an #include directive on the first line of every source file specified on the command line […]

Exactly what we want!

But wait, doesn’t that mean every .cpp in our project will have access to every symbol included by the PCH? Yes. :( It’s worth the build speedup.

However, explicit physical dependencies are important, and the only way to prevent important things from breaking is by blocking commits if they fail. Since enabling and disabling PCH does not require any code changes, it’s easy enough to add a “disable PCH” option to your build system and run it on your continuous integration server:

Compile without PCH

If somebody uses std::string but forgets to #include <string>, the build will fail and block commits.

In the end, here’s the bit of SCons magic that lets me quickly drop PCH into a project:

def enable_pch(env, source_file, header):
    if env['ENABLE_PCH']: # build option; the CI server sets it False (flag name illustrative)
        PCH, PCH_OBJ = env.PCH(source_file)
        env['PCH'] = PCH
        env['PCHSTOP'] = header
        env.Append(CPPFLAGS=['/FI' + header])
        return [PCH_OBJ]
    else:
        return [source_file]

Now you can benefit from fast builds with minimal effort and no change to your existing code!

Reporting Crashes in IMVU: Who threw that C++ exception?

It’s not often that I get to write about recent work. Most of the techniques in this series were implemented at IMVU years ago. A few weeks ago, however, a common C++ exception (tr1::bad_weak_ptr) started intermittently causing crashes in the wild. This exception can be thrown in a variety of circumstances, so we had no clue which code was problematic.

We could have modified tr1::bad_weak_ptr so its constructor fetched a CallStack and returned it from tr1::bad_weak_ptr::what(), but fetching a CallStack is not terribly cheap, especially in such a frequently-thrown-and-caught exception. Ideally, we’d only grab a stack after we’ve determined it’s a crash (in the top-level crash handler).

Allow me to illustrate:

void main_function(/*arguments*/) {
    try {
        try {
            // We don't want to grab the call stack here, because
            // we'll catch the exception soon.
            normal_algorithm(); // (function name illustrative)
        }
        catch (const std::exception& e) {
            // Yup, exception is fine.  Just swallow and
            // do something else.
            fallback_algorithm();
        }
    }
    catch (const std::exception& e) {
        // Oh no! fallback_algorithm() failed.
        // Grab a stack trace now.
        recordCrash(CallStack::here()); // (reporting call illustrative)
    }
}
Almost! Unfortunately, the call stack generated in the catch clause doesn’t contain fallback_algorithm. It starts with main_function, because the stack has already been unwound by the time the catch clause runs.

Remember the structure of the stack:

Example Stack

We can use the ebp register, which points to the current stack frame, to walk and record the current call stack. [ebp+4] is the caller’s address, [[ebp]+4] is the caller’s caller, [[[ebp]]+4] is the caller’s caller’s caller, and so on.
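To make the frame-walking concrete, here is a small Python simulation. Memory is modeled as a dict of 4-byte slots; this illustrates the layout only, not real stack inspection:

```python
def walk_stack(memory, ebp):
    """Follow the chain of saved frame pointers.

    memory maps addresses to 32-bit values: [ebp] holds the caller's
    saved ebp and [ebp+4] holds the return address, as in a standard
    x86 stack frame.  Returns the return addresses, deepest first.
    """
    return_addresses = []
    while ebp in memory:                    # stop when we walk off the known stack
        return_addresses.append(memory[ebp + 4])
        ebp = memory[ebp]                   # hop to the caller's frame
    return return_addresses

# A fake stack: the frame at 0x100 was called from the frame at 0x200,
# which was called from the frame at 0x300.
memory = {
    0x100: 0x200, 0x104: 0xAAAA,   # deepest frame, return address 0xAAAA
    0x200: 0x300, 0x204: 0xBBBB,
    0x300: 0x400, 0x304: 0xCCCC,   # 0x400 is not in memory, so the walk stops
}
```

`walk_stack(memory, 0x100)` yields the three return addresses, deepest first, which is exactly the [ebp+4], [[ebp]+4], [[[ebp]]+4] chain described above.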

What can we do with this information? Slava Oks at Microsoft gives the clues we need. When you type throw MyException(), a temporary MyException object is constructed at the bottom of the stack and passed into the catch clause by reference or by value (as a copy deeper on the stack).

Before the catch clause runs, objects on the stack between the thrower and the catcher are destructed, and ebp is pointed at the catcher’s stack frame (so the catch clause can access parameters and local variables).

From within the outer catch block, here is the stack, ebp, and esp:

Stack From Catch Clause

Notice that every time an exception is caught, the linked list of stack frames is truncated: ebp is reset to the stack frame of the catcher, destroying our link to the thrower’s stack.

But there’s useful information between ebp and esp! We just need to search for it. We can find who threw the exception with this simple algorithm:

	For every possible pointer between ebp and esp,
	find the deepest pointer p,
	where p might be a frame pointer.
	(That is, where walking p eventually leads to ebp.)

Or you can just use our implementation.
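The search described above can be sketched in Python against the same dict-as-memory model (a simplified illustration; the real implementation scans raw stack memory, and the helper names are ours):

```python
def leads_to(memory, p, target, max_hops=64):
    """True if following saved-ebp links from p eventually reaches target."""
    for _ in range(max_hops):
        if p == target:
            return True
        if p not in memory:
            return False
        p = memory[p]
    return False

def find_deepest_frame(memory, ebp, esp):
    """Scan every 4-byte slot between esp and ebp (the stack grows
    downward, so esp < ebp) for a value that might be a frame pointer,
    i.e. one whose saved-ebp chain eventually reaches ebp.  The first
    hit, being closest to esp, is the deepest such frame: the thrower's.
    """
    for addr in range(esp, ebp, 4):
        candidate = memory.get(addr)
        if candidate is not None and leads_to(memory, candidate, ebp):
            return candidate
    return ebp  # nothing better found; fall back to the catcher's frame

# Fake stack: the thrower's frame at 0x100 links up to the catcher's
# frame at 0x300, and a copy of 0x100 survives in the region between
# esp and ebp that the unwinder left behind.
memory = {0x100: 0x200, 0x200: 0x300, 0xF8: 0x100}
```

With ebp at the catcher’s frame (0x300) and esp at 0xF0, the scan finds 0x100, restoring the link to the thrower’s frame.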

With this in mind, let’s rewrite our example’s error handling:

void main_function(/*arguments*/) {
    try {
        try {
            normal_algorithm(); // (function name illustrative)
        }
        catch (const std::exception& e) {
            // that's okay, just swallow and
            // do something else.
            fallback_algorithm();
        }
    }
    catch (const std::exception& e) {
        // oh no! fallback_algorithm() failed.
        // grab a stack trace - including thrower!
        Context ctx; // current register state
        ctx.ebp = findDeepestFrame(ctx.ebp, ctx.esp);
        recordCrash(CallStack(ctx)); // (reporting call illustrative)
    }
}

Bingo, fallback_algorithm appears in the stack:


Now we’ll have no problems finding the source of C++ exceptions!

IMVU’s CallStack API Now Open Source!

I’m proud to announce that IMVU has open-sourced its C++ CallStack API! It’s available under the MIT license at our SourceForge project. You can view the code here.

CallStack is a simple API for recording and displaying C++ call stacks on 32-bit Windows. To display the call stack at the current location:

printf("%s\n", CallStack::here().asString().c_str());

To grab a CallStack from an arbitrary thread:

HANDLE other_thread_handle = ...;
CallStack other_thread(other_thread_handle);

From a structured exception:

Context ctx;
CallStack cs;
__try {
	// broken code
}
__except (
	ctx = *(GetExceptionInformation())->ContextRecord,
	cs = CallStack(ctx), // capture the stack from the exception context
	EXCEPTION_EXECUTE_HANDLER
) {
	// display cs.asString()
}
At first, the format of CallStack.asString() is a bit confusing, but combined with your symbol server it contains everything necessary to generate a symbolic call stack, including file names and line numbers.

Here is an example CallStack.asString() result:


The lines are formatted module_name#module_hash!offset. module_name is the name of the DLL or EXE in which the function lives. module_hash is a unique hash that identifies a build of a particular module. offset is the offset of the line of code in bytes from the start of the module. With this information, you can look up a function name and line number for each entry in a call stack.
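A frame in that format splits apart mechanically. Here is a tiny parser sketch; the separators come straight from the description above, but treating the offset as a decimal or 0x-prefixed integer is an assumption for this example:

```python
def parse_frame(line):
    """Split a 'module_name#module_hash!offset' stack entry into its parts.

    The numeric base of the offset is assumed (decimal or 0x-hex); the
    real tooling may differ.
    """
    module, rest = line.split('#', 1)
    module_hash, offset = rest.split('!', 1)
    return module, module_hash, int(offset, 0)
```

Given the module hash, a lookup tool can locate the matching PDB on the symbol server and translate the offset into a function, file, and line.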

Fortunately, we have a tool that automates this process! Running it with the previous call stack on the clipboard produces this output:

	...t\python-2.5.1-src\objects\abstract.c (1860): PyObject_Call
	...0\libs\python\src\object\function.cpp ( 614): function_call
	...\boost\function\function_template.hpp ( 132): boost::detail::function::function_obj_invoker2<boost::_bi::bind_t<bool,boost::python::detail::translate_exception<IMVUError,void (__cdecl*)(IMVUError const &)>,boost::_bi::list3<boost::arg<1>,boost::arg<2>,boost::_bi::value<void (__cdecl*)(IMVUError const

That last function name is pretty epic (as are most Boost or C++ function names), but notice that the call stack has accurate file names and line numbers.

The astute reader might ask “Don’t minidumps contain stack traces too?” The answer is yes, but minidumps are often inconvenient. Consider the common case:

  1. Open crash report
  2. Download mini.dmp to the desktop
  3. Open mini.dmp in Visual Studio
  4. Press F11
  5. Open the call stack debug window if it’s not open

With CallStack, we can shorten that to

  1. Open crash report
  2. Copy the call stack
  3. Run

Also, for reasons I don’t understand, sometimes Visual Studio fails to produce an informative stack when CallStack succeeds.

CallStack is a handy tool for debugging crashes from the wild, and I’m happy that we were able to make it available.

IMVU Crash Reporting: Plugging the VC++ Runtime’s Escape Hatches

Modern software is built on top of piles of third-party components. The holy grail of reusable code has manifested in the form of an open source ecosystem. That said, in a startup, you rarely have time to audit the third-party components you use. These third-party components might be ridiculous enough to call abort() on error. It may sound scary, but with a fixed amount of work, you can turn these calls into reported structured exceptions.

Unfortunately, the Visual C++ Runtime provides several functions that abnormally terminate the process without running our crash handlers. As a bonus, they usually include user-incomprehensible dialog boxes. Let’s see:

  • abort()
  • “pure virtual” method calls
  • throwing an exception from a destructor during unwind
  • stack buffer overflow (with /GS enabled)
  • standard C++ library index/iterator error with checked iterators enabled

Since you can never prove that you’ve implemented crash reporting for every way a third-party component can bypass your crash reporting, I’m just going to cover the ones we’ve implemented:


abort()

Result of calling abort()

Turning abort() into a structured exception is pretty straightforward. A quick read of the CRT source shows that abort() runs SIGABRT’s installed signal handler. It’s easy enough to install a custom handler that raises a structured exception:

void __cdecl onSignalAbort(int code) {
    // It's possible that this signal handler will get called twice
    // in a single execution of the application.  (On multiple threads,
    // for example.)  Since raise() resets the signal handler, put it back.
    signal(SIGABRT, onSignalAbort);

    RaiseException(EXC_SIGNAL_ABORT, 0, 0, 0);
}


// at program start:
signal(SIGABRT, onSignalAbort);

“Pure Virtual” Method Calls

Pure virtual function call

Ever see a program fail with that useless “pure virtual function call” error message? This happens when a base class’s constructor tries to call a pure virtual method implemented by a derived class. Since base class constructors run before derived class constructors, the compiler fills the vtable for the derived class with references to _purecall, a function normally defined by the CRT. _purecall() aborts the process, sidestepping our crash reporting. Code might better elucidate this situation:

struct Base;
void foo(Base* b);

struct Base {
    Base() {
        foo(this); // calls pure() before Derived's constructor has run
    }
    virtual void pure() = 0;
};

struct Derived : public Base {
    void pure() { }
};

void foo(Base* b) {
    b->pure();
}

Derived d; // boom

The fix is simple: just define a _purecall that shadows the CRT implementation:

int __cdecl _purecall() {
    RaiseException(EXC_PURE_CALL, 0, 0, 0);
    return 0;
}

Throwing an Exception from a Destructor During Unwind

C++ is aggressive about making sure you don’t throw an exception while another exception is in the air. If you do, its default behavior is to terminate your process. From MSDN: If a matching handler is still not found, or if an exception occurs while unwinding, but before the handler gets control, the predefined run-time function terminate is called. If an exception occurs after throwing the exception, but before the unwind begins, terminate is called.

To convert calls to terminate() (and unexpected(), for completeness) into structured exceptions, override the terminate handler with set_terminate (and set_unexpected):

void onTerminate() {
    RaiseException(EXC_TERMINATE, 0, 0, 0);
}
void onUnexpected() {
    RaiseException(EXC_UNEXPECTED, 0, 0, 0);
}

// at program start:
set_terminate(onTerminate);
set_unexpected(onUnexpected);
Standard C++ Library Index/Iterator Error with Checked Iterators Enabled

The IMVU client is compiled with _SECURE_SCL enabled. Increased reliability from knowing exactly where failures occur is more important than the very minor performance hit of validating all iterator accesses.

There are two ways to convert invalid iterator uses into reported exceptions. The easiest is compiling with _SECURE_SCL_THROWS=1. Otherwise, just install your own invalid_parameter handler with _set_invalid_parameter_handler.

Stack Buffer Overflow (with /GS enabled)

By default, Visual C++ generates code that detects and reports stack buffer overruns. This prevents a common class of application security holes. Unfortunately, the stock implementation of this feature does not allow you to install your own handler, which means you can’t report any buffer overruns.

Again, we can shadow a CRT function to handle these failures. From C:\Program Files\Microsoft Visual Studio 8\VC\crt\src\gs_report.c, copy the __report_gsfailure function into your application. (You did install the CRT source, didn’t you?) Instead of calling UnhandledExceptionFilter at the bottom of __report_gsfailure, call your own last-chance handler or write a minidump.

Testing Crash Reporting

Writing tests for the above reporting mechanisms is super fun. Take everything you’re told not to do and implement it. I recommend adding these crashes to your UI somewhere so you can directly observe what happens when they occur in your application. Here is our crash menu:

Crash Menu

These types of failures are rare, but when we hit them, I’m glad we implemented this reporting.

You Won’t Learn This in School: Disabling Kernel Functions in Your Process

Detecting and reporting unhandled exceptions with SetUnhandledExceptionFilter seemed logical, and, in fact, it worked… for a while. Eventually, we started to notice failures that should have been reported as a last-chance exception but weren’t. After much investigation, we discovered that both Direct3D and Flash were installing their own unhandled exception filters! Worse, they were fighting over it, installing their handlers several times per second! In practice, this meant our last-chance crash reports were rarely generated, convincing us our crash metrics were better than they were. (Bad, bad libraries!)

It’s pretty ridiculous that we had to solve this problem, but, as Avery Lee says, “Just because it is not your fault does not mean it is not your problem.”

The obvious solution is to join the fray, calling SetUnhandledExceptionFilter every frame, right? How about we try something a bit more reliable… I hate implementing solutions that have obvious flaws. Thus, we chose to disable (with code modification) the SetUnhandledExceptionFilter function immediately after installing our own handler. When Direct3D and Flash try to call it, their requests will be ignored, leaving our exception handler installed.

Code modification… isn’t that scary? With a bit of knowledge and defensive programming, it’s not that bad. In fact, I’ll show you the code up front:

// If this doesn't make sense, skip the code and come back!

void lockUnhandledExceptionFilter() {
    HMODULE kernel32 = LoadLibraryA("kernel32.dll");

    if (FARPROC gpaSetUnhandledExceptionFilter = GetProcAddress(kernel32, "SetUnhandledExceptionFilter")) {
        unsigned char expected_code[] = {
            0x8B, 0xFF, // mov edi,edi
            0x55,       // push ebp
            0x8B, 0xEC, // mov ebp,esp
        };

        // only replace code we expect
        if (memcmp(expected_code, gpaSetUnhandledExceptionFilter, sizeof(expected_code)) == 0) {
            unsigned char new_code[] = {
                0x33, 0xC0,       // xor eax,eax
                0xC2, 0x04, 0x00, // ret 4
            };

            BOOST_STATIC_ASSERT(sizeof(expected_code) == sizeof(new_code));

            DWORD old_protect;
            if (VirtualProtect(gpaSetUnhandledExceptionFilter, sizeof(new_code), PAGE_EXECUTE_READWRITE, &old_protect)) {
                CopyMemory(gpaSetUnhandledExceptionFilter, new_code, sizeof(new_code));

                DWORD dummy;
                VirtualProtect(gpaSetUnhandledExceptionFilter, sizeof(new_code), old_protect, &dummy);

                FlushInstructionCache(GetCurrentProcess(), gpaSetUnhandledExceptionFilter, sizeof(new_code));
            }
        }
    }
}
If that’s obvious to you, then great: We’re hiring!

Otherwise, here is an overview:

Use GetProcAddress to grab the real address of SetUnhandledExceptionFilter. (If you just type &SetUnhandledExceptionFilter you’ll get the relocatable import thunk, not the actual SetUnhandledExceptionFilter function.)

Most Windows functions begin with five bytes of prologue:

mov edi, edi ; 2 bytes for hotpatching support
push ebp     ; stack frame
mov ebp, esp ; stack frame (con't)

We want to replace those five bytes with return 0;. Remember that __stdcall functions return values in the eax register. We want to replace the above code with:

xor eax, eax ; eax = 0
ret 4        ; pops 4 bytes (arg) and returns

Also five bytes! How convenient! Before we replace the prologue, we verify that the first five bytes match our expectations. (If not, we can’t feel comfortable about the effects of the code replacement.) The VirtualProtect and FlushInstructionCache calls are standard fare for code modification.

After implementing this, it’s worth stepping through the assembly in a debugger to verify that SetUnhandledExceptionFilter no longer has any effect. (If you really enjoy writing unit tests, it’s definitely possible to unit test the desired behavior. I’ll leave that as an exercise for the reader.)

Finally, our last-chance exception reporting actually works!

Reporting Crashes in IMVU: Last-Chance Exceptions

So far, our crash reporting is looking pretty comprehensive. But what if there is a crash in the crash reporting itself? Or perhaps a crash on another thread (outside of any __try ... __except blocks)? Or, heaven forbid, we somehow cause Python itself to crash? In all of these situations, we can’t count on our Python crash reporting code to handle the error.

There are a couple ways to report these failures, and IMVU chose to implement a last-chance exception handler with SetUnhandledExceptionFilter. This handler runs whenever a structured exception bubbles out of any thread in the process. Unfortunately, you can’t run arbitrary code in the handler – maybe your process’s heap is corrupted and further attempts to allocate will cause another access violation.

By the way, the last-chance handler is your opportunity to run code just before the Windows “this program has performed an illegal operation” dialog.

Access Violation

In IMVU’s last-chance handler, we do what we can to save the state of the failure to disk. The next time the client starts, if failure data exists, the client submits it to the server. (Assuming the customer tries to run the client again.) Here is the last-chance handler’s implementation:

LONG WINAPI handleLastChanceException(
	EXCEPTION_POINTERS* info
) {
	DWORD error = writeMiniDump(/* ... */);
	if (error) {
		CRASHLOG("Failed to write minidump");
	}

	// Try to record some additional system
	// info if we can.

	// Show the application error dialog.
	return EXCEPTION_CONTINUE_SEARCH;
}
Pretty easy! Again, try to restrict yourself to system calls in your last-chance handler.

When we first implemented this, we found out that the IMVU client was crashing a ton and we didn’t even know! Were I to start over from scratch, I’d implement the broadest crash detection possible first, and then implement narrower, more-detailed detection as necessary.

Next time I’ll talk about an unexpected shortcoming of this implementation. (Can you guess what it is?)

Reporting Crashes in IMVU: Creating Your Very Own Symbol Server

With minidumps or a bit of hand-rolled code, it’s pretty easy to report symbolic C++ stack traces whenever your application crashes. But reporting is just one side of the coin. Once you begin collecting crash reports, you’ll need a way to read them. As I mentioned before, you can generate some MAP files and look up your functions manually, but Microsoft provides some lesser-known tools that take care of this for you.

Why should I run my own symbol server?

  • Once you create a symbol server, you can easily debug a release build of your program from any computer with network access to the symbol server.
  • Existing tools such as WinDbg and Visual Studio play nicely with symbol servers.
  • Creating a symbol server is easy: it requires a single download and a directory on a network share.
  • Symbol servers can be cascaded. Microsoft runs a symbol server for their operating system DLLs, so entries from both your code and system DLLs will be displayed in a single call stack.

An example of a call stack containing IMVU code and Windows code.

What is a symbol server?

If you compile your program with the correct options, the compiler and linker will generate symbol files (PDBs). PDBs contain information required to parse stack traces and locate identifiers in your program. A PDB also contains the signature of the DLL or EXE associated with it. A symbol server is a directory of PDBs, organized by signature. Thus, given a DLL or EXE, you can find its PDB signature (the PdbSig70 and PdbAge fields in the corresponding debug PE header) and look up the PDB in the symbol server directory.

Creating the symbol server

Download and install the Debugging Tools for Windows. Make sure symstore.exe is in your path.

Create a directory on an accessible network share. Note the full path; you’ll need it later. Ours is \\hydra\devel\SymbolServer.

Every time you release your software, add the generated PDBs to the symbol server. Our release script runs this command:

symstore.exe add /s \\hydra\devel\SymbolServer /compress /r /f *.pdb /t IMVUClient

This command takes a while to run, but it simply searches for PDBs and indexes them in the SymbolServer directory by their signature so Visual Studio and WinDbg can find them later.
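For reference, symstore lays the store out by PDB name and then by signature; the file names below are illustrative, but the shape of the tree is what symstore produces:

```
\\hydra\devel\SymbolServer\
    000Admin\                  transaction history written by symstore
    imvuclient.pdb\
        <PdbSig70+PdbAge>\
            imvuclient.pd_     compressed PDB (because of /compress)
```

This is why the lookup is fast: given a module's name and signature, the debugger can construct the path directly instead of scanning the share.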

Using the symbol server

There are generally two ways to coerce your debugger to resolve call stacks with your symbol server. You can either set the _NT_SYMBOL_PATH environment variable or configure the debugger with the paths directly.

The syntax for _NT_SYMBOL_PATH is a bit wonky, but it should look something like this (a local cache first, then our network share, then Microsoft's server):

_NT_SYMBOL_PATH=srv*c:\symbols*\\hydra\devel\SymbolServer*http://msdl.microsoft.com/download/symbols
_NT_SYMBOL_PATH behaves just like any other PATH: paths are searched in order until the appropriate PDB is found. To make future lookups faster, if a PDB is found on the network, it will be copied to the local path.

The last entry in the list is Microsoft’s public symbol server. This allows the debugger to find function names for Microsoft operating system DLLs.

Each debugger generally has its own mechanism for configuring symbol servers. Here’s Visual Studio 2005’s symbol path editor:

MSDev Symbol Path Configuration

To save time, you may prefer using _NT_SYMBOL_PATH over per-debugger configuration, since it will work with all debuggers.

I hope this information was useful. I certainly wish we’d discovered it years ago. Happy debugging!

Handling exceptions from XULRunner callbacks

(I was composing an e-mail to IMVU’s engineering team when I realized this information was generally applicable to anyone embedding XULRunner into their application. Hope it’s useful.)

XULRunner is written in a subset of C++ that we’ll call XPCOM. An embedded XULRunner window communicates back to the host application through XPCOM interfaces that the host implements. In IMVU, we generally use C++ exceptions to signal failure. On the other hand, XPCOM uses nsresult error codes. Specifically, XULRunner is not written to support C++ exceptions, nor is it compiled with them enabled. (Note that compiling with exceptions enabled is not sufficient to guarantee defined behavior when they’re thrown. You must use techniques like RAII to properly unwind and restore state if an exception is thrown.)

If XULRunner is calling into our code, and our code uses exceptions to signal failure, and throwing an exception through a XULRunner call stack results in undefined behavior, what do we do? This is the strategy I took:

In every method of every IMVU implementation of an XPCOM interface, I bracketed the function body with IMVU_BEGIN_DISALLOW_EXCEPTIONS_XPCOM and IMVU_END_DISALLOW_EXCEPTIONS_XPCOM. For example:

nsresult xpcom_method_runtime_error_with_error_info() {
    IMVU_BEGIN_DISALLOW_EXCEPTIONS_XPCOM {
        throw std::runtime_error("runtime error!");
    } IMVU_END_DISALLOW_EXCEPTIONS_XPCOM
}
These two macros generate a try ... catch clause that handles every C++ exception thrown from the body, returning NS_ERROR_UNEXPECTED to the caller.
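Stripped of the macros, the translation at the boundary boils down to a catch-everything wrapper that converts failure into an error code. A self-contained sketch, with stand-in functions and the nsresult constants written out inline rather than taken from the XPCOM headers:

```cpp
#include <stdexcept>

// Stand-ins for XPCOM's nsresult values.
typedef unsigned int nsresult;
const nsresult NS_OK = 0x0;
const nsresult NS_ERROR_UNEXPECTED = 0x8000ffff;

// The body the host application wants to run; here it throws to
// simulate a failure signaled via C++ exceptions.
void do_work(bool fail) {
    if (fail) {
        throw std::runtime_error("failure inside host code");
    }
}

// Every XPCOM-facing method translates exceptions into an nsresult
// before control returns to XULRunner, which cannot safely unwind
// C++ exceptions.
nsresult xpcom_facing_method(bool fail) {
    try {
        do_work(fail);
        return NS_OK;
    } catch (...) {
        // Swallow everything: no exception may cross the boundary.
        return NS_ERROR_UNEXPECTED;
    }
}
```

The real macros add bookkeeping (recording the exception so it can be reraised later), but the invariant is the same: no exception ever propagates into XULRunner's frames.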

If the exception thrown is a Python error (boost::python::error_already_set), then the Python exception is pulled (PyErr_Fetch) and scheduled to be reraised (PyErr_Restore) in the next iteration through the IMVU client’s message loop.

If the exception thrown is a C++ exception, we’d like to take the same approach. However, C++0x has not shipped, so there’s no built-in mechanism for transferring exceptions across contexts. Thus, we take advantage of the boost::exception framework to copy and rethrow the exception from the main message loop. Unfortunately, you can’t just “throw X()”. You have to use boost::throw_exception, which enables the machinery for current_exception() and rethrow_exception(). To enforce this requirement, I have modified our C++ exception hierarchy so that you must use X::throw_(arguments) instead of throw X(arguments).
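Since this was written, C++11 shipped exactly this machinery as std::exception_ptr with std::current_exception and std::rethrow_exception. A self-contained sketch of capturing an in-flight exception and rethrowing it later; queueMainThreadJob here is a simplified stand-in for the client's real job queue:

```cpp
#include <exception>
#include <functional>
#include <stdexcept>
#include <string>
#include <vector>

// A stand-in for the client's main-thread job queue (hypothetical).
std::vector<std::function<void()> > g_mainThreadJobs;

void queueMainThreadJob(std::function<void()> job) {
    g_mainThreadJobs.push_back(job);
}

// At the boundary, inside a catch block: capture the in-flight
// exception and schedule it to be rethrown where throwing is safe.
void captureAndDefer() {
    std::exception_ptr ep = std::current_exception();
    queueMainThreadJob([ep]() { std::rethrow_exception(ep); });
}

// Later, on the "main loop": run the jobs and let exceptions surface
// where normal handlers can see them. Returns the last message seen.
std::string runQueuedJobs() {
    std::string message;
    for (auto& job : g_mainThreadJobs) {
        try {
            job();
        } catch (const std::exception& e) {
            message = e.what();
        }
    }
    g_mainThreadJobs.clear();
    return message;
}
```

With C++11 available, std::current_exception can capture any exception type, so the boost::throw_exception discipline is no longer necessary; in 2008 it was the only portable way to get this behavior.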

If the exception thrown is a C++ exception but not a subclass of std::exception, then we catch it with catch (...) or std::uncaught_exception() in a sentry object’s destructor, and raise a structured exception to at least indicate that this is occurring in the field.
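A portable sketch of such a sentry follows. The real code raises a structured exception; this version just sets a flag so it can run anywhere, and it uses C++17's std::uncaught_exceptions(), the modern spelling of the std::uncaught_exception() call used in the original implementation:

```cpp
#include <exception>

// Records whether an exception the catch clauses couldn't handle was
// unwinding when the sentry went out of scope.
bool g_foreignExceptionDetected = false;

struct DisallowExceptionsSentry {
    ~DisallowExceptionsSentry() {
        // Nonzero while stack unwinding is in progress, i.e. when an
        // exception escaped the try block the sentry lives in.
        if (std::uncaught_exceptions() > 0) {
            g_foreignExceptionDetected = true;
        }
    }
};

struct NotAStdException {};  // a type the std::exception handlers miss

int callThroughSentry(bool throwForeign) {
    try {
        DisallowExceptionsSentry sentry;
        if (throwForeign) {
            throw NotAStdException();  // unwinds past the sentry
        }
        return 0;
    } catch (NotAStdException&) {
        return 1;
    }
}
```

On a normal return the sentry's destructor runs outside of unwinding and the flag stays false; when a foreign exception escapes, the destructor runs during unwinding and records it.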

For reference, here is the implementation:

void handlePythonError();
void handleStandardException(const std::exception& e);

#define IMVU_BEGIN_DISALLOW_EXCEPTIONS                                  \
    DisallowExceptionsSentry PP_UNIQUE_NAME();                          \
    try

#define IMVU_END_DISALLOW_EXCEPTIONS(block)                             \
    catch (const boost::python::error_already_set&) {                   \
        handlePythonError();                                            \
        block ;                                                         \
    }                                                                   \
    catch (const std::exception& e) {                                   \
        handleStandardException(e);                                     \
        block ;                                                         \
    }

// Exceptions that aren't std::exceptions fall through the catch
// clauses and are detected by DisallowExceptionsSentry's destructor.

#define IMVU_BEGIN_DISALLOW_EXCEPTIONS_XPCOM                            \
    IMVU_BEGIN_DISALLOW_EXCEPTIONS

#define IMVU_END_DISALLOW_EXCEPTIONS_XPCOM                              \
    IMVU_END_DISALLOW_EXCEPTIONS(return NS_ERROR_UNEXPECTED)

#define IMVU_DISALLOW_EXCEPTIONS_XPCOM(block)                           \
    IMVU_BEGIN_DISALLOW_EXCEPTIONS_XPCOM {                              \
        block ;                                                         \
    }                                                                   \
    IMVU_END_DISALLOW_EXCEPTIONS_XPCOM
And the source file:

DisallowExceptionsSentry::~DisallowExceptionsSentry() {
    if (std::uncaught_exception()) {
        RaiseException(EXC_EXCEPTIONS_NOT_ALLOWED, 0, 0, 0);
    }
}

/*
 * On error handling and the GIL.  Gecko's event handlers run during
 * the message loop, which means the GIL is not held.  Calls back into
 * Python require that the GIL be reacquired.  If the Python call
 * fails, the GIL is released (while error_already_set is unwinding
 * the stack).  The GIL must be reacquired to grab the exception
 * information and marshal it to the main thread.
 * However, PumpWaitingMessages releases the GIL too!  Thus, reraising
 * the error on the main thread requires GIL reacquisition.
 * If another thread acquires the GIL and blocks on the main thread's
 * message pump, a deadlock will occur.  Thus, secondary threads
 * should never block on the main thread's message pump.
 */

void reraisePythonError(PyObject* type, PyObject* value, PyObject* traceback) {
    HoldPythonGIL PP_UNIQUE_NAME();
    PyErr_Restore(type, value, traceback);
}

void error_but_no_error() {
    throw std::runtime_error("error_already_set but no Python error?");
}

void handlePythonError() {
    PyObject* type;
    PyObject* value;
    PyObject* traceback;
    {
        HoldPythonGIL PP_UNIQUE_NAME();
        PyErr_Fetch(&type, &value, &traceback);
    }
    if (type) {
        boost::function<void()> fn(boost::bind(reraisePythonError, type, value, traceback));
        queueMainThreadJob(fn);
    } else {
        queueMainThreadJob(error_but_no_error);
    }
}

void rethrowStandardException(const std::string& s) {
    std::string prefix("Unknown std::exception: ");
    throw std::runtime_error(prefix + s);
}

void handleStandardException(const std::exception& e) {
    if (boost::exception_detail::get_boost_exception(&e)) {
        queueMainThreadJob(boost::bind(
            &boost::rethrow_exception, boost::current_exception()));
    } else {
        queueMainThreadJob(boost::bind(rethrowStandardException, std::string(e.what())));
    }
}