Efficiently Rendering Flash in a 3D Scene


Last time, I talked about how to embed Flash into your desktop application, for UI flexibility and development speed. This time, I’ll discuss efficient rendering into a 3D scene.

Rendering Flash as a 3D Overlay (The Naive Way)

At first blush, rendering Flash on top of a 3D scene sounds easy. Every frame:

  1. Create a DIB section the size of your 3D viewport
  2. Render Flash into the DIB section with IViewObject::Draw
  3. Copy the DIB section into an IDirect3DTexture9
  4. Render the texture on top of the scene
Naive Flash Rendering
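
In code, the naive per-frame loop looks roughly like this. This is a hedged sketch, not IMVU’s actual code: createOverlayTexture is an assumed helper, error handling is omitted, and the final quad draw is left out.

#include <windows.h>
#include <ole2.h>   // IViewObject
#include <d3d9.h>
#include <cstring>  // memcpy

// Hypothetical helper: creates (or reuses) a viewport-sized texture.
IDirect3DTexture9* createOverlayTexture(IDirect3DDevice9* device, int width, int height);

void renderFlashNaively(IViewObject* flash, IDirect3DDevice9* device, int width, int height) {
    // 1. Create a 32-bit, top-down DIB section the size of the viewport.
    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth = width;
    bmi.bmiHeader.biHeight = -height; // negative height = top-down rows
    bmi.bmiHeader.biPlanes = 1;
    bmi.bmiHeader.biBitCount = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    void* pixels = 0;
    HDC dc = CreateCompatibleDC(0);
    HBITMAP dib = CreateDIBSection(dc, &bmi, DIB_RGB_COLORS, &pixels, 0, 0);
    HGDIOBJ previous = SelectObject(dc, dib);

    // 2. Ask the Flash control to render its frame buffer into the DIB.
    RECTL bounds = { 0, 0, width, height };
    flash->Draw(DVASPECT_CONTENT, -1, 0, 0, 0, dc, &bounds, 0, 0, 0);

    // 3. Copy the DIB, row by row, into the texture.
    IDirect3DTexture9* texture = createOverlayTexture(device, width, height);
    D3DLOCKED_RECT locked;
    if (SUCCEEDED(texture->LockRect(0, &locked, 0, 0))) {
        for (int y = 0; y < height; ++y) {
            memcpy(static_cast<char*>(locked.pBits) + y * locked.Pitch,
                   static_cast<char*>(pixels) + y * width * 4,
                   width * 4);
        }
        texture->UnlockRect(0);
    }

    // 4. Render a full-screen textured quad over the scene (not shown).

    SelectObject(dc, previous);
    DeleteObject(dib);
    DeleteDC(dc);
}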

Ta da! But your frame rate dropped to 2 frames per second? Ouch. It turns out this implementation is horribly slow. There are a few reasons why.

First, asking the Adobe Flash player to render into a DIB isn’t a cheap operation. In our measurements, drawing even a simple SWF takes on the order of 10 milliseconds. Since most UI doesn’t animate every frame, we should be able to cache the captured framebuffer.

Second, main memory and graphics memory are on different components in your computer. You want to avoid wasting time and bus traffic by unnecessarily copying data from the CPU to the GPU every frame. If only the lower-right corner of a SWF changes, we should limit our memory copies to that region.

Third, modern GPUs are fast, but not everyone has them. Let’s say you have a giant mostly-empty SWF and want to render it on top of your 3D scene. On slower GPUs, it would be ideal if you could limit your texture draws to the regions of the screen that are non-transparent.

Rendering Flash as a 3D Overlay (The Fast Way)

Disclaimer: I can’t take credit for these algorithms. They were jointly developed over years by many smart engineers at IMVU.

First, let’s reduce an embedded Flash player to its principles:

  • Flash exposes an IShockwaveFlash interface through which you can load and play movies.
  • Flash maintains its own frame buffer. You can read these pixels with IViewObject::Draw.
  • When a SWF updates regions of the frame buffer, it notifies you through IOleInPlaceSiteWindowless::InvalidateRect.

In addition, we’d like the Flash overlay system to fit within these performance constraints:

  • Each SWF is rendered over the entire window. For example, implementing a ball that bounces around the screen or a draggable UI component should not require any special IMVU APIs.
  • If a SWF is not animating, we do not copy its pixels to the GPU every frame.
  • We do not render the overlay in transparent regions. That is, if no Flash content is visible, rendering is free.
  • Memory consumption (ignoring memory used by individual SWFs) for the overlay usage is O(framebuffer), not O(framebuffer * SWFs). That is, loading three SWFs should not require allocation of three screen-sized textures.
  • If Flash notifies of multiple changed regions per frame, only call IViewObject::Draw once.

Without further ado, let’s look at the fast algorithm:

Fast Flash Rendering

Flash notifies us of visual changes via IOleInPlaceSiteWindowless::InvalidateRect. We take any updated rectangles and add them to a per-frame dirty region. When it’s time to render a frame, there are four possibilities (sketched in code after this list):

  • The dirty region is empty and the opaque region is empty. This case is basically free, because nothing need be drawn.
  • The dirty region is empty and the opaque region is nonempty. In this case, we just need to render our cached textures for the non-opaque regions of the screen. This case is the most common. Since a video memory blit is fast, there’s not much we could do to further speed it up.
  • The dirty region is nonempty. We must IViewObject::Draw into our Overlay DIB, with one tricky bit. Since we’re only storing one overlay texture, we need to render each loaded Flash overlay SWF into the DIB, not just the one that changed. Imagine an animating SWF underneath another translucent SWF. The top SWF must be composited with the bottom SWF’s updates. After rendering each SWF, we scan the updated DIB for a minimalish opaque region. Why not just render the dirty region? Imagine a SWF with a bouncing ball. If we naively rendered every dirty rectangle, eventually we’d be rendering the entire screen. Scanning for minimal opaque regions enables recalculation of what’s actually visible.
  • The dirty region is nonempty, but the updated pixels are all transparent. If this occurs, we no longer need to render anything at all until Flash content reappears.
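
Here’s a hedged sketch of that per-frame decision. Region, Dib, and the helper functions are hypothetical stand-ins for the real machinery, not IMVU’s actual API:

struct Region {
    bool empty() const;
    // ... a set of rectangles ...
};
struct Dib; // the single, shared overlay DIB

extern Dib overlayDib;
void renderAllSwfsInto(Dib& dib);                            // IViewObject::Draw per SWF
Region scanForOpaqueRegion(const Dib& dib);                  // find non-transparent pixels
void uploadDirtyRects(const Dib& dib, const Region& dirty);  // CPU-to-GPU copies
void drawOverlayQuads(const Region& opaque);                 // blit the cached texture

void renderOverlayFrame(Region& dirty, Region& opaque) {
    if (dirty.empty()) {
        // Case 1: both regions empty; effectively free.
        // Case 2: the common case, a fast video-memory blit.
        if (!opaque.empty()) {
            drawOverlayQuads(opaque);
        }
        return;
    }
    // Case 3: something changed. Re-render *every* loaded SWF into the
    // shared DIB so overlapping translucent SWFs composite correctly.
    renderAllSwfsInto(overlayDib);
    opaque = scanForOpaqueRegion(overlayDib);
    uploadDirtyRects(overlayDib, dirty); // limit copies to what changed
    dirty = Region();
    if (!opaque.empty()) {
        drawOverlayQuads(opaque);
    }
    // Case 4: if the updated pixels were all transparent, opaque is now
    // empty and nothing renders until Flash content reappears.
}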

This algorithm has proven efficient. It supports multiple overlapping SWFs while minimizing memory consumption and CPU/GPU draw calls per frame. Until recently, we used Flash for several of our UI components, giving us a standard toolchain and a great deal of flexibility. Flash was the bridge that took us from the dark ages of C++ UI code to UI on which we could actually iterate.

How to Embed Flash Into Your 3D Application


[I wrote this post last year when IMVU still used Flash for a significant portion of our UI. Even though we now embed Gecko, I believe embedding Flash is still valuable.]

Writing user interfaces is hard. Writing usable interfaces is harder. Yet, the design of your interface is your product.

Products are living entities. They always want to grow, adapting to their users as users adapt to them. In that light, why build your user interface in a static technology like C++ or Java? It won’t be perfect the first time you build it, so prepare for change.

IMVU employs two technologies for rapidly iterating on and refining our client UIs: Flash and Gecko/HTML. Sure, integrating these technologies has a sizable up-front cost, but the iteration speed they provide easily pays for it. Rapid iteration has some obvious benefits:

  1. reduces development cost
  2. reduces time to market

and some less-obvious benefits:

  1. better product/market fit: when you can change your UI, you will.
  2. improved product quality: little details distinguish mediocre products from great products. make changing details cheap and your Pinto will become a Cadillac.
  3. improved morale: both engineers and designers love watching their creations appear on the screen right before them. it’s why so many programmers create games!

I will show you how integrating Flash into a 3D application is easier than it sounds.

Should I use Adobe Flash or Scaleform GFx?

The two most common Flash implementations are Adobe’s ActiveX control (which has a 97% installed base!) and Scaleform GFx.

Adobe’s control has perfect compatibility with their tool chain (go figure!), but it’s closed-source, and good luck getting help from Adobe.

Scaleform GFx is an alternate implementation of Flash designed to be embedded in 3D applications, but, last I checked, is not efficient on machines without GPUs. (Disclaimer: this information is two years old, so I encourage you to make your own evaluation.)

IMVU chose to embed Adobe’s player.

Deploying the Flash Runtime

Assuming you’re using Adobe’s Flash player, how will you deploy their runtime? Well, given Flash’s install base, you can get away with loading the Flash player already installed on the user’s computer. If they don’t have Flash, just require that they install it from your download page. Simple and easy.

Down the road, when Flash version incompatibilities and that last 5% of your possible market become important, you can request permission from Adobe to deploy the Flash player with your application.

Displaying SWFs

IMVU displays Flash in two contexts: traditional HWND windows and 2D overlays atop the 3D scene.

IMVU Flash Window
IMVU Flash Overlay

If you want to have something up and running in a day, buy f_in_box. Besides its awesome name, it’s cheap, comes with source code, and the support forums are fantastic. It’s a perfect way to bootstrap. After a weekend of playing with f_in_box, Dusty and I had a YouTube video playing in a texture on top of our 3D scene.

Once you run into f_in_box’s limitations, you can use the IShockwaveFlash and IOleInPlaceObjectWindowless COM interfaces directly. See Igor Makarav’s excellent tutorial and CFlashWnd class.

Rendering Flash as an HWND

For top-level UI elements use f_in_box or CFlashWnd directly. They’re perfectly suited for this. Seriously, it’s just a few lines of code. Look at their samples and go.

Rendering Flash as a 3D Overlay

Rendering Flash to a 3D window gets a bit tricky… Wait for Part 2 of this post!

IMVU Crash Reporting: Stalls and Deadlocks

By mid-2006, we’d primarily focused on access violations and unhandled exceptions, the explosive application failures. After extensive effort, we got our client’s crash rate down to around 2%, meaning 2% of all sessions ended in a crash.* Still, the customers cried “Fix the crashes!”

It turns out that when a customer says “crash” they mean “it stopped doing what I wanted”, but engineers hear “the program threw an exception or caused an access violation”. Thus, to the customer, crash can mean:

  • the application was unresponsive for a period of time
  • the UI failed to load, making the client unusable
  • the application has been disconnected from the server

In short, any time the customer cannot make progress and it’s not (perceived to be) their fault, the application has crashed.

OK, we’ve got our work cut out for us… Let’s start by considering deadlocks and stalls.

First, some terminology: in computer science, a deadlock is a situation where two threads or processes are waiting for each other, so neither makes progress. That definition is a bit academic for our purposes. Let’s redefine deadlock as any situation where the program becomes unresponsive for an unreasonable length of time. This definition includes livelock, slow operations without progress indication, and network (or disk!) I/O that blocks the program from responding to input.

It actually doesn’t matter whether the program will eventually respond to input. People get impatient quickly. You’ve only got a few seconds to respond to the customer’s commands.

Detecting Deadlocks in C++

The embedded programming world has a “watchdog timer” concept. Your program is responsible for periodically pinging the watchdog, and if for several seconds you don’t, the watchdog restarts your program and reports debugging information.

Implementing this in C++ is straightforward:

  • Start a watchdog thread that wakes up every few seconds to check that the program is still responding to events.
  • Add a heartbeat to your main event loop that frequently pings the watchdog.
  • If the watchdog timer detects the program is unresponsive, record stack traces and log files, then report the failure.

IMVU’s CallStack API allows us to grab the C++ call stack of an arbitrary thread, so, if the main thread is unresponsive, we report its current stack every couple of seconds. This is often all that’s needed to find and fix the deadlock.
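
Here’s a minimal sketch of that structure. The names (pingWatchdog, reportHang) are mine, not IMVU’s actual API; reportHang stands in for whatever grabs stack traces and log files:

#include <windows.h>

static volatile LONG g_heartbeat = 0;
void reportHang(); // hypothetical: record stack traces and logs, report failure

// Call this frequently from the main event loop.
void pingWatchdog() {
    InterlockedIncrement(&g_heartbeat);
}

DWORD WINAPI watchdogThread(void*) {
    LONG lastSeen = g_heartbeat;
    for (;;) {
        Sleep(5000); // wake up every few seconds
        LONG now = g_heartbeat;
        if (now == lastSeen) {
            // The main loop hasn't pumped in five or more seconds.
            reportHang();
        }
        lastSeen = now;
    }
}

void startWatchdog() {
    CreateThread(0, 0, watchdogThread, 0, 0, 0);
}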

Detecting Deadlocks in Python

In Python, we can take the same approach as above:

  1. Start a watchdog thread.
  2. Ping the Python watchdog thread in your main loop.
  3. If the watchdog detects that you’re unresponsive, record the main thread’s Python stack (this time with sys._current_frames) and report it.

Python’s global interpreter lock (GIL) can throw a wrench in this plan. If one thread enters an infinite loop while keeping the GIL held (say, in a native extension), the watchdog thread will never wake and so cannot report a deadlock. In practice, this isn’t a problem, because the C++ deadlock detector will notice and report a deadlock. Plus, most common deadlocks are caused by calls that release the GIL: threading.Lock.acquire, socket.read, file.read, and so on.

It might help to think of the Python deadlock detector as a fallback that, if successful, adds richer information to your deadlock reports. If it failed, whatever. The C++ deadlock detector is probably enough to diagnose and fix the problem.

What did we learn?

It turned out the IMVU client had several bugs where we blocked the main thread on the network, sometimes for up to 30 seconds. By that point, most users just clicked the close box [X] and terminated the process. Oops.

In addition, the deadlock detectors pointed out places where we were doing too much work in between message pumps. For example, loading some assets into the 3D scene might nominally take 200ms. On a computer with 256 MB of RAM, though, the system might start thrashing, and loading the same assets would take 5s and get reported as a “deadlock”. The solution was to reduce the program’s working set and bite off smaller chunks of work in between pumps.

I don’t recall seeing many “computer science” deadlocks, but these watchdogs were invaluable in tracking down important failure conditions in the IMVU client.

Next time, we’ll improve the accuracy of our crash metrics and answer the question “How do you know your metrics are valid?”


* Median session length is a more useful reliability metric. It’s possible to fix crashes and see no change in your percentage of failed sessions, if fixing crashes simply causes sessions to become longer.

Fast Builds: Incremental Linking and Embedded SxS Manifests

As I’ve said before, fast builds are crucial for efficient development. But for those of us who use C++ regularly, link times are killer. It’s not uncommon to spend minutes linking your compiled objects into a single binary. Incremental linking helps a great deal, but, as you’ll see, incremental linking has become a lot harder in the last few versions of Visual Studio…

Linking an EXE or DLL is a very expensive operation — it’s roughly O(N) where N is the amount of code being linked. Worse, several optimizing linkers defer code generation to link time, exacerbating the problem! When you’re trying to practice TDD, even a couple seconds in your red-green-refactor iteration loop is brutal. And it’s not uncommon for large projects to spend minutes linking.

Luckily, Visual C++ supports an /INCREMENTAL flag, instructing relinks to modify the DLL or EXE in-place, reducing link time to O(changed code) rather than O(all code). In the olden days of Visual C++ 6, all you had to do was enable /INCREMENTAL, and bam, fast builds.

These days, it’s not so simple. Let’s take an excursion into how modern Windows finds DLL dependencies…

Side-by-Side (SxS) Manifests

Let’s say you’re writing a DLL foo.dll that depends on the CRT by using, say, printf or std::string. When you link foo.dll, the linker will also produce foo.dll.manifest. Windows XP and Vista use .manifest files to load the correct CRT version. (This prevents DLL hell: two programs can depend on different versions of the same DLL.)

Since remembering to carry around .manifest files is annoying and error-prone, Microsoft and others recommend that you embed them into your EXE or DLL as a resource:

mt.exe -manifest foo.dll.manifest -outputresource:foo.dll;2

Convenient, but it modifies the DLL in place, breaking incremental links! This is a known problem, and the “solutions” others suggest are INSANE. My favorite is the 300-line makefile with a note from the author “[If this does not work], please let me know ASAP. I will try fixing it for you.” Why doesn’t Visual Studio just provide an /EMBEDMANIFESTRESOURCE flag that would automatically solve the problem?!

I just want incremental linking and embedded manifests. Is that so much to ask? I tried a bunch of approaches. Most didn’t work. I’ll show them, and then give my current (working) approach. If you don’t care about the sordid journey, skip to the end.

What Didn’t Work

  • Not embedding manifests at all.

What went wrong: I could never figure out the rules whereby manifest dependencies are discovered. If python.exe depends on the release CRT and your module DLL depends on the debug CRT, and they live in different directories (??), loading the module DLL would fail. Gave up.

  • Linking a temporary file (foo.pre.dll), making a copy (foo.pre.dll -> foo.dll), and embedding foo.pre.dll.manifest into foo.dll with mt.exe.

What went wrong: As far as I can tell, mt.exe is a terrible piece of code. In procmon I’ve watched it close file handles it didn’t open, causing permissions violations down the line. (?!) Sometimes it silently corrupts your EXEs and DLLs too. This may be a known weakness in UpdateResource. Yay! (Thanks to Kevin Gadd; he was instrumental in diagnosing these bugs.) mt.exe may or may not be fixed in recent Visual Studios. Either way, I’m convinced mt.exe has caused us several intermittent build failures in the past. Avoiding it is a good thing.

  • Linking to a temporary file (foo.pre.dll), generating a resource script (foo.pre.rc) from (foo.pre.dll.manifest), compiling said resource script (foo.pre.res), and including the compiled resource into the final link (foo.dll).

What went wrong: This approach is reliable but slow. Linking each DLL and EXE twice, even if both links are incremental, is often slower than just doing a full link to begin with.

  • Linking foo.dll with foo.dll.manifest (via a resource script, as above) if it exists. If foo.dll.manifest changed as a result of the link, relink.

I didn’t actually try this one because non-DAG builds scare me. I like the simplicity and reliability of the “inputs -> command -> outputs” build model. It’s weird if foo.dll.manifest is an input and an output of the link. Yes, technically, that’s how incremental linking works at all, but the non-DAG machinery is hidden in link.exe. From SCons’s perspective, it’s still a DAG.

Finally, a working solution:

For every build configuration {debug,release} and dependency {CRT,MFC,…}, link a tiny program to generate that dependency’s manifest. Reference the manifest from a resource script, compile it (.rc -> .res), and link the compiled manifest resource into your other DLLs and EXEs.

This approach has several advantages:

  • These pre-generated manifest resources are created once and reused in future builds, with no impact on build time.
  • The build is a DAG.
  • We avoid letting mt.exe wreak havoc on our build by sidestepping it entirely.

I can think of one disadvantage: you need to know up front which SxS DLLs you depend on. For most programs, the CRT is the only one. Hopefully understanding your dependencies isn’t a bad thing, though. ;)

After several evenings of investigation, we’re back to the same link times we had with Visual C++ 6! Yay!


The Code

If you care, here’s our SCons implementation of embedded manifests:

# manifest_resource(env, is_dll) returns a manifest resource suitable for inclusion into
# the sources list of a Program or SharedLibrary.
manifest_resources = {}
def manifest_resource(env, is_dll):
    if is_dll:
        resource_type = 2 #define ISOLATIONAWARE_MANIFEST_RESOURCE_ID 2
    else:
        resource_type = 1 #define CREATEPROCESS_MANIFEST_RESOURCE_ID  1

    is_debug = env['DEBUG'] # could use a 'build_config' key if we had more than debug/release
    del env

    def build_manifest_resource():
        if is_debug:
            env = baseEnv.Clone(tools=[Debug])
        else:
            env = baseEnv.Clone(tools=[Release])
        env['LINKFLAGS'].remove('/MANIFEST:NO')

        if is_dll:
            linker = env.SharedLibrary
            target_name = 'crt_manifest.dll'
            source = env.File('#/MSVC/crt_manifest_dll.cpp')
        else:
            linker = env.Program
            target_name = 'crt_manifest.exe'
            source = env.File('#/MSVC/crt_manifest_exe.cpp')

        env['OUTPUT_PATH'] = '#/${BUILDDIR}/${IMVU_BUILDDIR_NAME}/%s' % (target_name,)

        obj = env.SharedObject('${OUTPUT_PATH}.obj', source)
        result = linker([env.File('${OUTPUT_PATH}'), '${OUTPUT_PATH}.manifest'], obj)
        manifest = result[1]

        def genrc(env, target, source):
            [target] = target
            [source] = source
            # 24 = RT_MANIFEST
            file(target.abspath, 'w').write('%d 24 "%s"' % (resource_type, source.abspath,))

        rc = env.Command('${OUTPUT_PATH}.rc', manifest, genrc)
        res = env.RES('${OUTPUT_PATH}.res', rc)
        env.Depends(res, manifest)
        return res
    
    key = (is_debug, resource_type)
    try:
        return manifest_resources[key]
    except KeyError:
        res = build_manifest_resource()

        manifest_resources[key] = res
        return res

Fast Builds: Unintrusive Precompiled Headers (PCH)

Fast builds are critical to the C++ programmer’s productivity and happiness. One common technique for reducing build times is precompiled headers (PCH). There’s plenty of literature out there; I won’t describe PCH in detail here.

But one thing that’s always bothered me about PCH is that it affects your code. #pragma hdrstop and #include "StdAfx.h" everywhere. Gross.

I’m a strong believer in clean code without boilerplate, so can’t we do better? Ideally we could make a simple tweak to the build system and see build times magically improve. Enno enticed me with mentions of his fast builds, so I took a look…

Using PCH in Visual C++ requires a header (call it Precompiled.h) that includes all of the expensive dependencies:

#include <vector>
#include <map>
#include <iostream>
#include <fstream>
#include <boost/python.hpp>
#include <windows.h>
#include <mmsystem.h>

Additionally, we need a source file (let’s get creative and call it Precompiled.cpp), which is empty except for #include "Precompiled.h".

Compile Precompiled.cpp with /Yc Precompiled.h to generate Precompiled.pch, the actual precompiled header. Then, use the precompiled header on the rest of your files with /Yu Precompiled.h.
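
Assuming stock cl.exe usage, those two steps look something like this (Foo.cpp and Bar.cpp are hypothetical sources):

cl /c /YcPrecompiled.h Precompiled.cpp
cl /c /YuPrecompiled.h Foo.cpp Bar.cpp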

OK, here’s the step that prevented me from using PCH for so long: every single source file in your project must #include "Precompiled.h" on its first line.

That’s ridiculous! I don’t want to touch every file!

It turns out our savior is the /FI option. From the documentation:

This option has the same effect as specifying the file with double quotation marks in an #include directive on the first line of every source file specified on the command line […]

Exactly what we want!

But wait, doesn’t that mean every .cpp in our project will have access to every symbol included by the PCH? Yes. :( It’s worth the build speedup.

However, explicit physical dependencies are important, and the only way to keep them from silently breaking is to block commits when they do. Since enabling and disabling PCH does not require any code changes, it’s easy enough to add a “disable PCH” option to your build system and run it on your continuous integration server:

Compile without PCH

If somebody uses std::string but forgets to #include <string>, the build will fail and block commits.

In the end, here’s the bit of SCons magic that lets me quickly drop PCH into a project:

def enable_pch(env, source_file, header):
    if PCH_ENABLED:
        PCH, PCH_OBJ = env.PCH(source_file)
        env['PCH'] = PCH
        env['PCHSTOP'] = header
        env.Append(CPPFLAGS=['/FI' + header])
        return [PCH_OBJ]
    else:
        return [source_file]

Now you can benefit from fast builds with minimal effort and no change to your existing code!

Visualizing Python Import Dependencies

In a large Python program such as IMVU, startup time is dominated by Python module imports. Take these warm timings:

$ time python -c 'None'

real    0m0.096s
user    0m0.077s
sys     0m0.061s

$ time python -c 'import urllib2'

real    0m0.314s
user    0m0.155s
sys     0m0.186s

That’s 300ms for a single basic dependency. Importing the entire IMVU client takes 1.5s warm and 20s cold on a typical user’s machine.

Loading

The IMVU client’s loading progress bar imports modules bottom-up; that is, leaf modules are imported before their parents. The root module is imported last.

Implementing a bottom-up import sequence requires generating a graph of dependencies between modules:

import sys
import types
import __main__

def get_dependencies(module_name):
    """\
    Takes a module name as input (e.g. 'xml.dom') and returns a set of
    (lhs, rhs) tuples where lhs and rhs are module names and lhs
    imports rhs.
    """
    
    # module_from_key is a dict from a module key, an arbitrary
    # object, to a module object.  While importing, we discover
    # dependencies before we have access to the actual module objects.
    
    # import_dependencies is a list of (lhs, rhs) tuples where lhs and
    # rhs are module keys, and module_from_key[lhs] imported
    # module_from_key[rhs].

    root_key = object()
    module_from_key = {root_key: __main__}
    import_dependencies = []
    stack = [root_key]

    def import_in_stack(key, name, globals, locals, fromlist, level):
        stack.append(key)
        try:
            return original_import(name, globals, locals, fromlist, level)
        finally:
            stack.pop()

    import __builtin__
    original_import = __builtin__.__import__

    def my_import(name, globals=globals(), locals=locals(), fromlist=[], level=-1):
        # fromlist is a whore.  Most of the complexity in this
        # function stems from fromlist's semantics.  See
        # http://docs.python.org/library/functions.html#__import__
        
        # If a module imports 'xml.dom', then the module depends on
        # both 'xml' and 'xml.dom' modules.
        dotted = name.split('.')
        for i in range(1, len(dotted)):
            my_import('.'.join(dotted[0:i]), globals, locals, [], level)

        module_key = object()
        parent_key = stack[-1]

        def add_dependency_from_parent(key, m):
            module_from_key[key] = m
            import_dependencies.append((parent_key, key))

        submodule = import_in_stack(module_key, name, globals, locals, ['__name__'], level)
        add_dependency_from_parent(module_key, submodule)

        for f in (fromlist or []):
            from_key = object()
            module = import_in_stack(from_key, name, globals, locals, [f], level)
            if f == '*':
                continue
            submodule = getattr(module, f)
            if isinstance(submodule, types.ModuleType):
                add_dependency_from_parent(from_key, submodule)

        return original_import(name, globals, locals, fromlist, level)

    # Import module_name with import hook enabled.
    original_modules = sys.modules.copy()
    __builtin__.__import__ = my_import
    try:
        my_import(module_name)
    finally:
        __builtin__.__import__ = original_import
        sys.modules.clear()
        sys.modules.update(original_modules)

    assert stack == [root_key], stack

    return sorted(set(
        (module_from_key[lhs].__name__, module_from_key[rhs].__name__)
        for lhs, rhs in import_dependencies
    ))

(You can view all of the code at SourceForge).

First, we install an __import__ hook that discovers import dependencies between modules, and convert those relationships into a directed graph:

xml.dom.minidom

Then, we merge cycles. If module A imports B, B imports C, and C imports A, then it doesn’t matter which you import first. Importing A is equivalent to importing B or C. After this step, we have a DAG:

xml.dom.minidom DAG

Finally, we can postorder traverse the DAG to determine a good import sequence and cost (approximated as the number of modules in the cycle) per import:

1 xml
3 xml.dom
1 copy_reg
1 types
1 copy
1 xml.dom.NodeFilter
1 xml.dom.xmlbuilder
1 xml.dom.minidom
1 __main__

Now let’s look at some less-trivial examples. urllib2:

urllib2

SimpleXMLRPCServer:

SimpleXMLRPCServer

imvu.task:

imvu.task

Final notes: Other people have solved this problem with bytecode scanning, but we wanted to know the actual modules imported for an accurate progress bar. A simpler __import__ hook could have calculated the correct import sequence, but I find a visual representation of module dependencies to have additional benefits.

Flushing the Windows Disk Cache

Occasionally, I want to test the performance of a program after a cold boot, or maybe after the computer has been idle for hours and the program has been paged out. For example, the IMVU client starts relatively quickly when the disk cache is warm, but at system boot, it can take quite a while for the login dialog to even appear. Iterating in these situations is a pain in the butt because you have to reboot or leave your computer idle for hours.

I’m sure there exists a program which will flush the disk caches and force programs out of memory and into the page file, but I can’t find it. So I wrote one.

First, a caveat: programs these days rarely handle out-of-memory situations, so running flushmem.exe might cause open applications to explode like popcorn. Buyer beware, etc.

After running flushmem.exe, you should find that your computer becomes painfully slow as applications are paged back into memory and the disk cache is refilled. Perfect. Now I can realistically simulate the experiences of our users.

You can download the program here or on the FlushMem page.

Implementation details: in Windows, each process has a 2 GB user mode address space limit by default. If physical memory + page file size is greater than 2 GB, flushmem spawns multiple processes. Each process allocates memory in 64 KiB chunks until it can’t anymore, and then writes to each page, forcing older pages out to the page file.
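
Here’s a hedged sketch of that core loop; the multi-process spawning is omitted, and the names are mine rather than flushmem’s actual code:

#include <windows.h>
#include <vector>

void flushMemory() {
    const size_t CHUNK = 64 * 1024;
    SYSTEM_INFO si;
    GetSystemInfo(&si);

    // Allocate 64 KiB chunks until allocation fails.
    std::vector<char*> chunks;
    for (;;) {
        char* p = static_cast<char*>(
            VirtualAlloc(0, CHUNK, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE));
        if (!p) {
            break;
        }
        chunks.push_back(p);
    }

    // Write to each page so it must be backed by physical memory,
    // forcing older pages out to the page file.
    for (size_t i = 0; i < chunks.size(); ++i) {
        for (size_t offset = 0; offset < CHUNK; offset += si.dwPageSize) {
            chunks[i][offset] = 1;
        }
    }

    for (size_t i = 0; i < chunks.size(); ++i) {
        VirtualFree(chunks[i], 0, MEM_RELEASE);
    }
}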

Reporting Crashes in IMVU: Who threw that C++ exception?

It’s not often that I get to write about recent work. Most of the techniques in this series were implemented at IMVU years ago. A few weeks ago, however, a common C++ exception (tr1::bad_weak_ptr) started intermittently causing crashes in the wild. This exception can be thrown in a variety of circumstances, so we had no clue which code was problematic.

We could have modified tr1::bad_weak_ptr so its constructor fetched a CallStack and returned it from tr1::bad_weak_ptr::what(), but fetching a CallStack is not terribly cheap, especially in such a frequently-thrown-and-caught exception. Ideally, we’d only grab a stack after we’ve determined it’s a crash (in the top-level crash handler).

Allow me to illustrate:

void main_function(/*arguments*/) {
    try {
        try {
            // We don't want to grab the call stack here, because
            // we'll catch the exception soon.
            this_could_fail(/*arguments*/);
        }
        catch (const std::exception& e) {
            // Yup, exception is fine.  Just swallow and
            // do something else.
            fallback_algorithm(/*arguments*/);
        }
    }
    catch (const std::exception& e) {
        // Oh no! fallback_algorithm() failed.
        // Grab a stack trace now.
        report_crash(CallStack::here());
    }
}

Almost! Unfortunately, the call stack generated in the catch clause doesn’t contain fallback_algorithm. It starts with main_function, because the stack has already been unwound by the time the catch clause runs.

Remember the structure of the stack:

Example Stack

We can use the ebp register, which points to the current stack frame, to walk and record the current call stack. [ebp+4] is the caller’s address, [[ebp]+4] is the caller’s caller, [[[ebp]]+4] is the caller’s caller’s caller, and so on.
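
In code, walking the chain looks something like this hedged sketch for 32-bit x86 with frame pointers enabled; real code must also validate that each pointer stays within the thread’s stack:

#include <vector>

// Each frame begins with the caller's saved ebp, followed by the
// return address: [ebp] links the chain, [ebp+4] is the caller.
struct Frame {
    Frame* caller;        // [ebp]
    void* returnAddress;  // [ebp+4]
};

std::vector<void*> walkStack(Frame* ebp) {
    std::vector<void*> addresses;
    while (ebp) {
        addresses.push_back(ebp->returnAddress);
        ebp = ebp->caller;
    }
    return addresses;
}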

What can we do with this information? Slava Oks at Microsoft gives the clues we need. When you type throw MyException(), a temporary MyException object is constructed at the bottom of the stack and passed into the catch clause by reference or by value (as a copy deeper on the stack).

Before the catch clause runs, objects on the stack between the thrower and the catcher are destructed, and ebp is pointed at the catcher’s stack frame (so the catch clause can access parameters and local variables).

From within the outer catch block, here is the stack, ebp, and esp:

Stack From Catch Clause

Notice that every time an exception is caught, the linked list of stack frames is truncated: ebp is reset to the stack frame of the catcher, destroying our link to the thrower’s stack.

But there’s useful information between ebp and esp! We just need to search for it. We can find who threw the exception with this simple algorithm:

	For every possible pointer between ebp and esp,
	find the deepest pointer p,
	where p might be a frame pointer.
	(That is, where walking p eventually leads to ebp.)

Or you can just use our implementation.

With this in mind, let’s rewrite our example’s error handling:

void main_function(/*arguments*/) {
    try {
        try {
            this_could_fail(/*arguments*/);
        }
        catch (const std::exception& e) {
            // that's okay, just swallow and
            // do something else.
            fallback_algorithm(/*arguments*/);
        }
    }
    catch (const std::exception& e) {
        // oh no! fallback_algorithm() failed.
        // grab a stack trace - including thrower!
        Context ctx;
        getCurrentContext(ctx);
        ctx.ebp = findDeepestFrame(ctx.ebp, ctx.esp);
        report_crash(CallStack(ctx));
    }
}

Bingo, fallback_algorithm appears in the stack:

main_function
fallback_algorithm
__CxxThrowException@8
_KiUserExceptionDispatcher@8
ExecuteHandler@20
ExecuteHandler2@20
___CxxFrameHandler
___InternalCxxFrameHandler
___CxxExceptionFilter
___CxxExceptionFilter
?_is_exception_typeof@@YAHABVtype_info@@PAU_EXCEPTION_POINTERS@@@Z
?_CallCatchBlock2@@YAPAXPAUEHRegistrationNode@@PBU_s_FuncInfo@@PAXHK@Z

Now we’ll have no problems finding the source of C++ exceptions!

IMVU’s CallStack API Now Open Source!

I’m proud to announce that IMVU has open-sourced its C++ CallStack API! It’s available under the MIT license at our SourceForge project. You can view the code here.

CallStack is a simple API for recording and displaying C++ call stacks on 32-bit Windows. To display the call stack at the current location:

printf("%s\n", CallStack::here().asString().c_str());

To grab a CallStack from an arbitrary thread:

HANDLE other_thread_handle = ...;
CallStack other_thread(other_thread_handle);

From a structured exception:

Context ctx;
CallStack cs;
__try {
	// broken code
}
__except (
	ctx = *(GetExceptionInformation()->ContextRecord),
	cs.getFromContext(ctx),
	EXCEPTION_EXECUTE_HANDLER
) {
	// display cs.asString()
}

At first, the format of CallStack.asString() is a bit confusing, but with your symbol server it contains everything necessary to generate a symbolic call stack, including file names and line numbers.

Here is an example CallStack.asString() result:

PYTHON25.DLL#b57f5c3ff1b64eda861d97643831ce701!000266dc
boost_python.dll#507f2f0a5fd34e65af25e728d0be9ebb1!0000d4bf
_avatarwindow.pyd#5289bbd0ff9c4ceab5198308f99ef9271!0002f76a

The lines are formatted module_name#module_hash!offset. module_name is the name of the DLL or EXE in which the function lives. module_hash is a unique hash that identifies a build of a particular module. offset is the offset of the line of code in bytes from the start of the module. With this information, you can look up a function name and line number for each entry in a call stack.

Fortunately, we have a tool that automates this process: symbol_dump.py! Running it with the previous call stack on the clipboard produces this output:

PYTHON25.DLL#b57f5c3ff1b64eda861d97643831ce701!000266dc
	...t\python-2.5.1-src\objects\abstract.c (1860): PyObject_Call
boost_python.dll#507f2f0a5fd34e65af25e728d0be9ebb1!0000d4bf
	...0\libs\python\src\object\function.cpp ( 614): function_call
_avatarwindow.pyd#5289bbd0ff9c4ceab5198308f99ef9271!0002f76a
	...\boost\function\function_template.hpp ( 132): boost::detail::function::function_obj_invoker2<boost::_bi::bind_t<bool,boost::python::detail::translate_exception<IMVUError,void (__cdecl*)(IMVUError const &)>,boost::_bi::list3<boost::arg<1>,boost::arg<2>,boost::_bi::value<void (__cdecl*)(IMVUError const

That last function name is pretty epic (as are most Boost or C++ function names), but notice that the call stack has accurate file names and line numbers.

The astute reader might ask “Don’t minidumps contain stack traces too?” The answer is yes, but minidumps are often inconvenient. Consider the common case:

  1. Open crash report
  2. Download mini.dmp to the desktop
  3. Open mini.dmp in Visual Studio
  4. Press F11
  5. Open the call stack debug window if it’s not open

With CallStack, we can shorten that to:

  1. Open crash report
  2. Copy the call stack
  3. Run symbol_dump.py

Also, for reasons I don’t understand, sometimes Visual Studio fails to produce an informative stack when CallStack succeeds.

CallStack is a handy tool for debugging crashes from the wild, and I’m happy that we were able to make it available.

IMVU Crash Reporting: Plugging the VC++ Runtime’s Escape Hatches

Modern software is built on top of piles of third-party components. The holy grail of reusable code has manifested in the form of an open source ecosystem. That said, in a startup, you rarely have time to audit the third-party components you use. These third-party components might be ridiculous enough to call abort() on error. It may sound scary, but with a fixed amount of work, you can turn these calls into reported structured exceptions.

Unfortunately, the Visual C++ Runtime provides several functions that abnormally terminate the process without running our crash handlers. As a bonus, they usually include user-incomprehensible dialog boxes. Let’s see:

  • abort()
  • “pure virtual” method calls
  • throwing an exception from a destructor during unwind
  • stack buffer overflow (with /GS enabled)
  • standard C++ library index/iterator error with checked iterators enabled

Since you can never prove that you’ve handled every way a third-party component can bypass your crash reporting, I’m just going to cover the ones we’ve implemented:

abort()

Result of calling abort()

Turning abort() into a structured exception is pretty straightforward. A quick read of the CRT source shows that abort() runs SIGABRT’s installed signal handler. It’s easy enough to install a custom handler that raises a structured exception:

void __cdecl onSignalAbort(int code) {
    // It's possible that this signal handler will get called twice
    // in a single execution of the application.  (On multiple threads,
    // for example.)  Since raise() resets the signal handler, put it back.
    signal(SIGABRT, onSignalAbort);

    RaiseException(EXC_SIGNAL_ABORT, 0, 0, 0);
}

...

// at program start:
signal(SIGABRT, onSignalAbort);

“Pure Virtual” Method Calls

Pure virtual function call

Ever see a program fail with that useless “pure virtual function call” error message? This happens when a base class’s constructor tries to call a pure virtual method implemented by a derived class. Since base class constructors run before derived class constructors, the object’s vtable still points at the base class’s vtable, whose pure virtual slots reference _purecall, a function normally defined by the CRT. _purecall() aborts the process, sidestepping our crash reporting. Code might better elucidate this situation:


struct Base;
void foo(Base* b);

struct Base {
    Base() {
        foo(this);
    }
    virtual void pure() = 0;
};
struct Derived : public Base {
    void pure() { }
};

void foo(Base* b) {
    b->pure();
}

Derived d; // boom

The fix is simple: just define a _purecall that shadows the CRT implementation:

int __cdecl _purecall() {
    RaiseException(EXC_PURE_CALL, 0, 0, 0);
    return 0;
}

Throwing an Exception from a Destructor During Unwind

C++ is aggressive about making sure you don’t throw an exception while another exception is in the air. If you do, its default behavior is to terminate your process. From MSDN: “If a matching handler is still not found, or if an exception occurs while unwinding, but before the handler gets control, the predefined run-time function terminate is called. If an exception occurs after throwing the exception, but before the unwind begins, terminate is called.”

To convert calls to terminate() (and unexpected(), for completeness) into structured exceptions, override the terminate handler with set_terminate (and set_unexpected):

void onTerminate() {
    RaiseException(EXC_TERMINATE, 0, 0, 0);
}
void onUnexpected() {
    RaiseException(EXC_UNEXPECTED, 0, 0, 0);
}

// at program start:
set_terminate(onTerminate);
set_unexpected(onUnexpected);

Standard C++ Library Index/Iterator Error with Checked Iterators Enabled

The IMVU client is compiled with _SECURE_SCL enabled. Increased reliability from knowing exactly where failures occur is more important than the very minor performance hit of validating all iterator accesses.

There are two ways to convert invalid iterator uses into reported exceptions. The easiest is compiling with _SECURE_SCL_THROWS=1. Otherwise, just install your own invalid_parameter handler with _set_invalid_parameter_handler.
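
For the second route, here’s a minimal sketch. EXC_INVALID_PARAMETER is a hypothetical custom exception code, like the others in this post; note that in release builds the CRT passes null for all of the handler’s parameters:

#include <cstdlib>
#include <cstdint>
#include <windows.h>

void onInvalidParameter(
    const wchar_t* expression,
    const wchar_t* function,
    const wchar_t* file,
    unsigned int line,
    uintptr_t reserved
) {
    // Convert the iterator/parameter failure into a structured exception
    // so the top-level crash handler can report it.
    RaiseException(EXC_INVALID_PARAMETER, 0, 0, 0);
}

// at program start:
_set_invalid_parameter_handler(onInvalidParameter);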

Stack Buffer Overflow (with /GS enabled)

By default, Visual C++ generates code that detects and reports stack buffer overruns. This prevents a common class of application security holes. Unfortunately, the stock implementation of this feature does not allow you to install your own handler, which means you can’t report any buffer overruns.

Again, we can shadow a CRT function to handle these failures. From C:\Program Files\Microsoft Visual Studio 8\VC\crt\src\gs_report.c, copy the __report_gsfailure function into your application. (You did install the CRT source, didn’t you?) Instead of calling UnhandledExceptionFilter at the bottom of __report_gsfailure, call your own last-chance handler or write a minidump.

Testing Crash Reporting

Writing tests for the above reporting mechanisms is super fun. Take everything you’re told not to do and implement it. I recommend adding these crashes to your UI somewhere so you can directly observe what happens when they occur in your application. Here is our crash menu:

Crash Menu

These types of failures are rare, but when we hit them, I’m glad we implemented this reporting.