Recent Posts

  • Reference Counting Things

    Reference counting is cheap and easy. An integer starts at one, increments on every new reference, and whoever decrements it to zero is responsible for deallocation.

    If references are shared across threads, increments and decrements must be atomic.

    Decades ago, I wrote an audio library that shipped in a couple commercial games. Things you’d find on CD in the bargain bin at Walmart. The ABI was modeled after COM and most objects were reference-counted. At the time I’d never seen a dual-CPU system, and thought inc [refcount] and dec [refcount] were single instructions. It would be fine, right?!

    Dual-core didn’t yet exist, but some people had dual-socket boards, and we started seeing crash reports after the CDs were burned… oops.

    (On the bright side, since I was religious about maintaining stable ABIs, users could just drop the fixed DLL into place.)

    Cost of Atomics

    Atomics are more expensive than non-atomic operations. inc takes a handful of cycles; lock inc, even uncontended, can take dozens.

    When C++ standardized std::shared_ptr in 2011, it didn’t even bother with a non-atomic version. C++ isn’t safe enough to know statically whether an object might be shared across threads, and there was a feeling that atomic increments and decrements were common enough that they’d get optimized in hardware. That was correct – it just took a while.

    Rust’s safety guarantees, on the other hand, allow safe use of an unsynchronized Rc if you don’t want to pay for Arc.

    It’s pretty easy for reference counting overhead to show up in profiles. Sometimes it’s the accidental shared_ptr copy in a hot loop or a recursive .clone() in Rust. Last time I wrote Swift, atomic reference counts were a major cost.

    The hardware is getting better. On Apple Silicon and AMD Zen 3, uncontended atomic increments and decrements are almost as cheap as non-atomic. (Interestingly, atomics are also cheap on my 64-bit, 4-thread Intel Atom from 2011.) These optimizations are a big deal, and if all CPUs worked that way, maybe this blog post would end here.

    Alas, data centers are still filled with years-old Intel CPUs or non-Apple ARM implementations. It’s worth spending some time in software to avoid synchronization if possible.

    Avoid 0-to-1

    Here’s an easy but commonly missed trick: initialize your reference counts to 1.

    For whatever reason (symmetry?), it’s common to see implementations like:

    struct Object {
      std::atomic<size_t> count{0};
    };
    
    struct ObjectPtr {
      ObjectPtr(Object* p): p{p} {
        p->count.fetch_add(1, std::memory_order_relaxed);
      }
      Object* p;
    };
    

    I haven’t seen a compiler realize it can replace the initial value with 1 and avoid atomics when new objects are allocated.
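
    Here’s a minimal sketch of the alternative, assuming a tag type (my invention, not from any particular library) distinguishes “adopt the reference created at allocation” from “copy an existing reference”:

    #include <atomic>
    #include <cstddef>
    
    struct Object {
      // Newly allocated objects are born with one reference; no atomic RMW needed.
      std::atomic<size_t> count{1};
    };
    
    struct AdoptTag {};
    
    struct ObjectPtr {
      // Adopt the initial reference from `new Object` -- skips the 0-to-1 increment.
      ObjectPtr(Object* p, AdoptTag) : p{p} {}
    
      // Copies still increment as usual.
      ObjectPtr(const ObjectPtr& other) : p{other.p} {
        p->count.fetch_add(1, std::memory_order_relaxed);
      }
    
      Object* p;
    };
    
    // Usage: ObjectPtr first{new Object, AdoptTag{}};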

    Avoid 1-to-0

    A typical release implementation is written:

    struct ObjectPtr {
      ~ObjectPtr() {
        if (1 == p->count.fetch_sub(1, std::memory_order_acq_rel)) {
          delete p;
        }
      }
      Object* p;
    };
    

    However, actually decrementing the count to zero is not necessary. We only need to know if we’re the last reference. Thus, we can write:

      ~ObjectPtr() {
        if (1 == p->count.load(std::memory_order_acquire) ||
            1 == p->count.fetch_sub(1, std::memory_order_acq_rel)) {
          delete p;
        }
      }
    

    Maybe the impact on code size isn’t worth it. That’s your call. On older Intel CPUs, in situations where most objects only have one reference, it can be a meaningful optimization.

    Maged Michael implemented a fancier version of this algorithm in gcc’s libstdc++.

    Implementing these two optimizations in Watchman was a material win for code that allocated or deallocated large trees.

    Biased Reference Counting

    Swift implicitly reference-counts many of its objects. When I worked at Dropbox, we measured reference counting operations as a substantial portion of our overall CPU time.

    In 2018, researchers at University of Illinois Urbana-Champaign proposed an algorithm called Biased Reference Counting that splits the reference count into two. One is biased to a specific thread and can be updated without atomic operations. The other reference count is atomic and shared among the remaining threads. Unifying these two counts requires extra bookkeeping, especially in languages like Swift or C++ where unsynchronized values can easily migrate across threads.
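
    As a rough illustration of the fast path only (this is not how Swift or the paper implements it, and it omits the hard part entirely: safely merging the biased count into the shared count when the owning thread gives up its bias):

    #include <atomic>
    #include <cstddef>
    #include <thread>
    
    struct BiasedCount {
      std::thread::id owner = std::this_thread::get_id();
      size_t biased = 1;              // owner thread only: plain, non-atomic updates
      std::atomic<size_t> shared{0};  // every other thread
    
      void incref() {
        if (std::this_thread::get_id() == owner) {
          ++biased;  // fast path: no atomic RMW
        } else {
          shared.fetch_add(1, std::memory_order_relaxed);
        }
      }
      // decref and the owner/shared unification protocol are the tricky parts
      // and are deliberately left out of this sketch.
    };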

    The hybrid_rc Rust crate has an implementation of this algorithm that takes advantage of Rust’s type system (in particular, by not providing Send for thread-local references) to avoid extra bookkeeping.

    I’m curious if anyone uses biased reference counting in practice.

    Split Reference Counting

    Channel and promise implementations need to track two reference counts: one for readers and one for writers. When either reaches zero, the channel is closed. Waiting senders or receivers are notified that no more messages can be sent.

    Rust’s built-in channels use two atomic counters and an atomic bit. The bit is necessary to determine which thread should deallocate in the case that a thread drops the last reader exactly as another thread drops the last writer.

    It’s possible to pack all of these into a single 64-bit counter. If each half has 32 bits but the entire counter is updated atomically, no additional state is required to disambiguate who deallocates.
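
    Here’s a sketch of that idea in C++ (splitrc itself is Rust, and unlike this sketch it also handles overflow). Because both halves move in a single atomic operation, the fetch_sub that observes the counter dropping to exactly zero is unambiguously the one that deallocates:

    #include <atomic>
    #include <cstdint>
    
    constexpr uint64_t kSender = 1;             // one sender, low 32 bits
    constexpr uint64_t kReceiver = 1ull << 32;  // one receiver, high 32 bits
    
    struct ChannelCounts {
      // Start with one sender and one receiver.
      std::atomic<uint64_t> packed{kSender | kReceiver};
    
      void add_sender() { packed.fetch_add(kSender, std::memory_order_relaxed); }
      void add_receiver() { packed.fetch_add(kReceiver, std::memory_order_relaxed); }
    
      struct DropResult {
        bool side_closed;  // this side just hit zero: close the channel
        bool deallocate;   // the whole counter hit zero: we free the channel
      };
    
      DropResult drop_sender() {
        uint64_t prev = packed.fetch_sub(kSender, std::memory_order_acq_rel);
        return {(prev & 0xffffffff) == 1, prev == kSender};
      }
    
      DropResult drop_receiver() {
        uint64_t prev = packed.fetch_sub(kReceiver, std::memory_order_acq_rel);
        return {(prev >> 32) == 1, prev == kReceiver};
      }
    };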

    I have a Rust implementation of the above in the splitrc crate.

    How Many Bits?

    Rust is sound: safe code must not have undefined behavior. std::mem::forget is a safe function. Therefore, safe code can run up the reference count of some shared pointer p in a tight loop such as:

    loop {
      std::mem::forget(p.clone());
    }
    

    64-bit counters are effectively infinite. Let’s hypothesize a 4 GHz CPU where increments take one cycle. It would take almost 150 years to overflow.
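    (2^64 increments at 4 × 10^9 per second is about 4.6 × 10^9 seconds, or roughly 146 years.)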

    In contrast, a modern CPU can overflow a 32-bit counter in seconds. You might say (and I’d agree) that a program holding billions of references is pathological and need not be supported. On the other hand, in Rust, safe code must never be able to overflow a count and cause a use-after-free.

    Therefore, any 32-bit counter (even usize and AtomicUsize on 32-bit CPUs) must detect and handle overflow.

    Rc uses usize::wrapping_add to detect wraparound. Arc reserves half the range of usize to detect overflow. This is safe under the assumption that billions of threads aren’t simultaneously incrementing the counter.
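
    A minimal sketch of the reserve-half-the-range approach (this mirrors the spirit of Arc’s check, not its exact code):

    #include <atomic>
    #include <cstdint>
    #include <cstdlib>
    
    void incref(std::atomic<uint32_t>& count) {
      uint32_t prev = count.fetch_add(1, std::memory_order_relaxed);
      if (static_cast<int32_t>(prev) < 0) {
        // More than 2^31 live references: assume a forget/leak loop and fail
        // loudly before the counter can wrap all the way around to zero.
        std::abort();
      }
    }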

    Rust reference counts typically abort on overflow rather than panic. I assume this is because panics can be caught and ignored. There may be codegen benefits as well. However, in the context of long-lived server processes that concurrently handle requests, it’s nice to catch panics and fail the one buggy request instead of aborting.

    splitrc allocates a panic range and an abort range to get the best of both worlds.

    In practice, overflowing a reference count should never happen. Reference counts should never get that high. But that sounds like famous last words, and I’ll happily pay a branch and some cold code for a loud failure.

    Older versions of the Linux kernel even had a use-after-free caused by reference count overflow.

    Weak References

    Like split reference counts, supporting weak references requires maintaining two counts: a strong count and a weak count. When the strong count reaches zero, the referenced value is destructed. But the memory can’t be deallocated until both counts reach zero.

    The approach taken by Rust’s Arc is to maintain two separate counters. All strong references share an extra weak reference. When the last strong reference is dropped, the extra weak reference is dropped too.

    Memory is deallocated when the last weak reference reaches zero.

    libc++ takes a similar approach with the interesting caveat that it starts counting at zero and waits until the counts decrement to -1.

    Supporting weak references has a small cost. You need space for two counters and some implementations actually perform two atomic decrements when the last strong reference is dropped.

    It’s possible to do better: like splitrc, the strong and weak references can be packed into a single 64-bit integer with overflow detection on each half. Each new reference is a single atomic addition. As in the 1-to-0 optimization above, an optimistic load can avoid an atomic RMW in the common case that no weak references are alive.

    If you don’t need weak references, the Rust triomphe crate provides some faster alternatives to the standard library.

    Count First Reference From Zero or One?

    It’s typical for reference counts to start at one and decrement to zero. But that’s not the only option. As mentioned above, libc++ initializes its references to value zero, meaning one reference. Decrement checks whether the count underflows to -1.

    Unless you’re the most standard of libraries, the tiny differences in instruction selection don’t matter. But they’re fun to look at, so let’s see. (Compiler Explorer)

    Initializing values to zero is smaller in most ISAs:

    struct RC {
        size_t s;
        size_t w;
    };
    
    void init_zero(RC& rc) {
        rc.s = 0;
        rc.w = 0;
    }
    
    void init_one(RC& rc) {
        rc.s = 1;
        rc.w = 1;
    }
    

    x86-64 (gcc 13.2):

    init_zero(RC&):
            pxor    xmm0, xmm0
            movups  XMMWORD PTR [rdi], xmm0
            ret
    init_one(RC&):
            movdqa  xmm0, XMMWORD PTR .LC0[rip]
            movups  XMMWORD PTR [rdi], xmm0
            ret
    

    gcc chooses to load the pair of ones from a 128-bit constant. clang instead generates two stores.

    x86-64 (clang 17):

    init_zero(RC&):                       # @init_zero(RC&)
            xorps   xmm0, xmm0
            movups  xmmword ptr [rdi], xmm0
            ret
    init_one(RC&):                        # @init_one(RC&)
            mov     qword ptr [rdi], 1
            mov     qword ptr [rdi + 8], 1
            ret
    

    ARM64 gcc generates equivalent code to x86-64. clang on ARM64 instead broadcasts a constant 1 into a vector and stores it.

    64-bit ARM (clang 17):

    init_zero(RC&):                       // @init_zero(RC&)
            stp     xzr, xzr, [x0]
            ret
    init_one(RC&):                        // @init_one(RC&)
            mov     w8, #1                          // =0x1
            dup     v0.2d, x8
            str     q0, [x0]
            ret
    

    As expected, zero-initialization is slightly cheaper.

    Increment will generate the same instructions no matter where the count starts, of course. (If using a 32-bit counter, overflow checks are required. Choosing an overflow range that allows branching on the sign bit can generate a smaller hot path, but that’s almost independent of where to start counting.)

    Decrement is a little interesting.

    void dec_zero_exact(std::atomic<size_t>& c) {
        if (0 == c.fetch_sub(1, std::memory_order_acq_rel)) {
            dealloc();
        }
    }
    
    void dec_zero_less(std::atomic<size_t>& c) {
        using ssize_t = std::make_signed_t<size_t>;
        if (0 >= static_cast<ssize_t>(c.fetch_sub(1, std::memory_order_acq_rel))) {
            dealloc();
        }
    }
    
    void dec_one(std::atomic<size_t>& c) {
        if (1 == c.fetch_sub(1, std::memory_order_acq_rel)) {
            dealloc();
        }
    }
    

    Let’s look at x86-64:

    dec_zero_exact(std::atomic<unsigned long>&):    # @dec_zero_exact(std::atomic<unsigned long>&)
            mov        rax, -1
            lock xadd  qword ptr [rdi], rax
            test       rax, rax
            je         dealloc()@PLT                # TAILCALL
            ret
    dec_zero_less(std::atomic<unsigned long>&):     # @dec_zero_less(std::atomic<unsigned long>&)
            lock dec  qword ptr [rdi]
            jl        dealloc()@PLT                 # TAILCALL
            ret
    dec_one(std::atomic<unsigned long>&):           # @dec_one(std::atomic<unsigned long>&)
            lock dec  qword ptr [rdi]
            je        dealloc()@PLT                 # TAILCALL
            ret
    

    There are two atomic decrement instructions, lock dec and lock xadd. lock dec is slightly preferable: it has a similar cost, but its latency is one cycle less on Zen 4, and it’s smaller. (lock xadd also requires loading -1 into a register.)

    But, since it doesn’t return the previous value and only sets flags, it can only be used if a following comparison can use those flags.

    Therefore, on x86-64, counting from 1 is slightly cheaper, at least with a naive comparison. However, if we sacrifice half the range of the counter type (again, two billion should be plenty), then we can get the same benefits in the counting-from-zero decrement.

    Now let’s take a look at ARM64:

    dec_zero_exact(std::atomic<unsigned long>&):        // @dec_zero_exact(std::atomic<unsigned long>&)
            mov     x8, #-1                         // =0xffffffffffffffff
            ldaddal x8, x8, [x0]
            cbz     x8, .LBB2_2
            ret
    .LBB2_2:
            b       dealloc()
    dec_zero_less(std::atomic<unsigned long>&):         // @dec_zero_less(std::atomic<unsigned long>&)
            mov     x8, #-1                         // =0xffffffffffffffff
            ldaddal x8, x8, [x0]
            cmp     x8, #0
            b.le    .LBB3_2
            ret
    .LBB3_2:
            b       dealloc()
    dec_one(std::atomic<unsigned long>&):                // @dec_one(std::atomic<unsigned long>&)
            mov     x8, #-1                         // =0xffffffffffffffff
            ldaddal x8, x8, [x0]
            cmp     x8, #1
            b.ne    .LBB4_2
            b       dealloc()
    .LBB4_2:
            ret
    

    None of the atomic read-modify-writes on ARM64 set flags, so the value has to be explicitly compared anyway. The only difference is that comparing equality with zero is one fewer instruction.

    So there we go. All of this instruction selection likely comes out in the wash. I was hoping for a clear, Dijkstra-style winner. The strongest argument for starting counts at 1 is that the counter never underflows, which allows multiple counts to be packed into a single integer.

    False Sharing

    Where the reference count is positioned in the object can matter. If the reference count ends up in the same cache line as other frequently-modified data, concurrent workloads are penalized. By reducing false sharing and making RCU more scalable, Intel improved highly concurrent network performance in Linux by 2-130%.
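
    If you control the layout, one blunt fix is to give the hot counter its own cache line. A sketch, assuming 64-byte lines (std::hardware_destructive_interference_size is the portable spelling):

    #include <atomic>
    #include <cstddef>
    
    struct Object {
      // Keep the frequently-written count off the cache line holding other
      // frequently-written fields, at the cost of some padding.
      alignas(64) std::atomic<size_t> count{1};
      alignas(64) size_t hot_mutable_state = 0;
    };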

    There may be value in abstracting the count’s location through a vtable, like COM does.

    I’ll Stop Here

    Reference counting is a long-studied topic. There are saturating counts, counts that saturate into mark-and-sweep, counts that saturate into (logged) leaks, cycle detection, weighted reference counts, deferred increments, combining updates, and external counts, but you can read about those elsewhere.

    I mostly wanted to share some things I’ve recently run into.

  • Microsoft Sculpt Wired Conversion Mod

    I made a control board for the Microsoft Sculpt wireless keyboard that converts it to wired USB, and now my favorite keyboard is even better.

    The finished and installed board.
    Wired keyboard and the resulting project mess!
    USB cable and reset button.

    The QMK config is available at @chadaustin/qmk_firmware (keyboards/handwired/sculpt/), and the PCB design files at @chadaustin/wired-sculpt-pcb.

    I’m planning on making at least one more, so if you’d like one, maybe I can help.

    It’s a huge improvement. Latency is reduced by about 13 milliseconds, and with full control over the microcontroller’s firmware, you can customize keymaps and layers, and actually use the keyboard’s built-in LEDs.

    Why?

    Feel free to stop reading here — I am going to tell the sequence of events that led to this project. Besides some exposure to basic voltage and resistance circuits in college, I have very little electronics background. But, in a short time, I went from only barely knowing what a capacitor was to having a working PCB manufactured and assembled, and maybe this will inspire someone else to give it a try.

    Since developing RSI in college, I’ve exclusively used Microsoft’s ergonomic keyboards. And when I first tried the Sculpt, I instantly knew it was the best yet. The soft actuation, short key travel, and rigid frame are perfect for my hands. And because the number pad is a separate device, the distance to my mouse is shortened.

    My brother went out and bought one too. Not much later, he gave it to me, saying the latency was inconsistent and high, and unacceptable for gaming. I thought he was being uniquely sensitive, since I had no problem in Linux, Windows 7, or macOS. But then I updated to Windows 10 and saw exactly what he meant.

    It was like the keyboard would go to sleep if a key wasn’t pressed for a few seconds, and the first keypress after a wake would be delayed or, worse, dropped.

    And heaven forbid I use my USB 3 hub, whose EMI would disrupt the 2.4 GHz signal, and every other keypress would be unreliable. I’d gone as far as mounting the wireless transceiver directly under my keyboard, on the underside of my desk, and keys were still dropped.

    So, best keyboard ever. But wireless sucks. (But mostly in Windows 10? No idea about that.)

    Over the Hump

    What started this whole thing is that the EdenFS team was a bunch of keyboard enthusiasts. During the pandemic, as we were all at home, burning out and missing each other, we tried to think up some virtual team offsites. Wez offered to walk everyone through building a Sweet 16 Macro Pad.

    Assembled Sweet 16 underside. This is take two, after resoldering and cleaning the whole thing. Take one was a bit of a mess.

    So, okay, a keyboard is a matrix, with some diodes used to disambiguate the signalling, and a microcontroller that rapidly polls the matrix and reports events over USB…
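
    For illustration, here’s a host-side sketch of that scan loop. The dimensions are made up (not the Sculpt’s), and read_rows_for_column() is a stand-in for driving a column pin and reading the row pins through GPIO:

    #include <array>
    #include <cstdint>
    #include <cstdio>
    
    constexpr int kCols = 18, kRows = 8;  // made-up matrix dimensions
    
    // Stand-in for real hardware: drive one column low, read the row pins.
    uint8_t read_rows_for_column(int col) { return col == 2 ? 0b00000100 : 0; }
    
    int main() {
      std::array<uint8_t, kCols> previous{};
      for (int pass = 0; pass < 2; ++pass) {  // real firmware loops forever
        for (int col = 0; col < kCols; ++col) {
          uint8_t rows = read_rows_for_column(col);
          uint8_t changed = rows ^ previous[col];
          for (int row = 0; row < kRows; ++row) {
            if (changed & (1u << row)) {
              // Real firmware would debounce and emit a USB HID report here.
              std::printf("key (%d,%d) %s\n", col, row,
                          (rows & (1u << row)) ? "down" : "up");
            }
          }
          previous[col] = rows;
        }
      }
    }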

    So maybe I could fix the Sculpt! I bought a transceiver-less Sculpt off eBay for cheap and popped it open (thanks Emmanuel Contreras!), thinking maybe its controller could be flashed with new firmware that speaks USB. The Sculpt uses a Nordic Semiconductor nRF24LE1, but I was nowhere near capable of making use of that information at the time, though it did point me to Samy Kamkar’s horrifying guide on surreptitiously sniffing keystrokes from nearby (older) Microsoft wireless keyboards.

    I almost gave up here, but Per Vognsen suggested I scan the matrix myself, and it turns out Michael Fincham had already mapped out the matrix and soldered a Teensy 2.0++ board onto the Sculpt’s test pads, showing this was doable!

    So I ordered my own microcontroller to try the same thing.

    First, I bought an Arduino Pro Micro, like the Sweet 16 uses. Oh hey, 18 GPIO pins aren’t enough to drive the Sculpt’s 26-pin matrix. I looked at using an I2C GPIO expander, but it felt like taking on too much.

    Arduino Pro Micro. Wait, you need pins to scan a matrix?

    More pins? QMK’s Proton C has more pins! So I carefully soldered onto the test pads as Michael had shown was possible… and it worked!

    QMK Proton C. It's a beautiful board.
    Soldering test pads to Proton C.
    All test pads connected to Proton C. It works!

    Getting those wires to stick to the pads without shorting was tricky. (I hadn’t yet discovered how magical flux is.)

    The keyboard worked, but I couldn’t fit the board, its wires, and the new microcontroller into the case, and I wasn’t really happy leaving it in this state, even if I could pack it in somehow.

    I thought, all I really need is the ribbon cable connector, so I ordered a 30 pin, 1.0 mm pitch ribbon breakout and the pricier (but tons of pins!) Teensy 2.0++. Looking back, it’s cute that I was trying to save $10 on the microcontroller… You just have to get used to spending money on whatever saves you time.

    Ribbon cable breakout and Teensy 2.0++

    Well, it was almost as annoying to solder, and still didn’t fit. So much for saving money on microcontrollers.

    I thought about giving up. Is it really that bad that my keys don’t always register in games? Can I just tolerate some flakiness and latency?

    But Jon Watte offered to spend an entire day showing me how to use KiCad, design circuits, lay out PCBs, select components on Digi-Key, scan datasheets for the important information, and work with a PCB manufacturing house. Of course you never turn down opportunities like that.

    Designing the Final Board - Schematic

    Assuming, like me, you’ve never done this, I’ll summarize the steps.

    First you sketch out the circuit schematic.

    Schematic in KiCad. Most of this was informed by the datasheet and Atmel's design guides.

    Jon showed me several tricks in KiCad, like global labels, and starting with some standard resistor and capacitor values, but it’s very important that you go through the datasheets, because details can matter a ton.

    I knew I wanted the main processor to be the AT90USB1286 controller, and fortunately KiCad already had a symbol for it. Atmel has a comprehensive and accessible data sheet, which showed me I needed some 22 Ω resistors on the USB data lines, which of the ISP programmer lines needed resistors (and appropriate values), and that I needed to either pull HWB low, or provide a physical switch that pulls it low, in order to allow rebooting the device into USB firmware update mode.

    There are a bunch of things that are implicitly known to electrical engineers but that were new to me. You want:

    • a ground plane under the data lines and most of the microcontroller if possible.
    • an electrolytic or tantalum bypass capacitor on the main 5V power from USB.
    • ceramic filter capacitors on each power pin.
    • appropriate values for the resonance capacitors on your crystal.
    • electrostatic discharge protection! Turns out transients are common and it’s easy to fry a chip just by plugging it in.

    And then when you get into concerns like EMI and high-frequency signal integrity, the rabbit hole goes deep.

    I kept having to tell myself “it’s just a keyboard”, but it also helped that there are a great number of high-quality resources on these topics just a click away. I spent lots of time on EEVBlog.

    Before finishing the circuit design, Jon had me do a couple smart things. In case the factory-supplied USB bootloader didn’t work out, he suggested I add the footprint (but not a connector!) for an ISP programmer and a debug LED to prove code would work at all.

    Designing the Final Board - Physical Layout

    After arranging the schematic and ensuring it passed the electrical rules check, it was time to pick specific components. That is, the reference to a 220 Ω resistor is replaced with the Panasonic ERJ-3EKF2200V, 0603 surface mount.

    There are a couple things to keep in mind. For common components, like resistors and ceramic capacitors, there is a huge amount of choice. For example, I see over 1400 surface-mount 220 Ω resistors on Digi-Key. I tried to stick with one high-quality brand like Panasonic or Samsung for all of that stuff.

    The important thing is the physical form factor, which determines the footprint on the board. Once you pick a part, it has a size, and you need to tell KiCad which physical footprint should be assigned to that component. I used 0603 resistors, so I assigned each resistor in the schematic the “Resistor_SMD:R_0603_1608Metric” footprint.

    Same for everything else. Jon showed me how to draw my own footprints, but to avoid complexity, I was able to find appropriate footprints in KiCad’s standard libraries for every component I needed.

    When you import the schematic into Pcbnew, it’s time to figure out where things go. Where are the edges of the board? Make careful measurements here. Where do the mounting holes go? Where do you want the microcontroller? Where do you want the USB port?

    Measuring dimensions and mounting holes

    Also, you have to pick through-hole sizes and trace widths. Jon had me use .250 mm for the narrow traces and .500 mm for the wider ones, presumably from experience. I used the narrow traces for signalling and wide traces for power, though I’ve since heard it’s a good idea to use narrow traces between filter capacitors and VBUS.

    PCB layout in KiCad

    Of course, there’s some iteration between the schematic and the PCB. After physically placing the ribbon cable connector and MCU, the traces all crossed over each other, so I had to reassign all the pins so it made sense physically.

    There are also physical constraints about how USB data lines are run, and how the electrostatic protection chip wants to be placed for the most protection.

    So, as simple as this board is, I spent a fair amount of time getting all of that right.

    I found myself getting lost in the abstractness of holes and traces and footprints, so it was helpful to ground myself by occasionally loading the PCB in KiCad’s 3D viewer.

    3D View

    Designing the Final Board - Manufacturing and Testing Physical Fit

    I tried to find a low-cost prototyping service in the USA, but it looks like China is still the best option if you want a PCB manufactured and assembled for an amount I’m willing to spend on a keyboard.

    I saw PCBWay recommended somewhere, and it seemed like a fine choice. Their site has tutorials that walk you through submitting your Gerber files in a way they can process.

    Before buying any components or doing assembly, I figured it would be smart to do a test order, just to physically look at the board and make sure it fit.

    Good thing, because it didn’t! The mounting holes were about half a millimeter off, and the clearance was tight enough that half a millimeter mattered.

    First board!

    I couldn’t stop playing with it! It’s so magical to have the lines drawn in software turned into physical fiberglass and copper.

    Designing the Final Board - Assembly

    After making a couple adjustments and updating the version number and date on the silkscreen, I sent another order to PCBWay, this time requesting assembly service.

    Overall, I was impressed with their communication. They couldn’t get the specific LED I’d listed in my BOM and asked whether a substitution was okay.

    Then, after all the parts were sourced, they asked me to clarify the polarity of the main tantalum bypass capacitor, since I’d forgotten to indicate anything on the silkscreen.

    Finally, before shipping the assembled board, they sent me high-resolution photos of each side and asked me to confirm orientations and assembly.

    Top of assembled board
    Bottom of assembled board

    It all looked correct to me, though I later noticed that one of the traces is lifted. (There is still connectivity, and it’s not a huge deal, as that trace is only connected to an LED that I haven’t gotten to work anyway.)

    It took about a month for the assembled board to arrive. I checked the assembly status every day. Maybe next time I’ll expedite. :)

    Overall, I was pretty happy:

    • My first test order, the minimum, was cheap and came with a cute battery-powered LED Christmas tree ornament.
    • They made my board even though it was technically smaller than their minimum size.
    • They took care of setting up the alignment holes for the pick-and-place machine, and sent me individual boards. I didn’t have to do any panelization.
    • Shipping from China seemed unreasonably fast, but I suppose that’s how things work these days.

    Electrical Testing

    The second revision fit in the case!

    Before powering anything, I carefully did an electrical connectivity test of the main power circuits. Wanted to make sure the first power-on wasn’t going to result in a puff of blue smoke.

    I briefly panicked, thinking everything was installed backwards, until I discovered my crappy little multimeter, in continuity mode, runs current from COM to positive. So I kept thinking there was a short somewhere on the board, and I’d have to disassemble to debug it! In reality, it was showing the ESD protection circuitry correctly shunting current from GND to VBUS.

    When I realized this and reversed the leads, everything was correct. (And I bought a nicer multimeter which doesn’t have this problem.)

    There was an electrical issue, however! Most of the pins on the ribbon cable connector weren’t soldered down to the board. I don’t know if this is a solder mask issue with the footprint in KiCad or if the board wasn’t flat enough for the paste on each pad to connect upwards to the leg.

    I was afraid of forming bridges between the 1 mm pitch pins, so I coated the entire area in flux and very carefully swiped solder upwards from the pad. It took three passes before I could measure reliable connectivity between each ribbon pin and the corresponding microcontroller leg.

    Resoldered FPC connector legs

    I see why people use microscopes for this stuff.

    Fuses and Firmware

    Now that everything seemed electrically correct, it was time to plug it in. Success! The factory-supplied DFU bootloader device showed up.

    Linux recognized the DFU bootloader device!

    With dfu-programmer, I uploaded a tiny C program that simply blinked the test LED pin at 1 Hz. First weirdness: the clock speed seemed to be incorrect. After some careful datasheet reading, long story short, the CLKDIV fuse bit comes preprogrammed, which divides your clock speed by 8. So the crystal was 16 MHz, but the MCU was dividing that down to 2 MHz. I had expected it to use the internal RC oscillator by default, which would have resulted in a 1 MHz clock.

    You can change the fuse bits with an in-circuit programmer device (not USB!), but that has the side effect of erasing the convenient factory-supplied USB bootloader, which I’d prefer to leave alone if possible. (There’s a LUFA bootloader you can upload, but since all of this was new, baby steps felt good.)

    Fortunately, for this device, none of the above actually matters! It turns out I can get away without programming any fuse bits. CLKDIV merely sets the default clock speed divisor, and you can change it in software at the start of your program:

    #include <avr/power.h>
    clock_prescale_set(clock_div_1);
    

    The result of all of this is that the six AVR ISP pins on the board are only necessary in emergencies. (Good thing, because I borrowed two of the pins later.) From the factory, it can be flashed with firmware and function as designed.

    QMK

    After getting the clock speed issues sorted, I flashed QMK — thanks again to Michael Fincham for mapping the layout — and it worked!

    The Sculpt treats left and right spacebar as independent keys. Michael took advantage of that and mapped right spacebar to enter. Turns out I couldn’t live with that, so I mapped it back to space.

    Now that it’s not necessary for the battery indicator, I repurposed the keyboard’s red LED for Caps Lock.

    I’d like to use the green LED too, but I discovered it has reversed polarity, and there’s no easy way to drive it with the current circuit.

    Finally, the Sculpt has a Caps Lock indicator

    Case Fitting and Reassembly

    Dremel.

    Cut cut!

    The only complication here was realizing it would be super convenient to launch the bootloader without disassembling the keyboard, so I soldered RST and GND from the AVR ISP pins to a button and hot-glued that into the battery compartment. (HWB is pulled low on the board, so all external resets enter the bootloader.)

    To allow future disassembly, I cut up a PC fan extension cable and repurposed the connectors.

    Borrowing RST and GND pins
    External reset button. I almost forgot the spike-limiting resistor!
    All packed up!

    Latency

    I don’t have enough words to convey how happy this modded keyboard makes me.

    After years of thinking I was just getting old and losing my dexterity, my computer feels solid again. It’s like a bunch of resistance disappeared. Gaming is easier. Typing is easier. Latency is definitely better, and perhaps more importantly, more consistent.

    I fired up Is It Snappy? and measured, on my PC, a total keyboard-to-screen latency reduction from 78 ms to 65 ms. 13 milliseconds better!

    I’ll have to test it on my new work laptop, an MSI GS65 Stealth, which measures keypress-to-pixels latency under 30 ms (!).

    This project was worth every hour it took.

    And during my latency testing, the wireless keyboard repeatedly dropped keys, as if to validate all of my complaints in a final hurrah.

    Power

    While waiting for the assembled PCB to arrive from China, I modded my Wii sensor bar to take 100 mA from the TV USB and bump it up to the 7.5V required to light its infrared LEDs. I was worried about excessive current draw and potentially damaging the TV’s USB ports, so I picked up a USB meter.

    This keyboard draws about 60 mA — a quarter watt — which isn’t bad, but it feels possible to do better.

    USB power draw

    The original wireless transceiver draws 20 mA under use and under 100 µA when idle. So I might play around with clocking down to 8 MHz and seeing what subsystems on the microcontroller can be turned off.

    With a switching regulator, I could even drop the MCU voltage to 3.3 V. And as awful as the wireless Sculpt’s sleep behavior was, there’s perhaps an opportunity to improve there.

    I probably won’t push too hard. I almost never use a wired keyboard on a phone or laptop where it might make a small difference.

    Next Steps

    Besides reducing power, there are a few improvements I’d like to make:

    • The Fn switch (between function keys and volume/brightness/etc.) isn’t functional. I traced the membrane and discovered the switch controls whether pin 1 is pulled down by 47 kΩ or by half a megaohm. So I guess, relying on the membrane’s parasitic capacitance, I can detect the switch’s state by driving the pin high and measuring how long it takes to drop low.
    • The green LED has reversed polarity from the red! To drive them both at once, I’ll have to set the ground pin at maybe half VCC and treat red active-high and green active-low. That might complicate the Fn switch, since it’s pulled towards this same “ground” voltage. I haven’t figured out what the Microsoft circuit does.
    • Next time, I’ll put all LEDs on PWM pins. They’re a bit too bright, and breathing would be fun.
    • I’d like a better-fitting ribbon cable connector. One that closes more nicely onto the ribbon. And while I appreciate being forced to learn the magic of flux, it would be nice if it came correctly soldered to the pads.
    • Given that tantalum is a conflict mineral, it might be nice to replace the tantalum capacitor with a ceramic and maybe a 4 Ω resistor in series. I’d love any input here.
    • I’ve always kind of wanted a USB 2 hub like old keyboards used to have? Hub controllers are only a couple bucks.
    • Pretty up the traces on the board. :) They should look as clean as Nilaus makes his Factorio belts.

    Closing Thoughts

    This was a ton of fun. Electronics is so hands-on, and as someone who never held interest in Maker Faire or any of that community, I get it now. Connecting software to the real world feels so empowering.

    What exploded my mind the most was how accessible hardware design is these days. Download the open source KiCad, pick some inexpensive parts on Digi-Key, send it off to a manufacturer, and for not that many dollars, you have a functioning board!

    Now I stare at duct vents and wonder if I could just hook up a microcontroller and a servo to reduce home energy usage.

  • Measuring and Coping with Indoor CO2

    I have two home offices. The one in the garage has lots of natural light and is separate from the family, but it’s only usable when the weather is sufficiently cool. The one in the house is in a tiny 100 sq. ft. bedroom. During this particularly hot summer, I mostly stayed indoors, and I noticed I’d get dizzy and sometimes nauseous in the afternoons.

    It’s even worse on days that my daughter has to use the office for school in the morning. So I bought a CO2 monitor.

    AutoPilot APCEMDL desktop CO2 monitor

    I bought that model because it writes values to a CSV file on a microSD card, so I could graph them if I wanted.

    Typical outdoor CO2 levels are around 415 ppm. The science is fuzzy, but it seems like CO2 levels above 1000 ppm start to negatively impact thinking, concentration, and comfort. I know that, on days where I can’t open the window because it’s too hot or smoky outside, my thinking gets super fuzzy to the point that I’m unable to work.

    Now that I can quantify it: with a single person in the room and maybe the door cracked, the CO2 level will climb to about 1200 ppm.

    CO2 1100 ppm

    If I’m giving a presentation or conducting an interview, it might even reach north of 2000.

    CO2 1750 ppm

    If it got too high, no matter the conditions outside, I’d crack a window in the room I was in. Even a small crack was enough to help some. Does gas diffuse that quickly?

    I did make a trip to a nursery to purchase some indoor golden pothos plants, but it’s not clear if they help in a material way. I don’t think I have controlled enough conditions to measure the effect of two small plants. (I will say that the goal is not for the plant to produce oxygen, as humans consume far more oxygen than a small plant can produce, but for it to absorb CO2.)

    Even the entire house might have elevated CO2 levels. With the doors and windows shut most of the day (wildfire season), and everyone home (pandemic), the CO2 level in the house stayed around 1500 for weeks at a time. And my children were randomly vomiting in the mornings, with no other symptoms. Related? Caused by smoke seeping in? Something else? Not sure.

    It climbs high in a bedroom during the night, too.

    But here’s a surprising case: every time we’d use our gas stove, oven, or dryer, the CO2 level in the house would spike above 2000 ppm. This makes sense, since the outputs of burning methane are water vapor and CO2. During bad air quality days, when we had to keep the windows shut, we tried to only use electric cooking appliances: the rice cooker, bread maker, toaster oven, and crock pot.

    When we bought the house, natural gas was the cheap, clean source of fuel. Now, I’m wishing we’d gone electric for at least the dryer and water heater, especially when we install solar panels.

    Someone recently asked me “Hey, how is your CO2 problem?” I laughed a bit, because it’s not my CO2 problem – it’s all of ours. Especially in modern, insulated and well-sealed homes. I’m just measuring it. And it’s only going to get worse with climate change. In the past 60 years, humans have increased atmospheric CO2 levels from 315 ppm to 415 ppm, and it’s expected to reach 700-900 ppm in my children’s lifetimes. That means indoor CO2 levels will stay above levels that affect human cognition. (Climate change depresses the shit out of me, and this is yet another reason why.)

    There are cities that are banning new construction of residential natural gas lines. When I first heard of that, it seemed crazy, but it doesn’t anymore. I will miss cooking eggs on a gas stove, but maybe that can be solved with a single burner fed from a tank of biogas, or something.

  • Measuring Storage Closet Temperature and Humidity with a Raspberry Pi

    We live in Silicon Valley, which means our house is too expensive and too small. So, unlike my parents’ old house in the midwest, we don’t have the luxury of a basement with consistent year-round temperature and humidity for long-term storage.

    We moved some of our long-term storage into the house, some into the attic, and some into the garage. But I’ve been concerned about temperature and humidity swings, especially during the rainy season and when it gets extremely hot in late summer.

    I bought a steel storage closet from Uline, a Raspberry Pi Zero, and an AM2302 temperature-humidity sensor. The AM2302 is just a DHT22 with the pull-up resistor built-in, so installing it is as simple as soldering three wires onto the Raspberry Pi.

    Reading the Sensor

    Then the question became how to read it from software. Standard tutorials suggest using Adafruit’s Python DHT module. It works, but reading the sensor every couple of seconds consumes nearly the entire Raspberry Pi Zero’s single core. That’s because it uses Linux’s memory-mapped GPIO interface to communicate with the sensor. It sets scheduling priority to realtime, bit-bangs the GPIO pin, and then busy-loops to poll for signal edges.

    (The DHT22 uses a bespoke one-pin signaling protocol where bits are distinguished by whether voltage is held high for shorter than or longer than about 40 microseconds. The idea is that you measure the time between all 80-some edges and compute bits.)
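
    As a sketch of just the decoding step (the pulse widths below are fabricated; a real reading also needs the careful edge timing described above):

    #include <array>
    #include <cstdint>
    #include <cstdio>
    #include <vector>
    
    // Convert 40 high-pulse widths (microseconds) into the DHT22's five bytes:
    // humidity hi/lo, temperature hi/lo, checksum. Long pulses are 1s, short are 0s.
    bool decode(const std::vector<int>& high_us, std::array<uint8_t, 5>& out) {
      if (high_us.size() != 40) return false;
      out.fill(0);
      for (int i = 0; i < 40; ++i) {
        out[i / 8] = static_cast<uint8_t>((out[i / 8] << 1) | (high_us[i] > 40));
      }
      // The last byte is a checksum of the first four.
      return static_cast<uint8_t>(out[0] + out[1] + out[2] + out[3]) == out[4];
    }
    
    int main() {
      // Fabricated packet: 55.2% relative humidity and 23.4 C.
      const std::array<uint8_t, 5> bytes_in{0x02, 0x28, 0x00, 0xEA, 0x14};
      std::vector<int> pulses;
      for (uint8_t byte : bytes_in)
        for (int bit = 7; bit >= 0; --bit)
          pulses.push_back((byte >> bit) & 1 ? 70 : 26);
    
      std::array<uint8_t, 5> bytes{};
      if (decode(pulses, bytes))
        std::printf("humidity %.1f%%, temperature %.1f C\n",
                    (bytes[0] * 256 + bytes[1]) / 10.0,
                    (bytes[2] * 256 + bytes[3]) / 10.0);
    }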

    I had hoped to use the CPU for things other than busy-polling, so I wondered if there was a more efficient way to read the sensor. Well, it turns out the kernel now supports a character device interface, driven by ioctls, that delivers edge transitions as a stream of events. Sounds perfect! Unfortunately, even with a minimal C program, transitions were missed, and I couldn’t reliably parse a result packet. My guess is the kernel isn’t polling the pin frequently enough, so transitions are dropped.

    Polling the pin in userspace is too expensive, and the gpio character device interface didn’t work, but fortunately recent kernels include an IIO (Industrial I/O) driver that speaks the DHT11/DHT22 protocol. The driver registers an interrupt timer to reliably poll the signal at a high enough frequency to read every transition, and exposes the results as files in sysfs. Reading values with the dht11 driver on the Raspberry Pi means the CPU stays almost entirely idle, and I don’t have to worry about scheduling other processes.
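
    Reading a value then amounts to reading a sysfs file. The exact paths depend on your device tree overlay and device index; as I understand the IIO layout, they look something like this, with values reported in thousandths (millidegrees C and milli-percent relative humidity):

    #include <fstream>
    #include <iostream>
    
    // Assumed paths: the iio:deviceN index depends on your setup.
    const char* kTemp = "/sys/bus/iio/devices/iio:device0/in_temp_input";
    const char* kHumidity = "/sys/bus/iio/devices/iio:device0/in_humidityrelative_input";
    
    long read_milli(const char* path) {
      std::ifstream f(path);
      long value = 0;
      f >> value;  // reading this file asks the driver for a (possibly cached) measurement
      return value;
    }
    
    int main() {
      std::cout << "temperature: " << read_milli(kTemp) / 1000.0 << " C\n"
                << "humidity: " << read_milli(kHumidity) / 1000.0 << " %\n";
    }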

    Recording and Graphing Values

    The popular time series database and graphing solution seems to be InfluxDB and Grafana these days, but after going through the setup tutorial for those, I decided I didn’t want to deal with containers and security updates and running complicated software on my home network. Given my very limited free time these days, I optimize for systems with extremely low maintenance costs, even if they require more up-front work. (This happens to be why I replaced WordPress with something based on Jekyll, too.)

    Thus, I wrote a tiny HTTP server with Hyper that simply writes recorded values and their timestamps to CSV files.

    Then, a separate program invokes gnuplot to output PNG graphs into a dedicated folder on my NAS.

    Example temperature and humidity graph

    Simple, solves for my needs, and gives me the option to import data from CSV into InfluxDB and Grafana later, if desired.

    Next Steps

    Next, more sensors! Besides temperature and humidity from various parts of the house, recording indoor carbon dioxide would be useful.

    I also need to automate the creation of SD card images. So far, I’ve manually assigned IP addresses and configured systemd units on the NAS and each device, but that’s getting unwieldy, especially since SD cards are likely the first thing to fail on a Pi.

    Now that I have reliable data, I’ll probably experiment with placing insulation between the storage unit and the wall, and placing some closed bags of charcoal inside the (enclosed) unit to see if it reduces humidity movement through the day.

  • Two Years at Dropbox

    Disclaimer

    This post is a collection of stories from my time at Dropbox. Inevitably, someone will read too much into it and come away with some overgeneralized lesson, but keep in mind that I was only there for two pre-IPO years and only exposed to a couple of specific corners of the company.

    I certainly don’t regret my time there - my coworkers were amazing and I learned a lot about myself. In fact, this post says more about me than it does about the company.

    And two years is the average Silicon Valley tenure, right?

    The Interview

    I’d been at IMVU almost ten years (an eternity by Silicon Valley standards!) and realized a year would disappear without me noticing. It was time for something new. I knew someone at Dropbox, and I’d met some more smart people at CppCon 2014, so when they reached out to see if it made sense for me to join, I didn’t say no.

    The interview process was confused. After dinner introductions and recruiter phone calls, I was first invited up to San Francisco to meet with a handful of product division leads. Next, they scheduled a proper technical interview. After passing what ended up being a short half-day round of trivial whiteboard problems, I was annoyed to find that I’d have to take another day off and come back for a second round. After passing those, they invited me again to meet with more product leads. By this point I’d met with something like 14 people. Finally, in frustration, I asked “What are we doing? It costs me real money to take days off, so do you want to offer me a job or not?” They did.

    At the time, it seemed like a good offer – relative to IMVU, a doubling in total compensation! In hindsight, though, I should have negotiated higher. More on that later.

    My excitement started to build. IMVU was a great engineering organization, but the product direction was weak and aimless. And from the interview alone, I could tell Dropbox had an amazing product culture; I had to see how it worked from the inside. The fact that the Dropbox client was 10 years old and remained a simple and refined UX said a lot – any other company would have peppered the product with random features. As a random example of what I mean, Excel currently lists “Bing Maps” in the ribbon before “Recommended Charts”, letting its internal turf wars trump the user experience.

    Also, during the interview process, I got a sneak peek at what’s now called Dropbox Paper. Within minutes, I understood the product, what problem it solved, and why people would want it. Not only that, but it was clean and delightful in a way that other online collaboration tools weren’t. I knew then that Dropbox’s product culture had magic, and I had to see it from the inside.

    Yellow Flags

    Between accepting and starting, three people independently reached out and said “Don’t join Dropbox. You’re making the wrong decision.” One person said the culture wasn’t good – politics and currying favor with an old boys’ network. Another said the commute to San Francisco would do me in, especially since I was having my third child. A third was concerned that I was joining a company too late in its life. Dropbox had already grown substantially and the last round of investment valued the company at $10B. I should buy low instead of high. I decided to join anyway.

    Regarding culture: I’m pleased to say that, while it may have been the case that Dropbox was a frat house in its early years, they had intentionally and decisively solved that problem by the time I joined. Like Facebook, they implemented something like the Rooney Rule in the hiring process. In addition, everyone was encouraged (required?) to take unconscious bias training. It was surprisingly valuable, and it made me notice the dozens of ways that an interview can be biased without realizing it. I felt that Dropbox did a good job of actively striving to make the workplace and hiring process as inclusive and bias-free as you can reasonably hope for.

    The last point, that Dropbox was valued too highly, was probably correct. Shortly after I joined, major investors wrote the stock down about 40%. Oof. (I had factored a possible 50% write-down into the evaluation of my offer, but I realize now, given the illiquidity, I should have pushed the equity component of my offer much higher. Again, more on compensation later.)

    Onboarding

    After IMVU, I was so excited to jump into something new and get the clock rolling on the vesting schedule that I took no time off between IMVU and Dropbox. For anyone reading this: please don’t. Take time off, if only to reset your mind.

    Either way, the recruiter had said my chosen start date would be okay. That was a lie - in reality, Dropbox only onboarded new employees every other Tuesday. This left me without health insurance coverage for a week between jobs. Fortunately, none of the kids got hurt that week. Always quit your previous job at the beginning of the month so you are covered until the new job starts.

    The initial few days of hardware, account setup, and security training went smoothly. Given that IMVU did nothing to onboard people into the culture, this was a big improvement. (But it paled in comparison to what I’d later see in Facebook’s onboarding process.)

    My First Team: Sonoma

    During the interview, the pitch was that I could slot in as the tech lead for the Paper backend team. Its current tech lead was leaving Dropbox. Not everyone on the team knew that yet, and I didn’t know it was secret, so there was an awkward moment when I said to the current lead in a group interview “So where are you headed next?” and he said “I don’t know what you’re talking about.” Oops.

    Before I joined, however, the Paper backend team moved to NYC, so I was instead assigned to a product prototyping team. The team’s average age was in the low 20s and my grandboss wanted someone with experience on how projects play out.

    The prototype’s current iteration was a mess. It was built atop other, failed prototypes, which in turn were built on real shipping features, so you never quite knew which code was alive or dead.

    And Dropbox’s deployment cadence was daily, except that half the time the daily push would fail, meaning we weren’t able to test hypotheses very rapidly. Relative to IMVU’s continuous deployment, this was jarring, but it’s also just not a good way to develop new products.

    Iteration Speed

    Coming from IMVU, my expectations around developer velocity were extremely high. If a unit test took a second to run, that was considered a failure on our part. (It doesn’t mean all of our tests took under a second, but we pushed hard.) We also deployed to production with every commit to master.

    So I was shocked to find that running an empty Python unit test at Dropbox took tens of seconds. Worse, when you fixed a bug and landed it, depending on how the daily push went, it might make it to production within a day? Two? Maybe more? Compared to IMVU, this workflow was unacceptable, especially for a prototyping team that was trying to find product market fit as fast as possible. One day, after struggling to get the simplest diff landed, the frustration overflowed; I stood up and exploded “HOW THE FUCK DOES ANYONE GET ANYTHING DONE AROUND HERE?!?”

    The iteration speeds were bad at every scale. Unit tests were slow; uploads to the code review tool were sluggish; the reviewer might look at your diff after two or three days; and finally the aforementioned deployment issues. The net result is that simple changes took days, so you had to pipeline a lot of work in parallel, resulting in constant context switching. (This was especially painful for someone like me with high context switch overhead. I prefer to dive deep on a problem, move fast, and then come up for air.)

    One thing I don’t understand is why these iteration speeds were tolerated. Is it because the company had a huge number of new graduates who didn’t know better? Maybe the average Dropbox engineer can handle context switches a lot better than I can? Perhaps the situation was better outside of core product engineering? Maybe Dropbox grew so fast that the situation regressed faster than people could fix it? All of the above?

    Greenfield

    Shortly after I joined, the Sonoma team failed. We cancelled the prototype and half the team quit. (Retention was a general issue - I’ll talk more about that later.)

    However, executives decided the initiative was still valuable and needed a fresh look. We rebooted the team with a few people from the old team, some internal hires, an acquisition, and some interns to attack the same problem space.

    To avoid our previous iteration issues, we decoupled from the main Dropbox stack and built our own.

    The new stack was a TypeScript + React + Flux + Webpack frontend deployed directly onto S3 and a Go backend. None of us actually wanted to use Go, but it had momentum at Dropbox and many existing Dropbox systems already had Go APIs. Our iteration speed was great. We could deploy whenever we wanted and were only limited by our ability to think, as it should be.

    This new team, by the way, was the most gelled team I’ve ever worked with. Not only was everyone productive and thoughtful, but our personalities meshed in a way that coming into the office was a pleasure. And talk about cross-functional! It was as if everyone on the team was multiclassing. Some of the engineers on the team would easily have qualified as product managers at IMVU, and our (excellent!) designers regularly wrote code. The level of empathy and thoughtfulness they had about the customer’s emotional state surpassed anything I’d seen.

    I hope someday to work on a team like that again!

    And, in a form only conceptually related to any code I wrote, our project did eventually ship!

    Personal Life Fell Apart

    Shortly after our product team rebooted itself and was in its Sprint Zero, my personal life began to unravel.

    When I joined Dropbox, I knew the 90-minute commute from the south bay to San Francisco would be painful. And my wife was pregnant with our third child, so I’d be taking paternity leave only months after starting. But I did not predict how awful that year would be.

    Days before #3 was born, my beloved grandfather suddenly collapsed and died at 74 years old. One month later, my 54-year-old father was diagnosed with stage 4 lung cancer, and was (accurately) given nine months to live. That fall, my wife’s grandmother passed. I went from being devastated to numb.

    It was easily the worst year of my life, but I could not have asked for Dropbox to provide better support. Even though I’d just joined the company, they let me take all the time I needed to get my personal life back in order.

    Including paternity leave, I took months of paid time off. And when my father passed, my manager gave me all the time I needed to help my mother get her estate in order, and then gradually ease back into a work mindset.

    I’ll forever be grateful for the support Dropbox provided in the worst year of my life.

    Empathy

    You’d think, coming from IMVU – a company built around avatars and self expression – that a B2B file syncing company would be dry and uninteresting.

    But I was thrilled to discover Dropbox implemented all the practices I only dreamed of at IMVU. My team regularly flew around the country to meet with users, deeply understanding their workflows and bringing that knowledge back.

    It’s a common simplification to say that Dropbox is a file-syncing business. But file syncing is a commodity. Dropbox’s value is broader - it’s more accurate to think of Dropbox as an organizing-and-sharing-your-stuff business. This mindset leads to features like the Dropbox Badge and how taking screenshots automatically uploads and copies the URL to your clipboard, because most of the time screenshots are going to be shared.

    Employee Retention

    I’d heard about high rates of employee churn in mainstream Silicon Valley, but IMVU had unusually high retention, so I didn’t witness it until Dropbox.

    Maybe it’s a San Francisco culture thing. Maybe the employee base is young and not tied down. Maybe I joined right as the valuation peaked and people wanted to cash out. Whatever the cause, employee turnover was high. It felt like half the people I met would quit a couple months later. I can understand - if you’re in your 20s and sitting on a few million bucks, why not go live in a cabin on a lake or spend a year climbing mountains?

    It’s hard for companies in Silicon Valley to keep employees - there are so many opportunities and salaries are so high that even new grads can work on almost any project they personally find fulfilling. I suspect the majority of engineers in SV could quit, walk down the street, and get a raise somewhere else with almost no effort.

    That said, longevity is important. If your goal is to become a senior engineer capable of large-scale impact, you have to be able to see the results of decisions you made years prior. If you jump ship every 18 months, you won’t develop that skill.

    I don’t have data on Dropbox’s retention numbers, but anecdotally it felt like they struggled to keep employees, especially senior engineers. I feel like they’d be well-served by doing a win-loss analysis on every regrettable departure and addressing the causes, even if throwing money at the problem is only a short-term fix.

    No matter the reason, it’s a worrying sign when so many of the good people leave. Companies need to retain their senior talent.

    Programming Languages

    I firmly believe that programming languages matter. They shape your thoughts and strongly influence code’s correctness, runtime behavior, and even team dynamics.

    I’ve always been a fan of Python – we used it heavily at IMVU – but I learned something surprising: what’s worse than IMVU’s two million lines of backend PHP? Two million lines of Python! PHP is a shit language, for sure, but Python has all of the same dynamic language problems, plus high startup costs, plus so much dynamism and sugar that people feel compelled to build fancy abstractions like decorators, proxy objects, and other forms of cute magic. In PHP, for example, you’d never even imagine a framework inspecting a function’s arguments’ names to determine which values should be passed in. Yet that’s how our Python code at Dropbox worked. PHP’s expressive weakness leads to obvious, straight-line code, which is a strength at scale.

    There was a migration away from Python and towards Go on the backend while I was there. Go is a great language for writing focused network services, but I’m not sure replacing all the complicated business logic with Go would be successful. My experience on our little product team writing an application in Go is that it’s way too easy for someone to introduce a data race on a shared data structure or to implement, for example, the “timeout idiom” incorrectly, leaking goroutines. And no matter how many people say otherwise, the lack of generics really does hurt.
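
    To make the goroutine-leak point concrete, here’s a minimal sketch of the classic bug (the function names and timeouts are made up – this is not Dropbox code). The usual timeout idiom selects between a result channel and time.After, but if the result channel is unbuffered, a worker that loses the race blocks forever on its send. The fix is tiny: give the channel a buffer of one so the send always completes.

    package main

    import (
      "errors"
      "fmt"
      "time"
    )

    // fetchWithTimeoutLeaky shows the bug: ch is unbuffered, so if the timeout
    // wins the select below, the worker goroutine blocks forever on its send
    // and leaks, along with everything it captured.
    func fetchWithTimeoutLeaky(fetch func() string, timeout time.Duration) (string, error) {
      ch := make(chan string) // unbuffered: the send blocks until someone receives
      go func() {
        ch <- fetch() // after a timeout, nobody ever receives
      }()
      select {
      case result := <-ch:
        return result, nil
      case <-time.After(timeout):
        return "", errors.New("timed out")
      }
    }

    // fetchWithTimeout is the fix: with a buffer of one, the send always
    // succeeds, so the worker finishes and exits even after a timeout.
    func fetchWithTimeout(fetch func() string, timeout time.Duration) (string, error) {
      ch := make(chan string, 1) // buffered: the send never blocks
      go func() {
        ch <- fetch()
      }()
      select {
      case result := <-ch:
        return result, nil
      case <-time.After(timeout):
        return "", errors.New("timed out")
      }
    }

    func main() {
      slow := func() string {
        time.Sleep(500 * time.Millisecond)
        return "data"
      }
      if _, err := fetchWithTimeout(slow, 100*time.Millisecond); err != nil {
        fmt.Println(err) // times out, and the worker still exits on its own
      }
    }

    Each leaked goroutine pins its stack and whatever it captured, so in a long-running service the leaky version adds up quietly – nothing crashes, memory just keeps growing.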

    When I joined the company, the Dropbox.com frontend was written in CoffeeScript. I’ve already written my feelings on that language, and the Dropbox experience didn’t change them. Fortunately, while I was there, the web platform team managed a well-motivated, well-communicated, and well-executed transition to TypeScript. Major props to them - large-scale technology transitions are hard in the best of times, and they did a great job making TypeScript happen.

    Compensation

    I never felt underpaid at IMVU. I joined before the series A and was given a very fair percentage of the company, especially for a new graduate. Sure, my starting salary was pathetic, but it climbed rapidly once we took funding, and my salary ended up at the top end of Glassdoor’s range for engineers. And relative to other startups or mid-sized private companies, I probably wasn’t underpaid.

    But Dropbox competes with the FAANGs for talent, and I had yet to realize just how high the top-of-market rate for senior engineers had climbed. Also, levels.fyi wasn’t a thing, and since I was poached rather than looking around, I failed to acquire competing offers. So I didn’t know my market worth.

    Now, by any reasonable person’s standards, my Dropbox offer was good. They matched my IMVU salary and gave me an equivalent amount in RSU equity per year (for four years), plus another 20% of my salary as a signing bonus. That should be great, and I was happy with the offer.

    But, in hindsight, I could have gotten the same offer from publicly traded FAANGs, and since Dropbox was private (and possibly overvalued), I should have fought for a 2x equity multiplier at least.

    This would have forced the company to place me into a more impactful role and level, ultimately making my work more satisfying, and perhaps keeping me at the company longer. As it was, I was underleveled.

    Always try to get competing offers. Had I done that, and had Dropbox still been the #1 choice, I might still be there, with both sides happier.

    Credibility

    When I joined IMVU, it was a small team of founders. Eric Ries said to me “For your first week here, you should fix one thing for each person.” This was great advice. Building rapport early is important.

    I’d forgotten that advice by the time I joined Dropbox. I had been in a position of implicit credibility for so long that I assumed it would carry over. It was a splash of cold water to realize nobody cares what you’ve done. Nobody cares about what other people have thought of you. As a new hire, you’re an unknown like everyone else.

    It’s common for people joining from other companies to talk about their experiences. “At Acme Corp, we did this.” But when nobody has any shared context with your time at Acme, that sentence conveys no meaning.

    I spent too much time saying “At IMVU, we did X.” To me, IMVU was a great place that did a lot well. But talking about it wasn’t helpful. Eventually, I learned to rephrase. “I’ve noticed we have X problem. What do you think about trying Y solution?” Nobody cares how you learned the trick, but you’re a wizard if you perform it in front of them.

    Security

    Dropbox cares a lot about security. They’re fully aware that breaches destroy trust, and Dropbox greatly values its customers’ trust.

    As a customer, I’m very happy to know a world-class security team is protecting my data. As a developer, it sometimes was a pain in the ass. :) Culturally, they rounded towards more secure by default, even if it negatively affected velocity. My team had to sign some Windows executables, and because we didn’t have an internal service that handled apps other than the main Dropbox app, I had to be escorted to the vault, where a signing key was briefly plugged into my laptop, and all uses were supervised.

    And the developer laptops were sooo slow. I don’t know how you take a brand new MacBook Pro and turn it into something so sluggish. The week when the virus scanner was broken was glorious. I believe all activity on our machines was monitored too. “You should have zero expectation of privacy on your work hardware.”

    Development gripes aside, as a user, I trust Dropbox to protect my data.

    Diversity and Interviewing

    Dropbox paid a lot of attention to employee diversity. Everyone was required to take unconscious bias training, and the interview process aimed to limit the risk of race or gender or even cultural background clouding a hire decision. For example, it’s common for interviewers to factor in the presence of a GitHub profile as positive signal, but the Dropbox interviewing process cautioned against this, as it biases towards people of a particular background.

    To that end, the interview process for engineers was mechanized. The primary input into the hiring decision was how well you could write correct, complicated, algorithms-and-data-structures code on a given set of whiteboard questions, where questions involving concurrency were considered especially valuable. This process, while attempting to be bias-free, had unintended effects. It resulted in a heavy bias towards new grads from high-end computer science programs, such as the Ivy Leagues.

    And even with the briefest glance around the office, you could tell the employees weren’t a representative slice of society. The company was full of pretty people. I’m sure being headquartered in San Francisco and the relative youth of the employees had an effect. I’m at Facebook now, and it feels a lot more like a normal slice of society. Or at least of suburban Silicon Valley.

    I referred two of IMVU’s best engineers – the kinds of people who have average CS backgrounds, but who have shipped a ton of high-value code and led major projects. One didn’t make it through the screen, and the recruiter wrote the note “Declined: we don’t have much signal about IMVU” and the other failed the interview because they didn’t use a heap to solve a certain problem. The moderator in the debrief told the (junior) interviewer “Now, now, a senior industry product candidate probably hasn’t used a heap in 10 years”, but the result remained the same.

    I appreciate Dropbox’s attempts to create a bias-free interview process, but I worry that it values fresh CS knowledge over experience and a get-it-done attitude.

    By the way, when the FAANGs and Dropbox are offering compensation packages twice what startups can afford, this is exactly where startups can compete for talent: there are many people who didn’t graduate college but who are focused workers or have a knack for understanding users.

    Creative Talent

    The sheer density of creative talent at Dropbox was amazing. Designers and product managers could hack on the code. Product engineers had an amazing sense of empathy for the customer. There was art on the walls and various creative projects all around the office. It seemed like everyone I met had multiple talents.

    Even the interns were amazing. These kids, barely old enough to drink, had a strong grasp on cryptography and networks and distributed systems and programming language theory, on top of all of the basic CS knowledge. Motivated individuals have access to so much more information than when I grew up, and I’m a bit jealous. :)

    Hack Week

    IMVU began company-wide hack weeks in 2007. Eric Prestemon, one of our tech leads, modeled the idea off of Yahoo’s hack days, but as far as I’ve heard, we might have been the first to make it a week-long quarterly event. (I look forward to hearing from you if your company also ran a hack week.) So when I joined Dropbox, the idea was quite familiar, and the benefits obvious.

    The idea is that, on some regular cadence, you give everyone in the company an entire week to work on whatever they want. The normal backlog is paused, product managers have no direct influence, and shipping to 1% of customers is encouraged. It’s good for the business – risky product ideas can be prototyped, some of which become valued parts of the product. And it’s good for employees – everyone gets a chance to drive what they think is important and underserved. Hack weeks inject a dose of positivity into the work environment.

    But I must say that Dropbox ran its hack week better than IMVU ever did. IMVU’s hack week started off open-ended. As long as it was somewhat related to the business, employees could work on anything. But over time, the product managers put an increasing amount of pressure on people to work on their projects and deliver concrete value that week.

    Dropbox, on the other hand, invested substantially more organizational effort into supporting hack week. The dates were announced in advance, giving people time to write up proposals, merge project ideas, and form small teams to work on them.

    While most people applied their creative energy to unexplored product ideas, there was no pressure to do any particular thing. At Dropbox, it was totally cool to spend hack week learning a new skill, blowing glass, or trying to break a Guinness record. There’s nothing like being surrounded by excited people. Passion is contagious.

    Projects were celebrated during Friday’s expo, where the office was arranged into zones and each zone given a time window for presentations. Then, everyone, including executives, would tour the projects. The most impactful or promising would get a chance to be officially funded. I can’t put into words how amazing some of the projects were. Dropbox hack week was like getting a glimpse into what the future of business collaboration will look like. Of course, it takes time to ship features properly, but these weren’t smoke and mirrors demos. Many projects actually had their core loops implemented.

    Community

    If you care about volunteering your time and giving back to the community, Dropbox is a great place to work. Every quarter, you could take two paid days off to volunteer your time. For example, you could work in a local school or a food pantry. Monetary donations to charity were also matched one-to-one up to a cap.

    Charity and service opportunities were regularly announced by email. Public service was celebrated and part of the culture.

    Code Review

    Dropbox follows a diff-based code review process using Phabricator. I think it was largely copied from Facebook’s. I’ve written before that I don’t think diff-based code reviews are as effective or efficient as project-based, dataflow-based code review.

    And as I expected, the code review process at Dropbox lent itself to bikeshedding. To be fair, code review processes are cultural, so I imagine diff-based review could work well.

    Nonetheless, it was common for me to have diffs blocked for minor things. My diffs were rejected for things like use of tense or capitalization in comments. Meanwhile, important decisions like why I chose a certain hash function or system design would receive no comments at all.

    Also! A common antipattern was for someone to block my diff because, even though it was an improvement over the previous state, it didn’t go far enough. This unnecessary perfectionism slowed progress towards the desired end state.

    The net result was a lot of friction in the development process. At IMVU, we followed a project-structured flow, where a team planned out a body of work, had an informal design review (emphasizing core components over leaf components), implemented the feature with autonomy, and finally, as the project wrapped up, one or more hour-long project-based code review sessions were held. This made sure we got the high-order bits right, while letting the team move quickly during development.

    In contrast, at Dropbox, code review was interleaved throughout development. Coupled with the fact that everyone was busy and team members had different schedules, the turnaround time on diffs was measured in hours or days. In egregious cases, a small diff might have one code review cycle per day, and get bounced back multiple times, resulting in a three-day latency between work starting and the diff landing on master.

    This meant I rarely entered a flow state. I had to keep a handful of diffs in flight at all times, which was tremendously inefficient, at least for me – I am not great at context switches.

    [UPDATE: I wrote the above before joining Facebook, and Facebook’s diff-based code review process is much healthier. It’s a combination of culture and tooling. Maybe I’ll write about that sometime.]

    Testing

    My understanding is that Dropbox didn’t form a testing culture until years after the company started. The result is that the basic processes of effective testing at scale were still being figured out. Coming from IMVU, it felt like stepping back in time about five years. (As an aside, this made me realize that company maturity is an axis orthogonal to revenue and size.)

    Testing maturity is a progression.

    1. No tests
    2. Occasional, slow, unreliable tests
    3. Semi-comprehensive integration tests
    4. Fast, comprehensive unit tests comprise the bulk of testing
      1. Dependency injection (see the sketch after this list)
      2. Composable subsystem design
    5. Real-time test feedback (ideally integrated into the editor)
    6. Tests are extremely reliable or guaranteed reliable by the type system
      1. With tooling that tracks the reliability of tests and provides that feedback to authors.
    7. Fuzzing, statistically automated microbenchmarking, rich testing frameworks for every language and every platform, and a company culture of writing the appropriate number of unit tests and high-value integration tests.
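
    Here’s a rough illustration of what stage 4 looks like in practice – a minimal Go sketch with made-up names (a hypothetical notify package, Mailer, Greeter), not code from either company. The unit under test depends on a small interface; production wires in a real SMTP client, and the test wires in an in-memory fake, so the test is fast, deterministic, and never touches the network.

    package notify

    // Mailer is the seam: production injects a real SMTP client,
    // tests inject the in-memory fake below.
    type Mailer interface {
      Send(to, body string) error
    }

    // Greeter is the unit under test; it only knows about the interface.
    type Greeter struct {
      mailer Mailer
    }

    func NewGreeter(m Mailer) *Greeter { return &Greeter{mailer: m} }

    func (g *Greeter) Welcome(email string) error {
      return g.mailer.Send(email, "Welcome aboard!")
    }

    // fakeMailer records sends in memory so a test can assert on them directly.
    type fakeMailer struct {
      sent []string
    }

    func (f *fakeMailer) Send(to, body string) error {
      f.sent = append(f.sent, to+": "+body)
      return nil
    }

    And the unit test itself (in notify_test.go) runs in a blink:

    package notify

    import "testing"

    func TestWelcomeSendsMail(t *testing.T) {
      fake := &fakeMailer{}
      if err := NewGreeter(fake).Welcome("ada@example.com"); err != nil {
        t.Fatal(err)
      }
      if len(fake.sent) != 1 {
        t.Fatalf("expected 1 mail, got %d", len(fake.sent))
      }
    }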

    If IMVU was somewhere around 5 or 6, Dropbox, when I joined, was closer to 3. The situation improved while I was there, but this stuff takes time. And good ideas spread more slowly if you have more junior engineers. Also, every flaky test written – or integration test that could have been a unit test – is a recurring cost on future engineering, so it’s valuable to climb this hierarchy early in an engineering team’s life.

    All of that said, the company did important work on this axis while I was there. They’ll probably catch up eventually.

    EPD

    At IMVU, between 2010-ish and 2015, there was a strong divide between product management, design, and engineering roles. IMVU’s executive leadership was a proponent of “The person who makes the decision must be responsible, and since product management must be held responsible, they must also have full control over their decisions.” The implication is that product management has total say, and engineers must do what they’re told. As you might imagine, people don’t like being told their opinion doesn’t matter, which led to conflict and unhealthy team dynamics.

    I personally favor a soft touch product management style, where product management gathers data, shares context with the team, and guides it to success. (See this excellent interview with Bob Corrigan.) I understand it’s harder to judge a PM’s success this way than with a black-and-white “was your product successful?”, but top-down tactical team management is not healthy, and IMVU was frequently guilty of it.

    Thus, when I went to Dropbox, I was thrilled to see that engineering, product management, and design were considered one unit. The teams I was involved with had frequent open communication between all team members, and I did not observe any disagreements that weren’t quickly resolved by sharing additional context. (Though there were a couple times that context led to people quitting or switching teams, haha. But better that than misery.)

    Now, product management still owned the backlog. Unlike Facebook, engineers did not have true autonomy. But the dynamics were so much healthier than IMVU’s. I’ll grant it’s possible that healthy dynamics are easier when revenue and the stock price are growing.

    The Food

    The food at Dropbox is unreal. On my first day, I had the best fried chicken sandwich of my life. For lunch, it was not uncommon to have to decide between duck confit, swordfish, and braised lamb shank. I thought “There’s no way this lasts.” But… it did, at least as long as I was there. The quality did briefly dip a bit as the company moved to a bigger office (with a different kitchen) and expanded the food service to all of the new employees, but it recovered.

    Dropbox’s kitchen – known as The Tuck Shop – never made the same dish twice. (Though common themes would pop up every so often.) At first, this made me sad. With an amazing dish would come knowledge that I would never have it again. This was hard to bear. There were no recipes; the dishes came straight from the minds of Chef Brian and his team. Eventually, the Tuck Shop gave me a kind of zen allegory for life. In life, moments are fleeting and you don’t get redos, so enjoy opportunities when they occur.

    I had a very long commute from the South Bay up to the city, so I ate a large breakfast every day. Breakfast is what made the long commute possible.

    Breakfast was killer. Check out this grilled cheese with tomato soup and sweet potato tots.

    Oh yeah! It took me too long to learn about this, but the Tuck Shop hosted afternoon tea and cookies or cake. Afternoon desserts you’ve never heard of. The coffee shop made crepes. Fresh young coconuts every day. And even wine pairing on Wednesday nights.

    Wednesday night wine pairing, with cocktail, scallop, and pasta.

    It’s funny to hear people at other companies talk about how good the food is. (And I’m sure it is good!) But no way does Dropbox not have the best corporate food in Silicon Valley, or even the whole USA.

    Will it last? Hard to say. Is it egregious? It certainly feels like it, but the cost of food is dominated by labor and I’ve heard they’ve managed to get costs to a reasonable amount per day, with almost zero food wastage. Hundreds of restaurant-style entrees were prepared and plated en masse, with copious use of sous vide and diced herbs sprinkled on top. I’m sure food costs were still a drop in the bucket compared to the salaries of thousands of engineers.

    Floss

    This might sound silly, but one of my favorite benefits was the fact that every bathroom was stocked with floss and other oral hygiene items. I’ve always at least made an attempt to floss regularly, but when it’s right there at work, flossing regularly is so much easier. It was especially important after those amazing lunches.

    Wrong Career Trajectory

    About a year into my employment, the honeymoon was over. While I was enjoying my work and the team, the impact I was having on the company was tiny compared to my potential. For one, I was on a product prototyping team, isolated from the main Dropbox offering. As with a startup, the only way for a new effort to have significant impact is to succeed, and the chances of that are small. While I really enjoy pushing pixels and executing that tight customer-feedback-write-code loop, I wasn’t going to get promoted doing that. In fact, I know a bunch of engineers cheaper than me who write better CSS.

    In hindsight, I probably should have joined an infrastructure team from the beginning. Engineering infrastructure has a lot of visible impact, requires deeper technical leadership than product work, and aligns better with my skillset. That said, I intentionally joined Dropbox to learn about its product culture. I’d also had little exposure to mainstream web stacks (IMVU hand-rolled its own mostly due to unfortunate timing), and no exposure to Electron, iOS, and native macOS development. Plus, again, pushing pixels with world-class collaborative designers and a gelled team is delightful. :)

    I’m conflicted, but I can’t say I regret my time on the prototyping team. The friendships alone were worth it, and I can justify it as getting paid to go back to school after 10 years of IMVU technology and habits.

    Nonetheless, I was unhappy, so I made the transition over to the dropbox.com performance team.

    Web Performance

    Web performance and platform APIs are squarely in my wheelhouse. I led the team that transitioned IMVU from desktop client engineers to web frontend engineers and, with my team, built most of IMVU’s modern web stack (and to this day, a lot of what we did remains better than off-the-shelf open source).

    So when Dropbox kicked off a strike team to reduce its page load times from eight seconds (!) to four, I decided that was a great opportunity, and switched teams.

    This proved to be quite challenging for me. Coming from a prototyping team with its own stack, I had little background on how the core Dropbox.com stack worked. Meanwhile, my new teammates had years of experience with it. And because the effort was a fire drill, it never felt like there was enough time to sit down and properly spin up.

    The big lesson for me here was about setting expectations. While I had a lot of experience in the space, I did a poor job of making it clear that I’d need time to spin up on the team. As a new but senior engineer, I could have managed the dynamics much better. I would have done better with more independence, autonomy, and time.

    Management Churn

    The web platform team also had a lot of management churn. I liked my first manager quite a lot, but we both knew that everything would change once our performance goals were achieved. The company planned to hire a new group manager, who would then hire his own managers for each team.

    In the short term, I reported to the new group manager, but he didn’t last long at the company. So then I reported to the team’s tech lead, who wasn’t planning on being a manager again but had to be.

    The result of all of this was that I had seven managers in two years. I’d heard about Silicon Valley’s high rate of employee turnover, but with so many managers it was hard to build rapport and learn each other’s styles.

    It didn’t help that my new manager’s style was very command-and-control. After our team planning, he laid out my next several months’ worth of work. Given my need to work with autonomy, this was probably the beginning of the end of my time at Dropbox.

    Note that I liked everyone I worked with. There just wasn’t enough time or space to build a strong working relationship.

    Impact

    I’d been at Dropbox for about 20 months before I learned how engineers are leveled. The problem boils down to this: given an organization of a thousand or more, how do you decide how to distribute responsibilities and determine compensation? You’d like to maintain some kind of fairness across disciplines, organizations, teams, and managers. In an ideal world, your team and manager are independent of your compensation.

    Dropbox culture derived largely from Facebook (as many early employees had come from Facebook), and Facebook determines level and compensation by gauging each employee’s impact. During review season, managers from across the company are all shuffled into groups that review random engineers. This prevents a manager from biasing their reports upwards or downwards and adds consistency. This calibration process focuses primarily on a sense of that person’s impact on the company.

    At Facebook, impact is a first-class concept. It’s common to hear “I have some impact for you.” But I’d come from a 150-person company where the pay bands were wide, people did a mix of short-term and long-term work, and managers were primarily responsible for placing their employees in said pay bands based on a variety of factors, such as team cohesiveness, giving high-quality feedback to peers, and writing quality code. IMVU was a team culture.

    Dropbox had copied their system from Facebook. Now, I don’t think IMVU’s system was better, and I don’t think Dropbox’s was bad. Here’s the problem: nobody ever told me how it worked.

    If I had known how my level and compensation were determined, I would have made very different decisions. Facebook, on the other hand, has a class during orientation that explains how your compensation and level are determined. They have countless wiki pages, internal posts, and presentations on the subject. The incentives are very explicit.

    I was too naive and trusting and assumed everything would just work out if I was a good teammate and worked hard. I now see the value in grasping the mechanics of the incentive structure early.

    Obviously you can find problems with any incentive system, but the real issue here is that I somehow never learned that Dropbox’s leveling process required you to keep track of your specific contributions, especially the more nebulous ones that don’t bubble up through typical project management, and provide enough data that your manager can make the case for a level adjustment in the company-wide calibration process.

    I was months away from quitting when I found out how this worked, and suddenly it explained why my manager had always wanted specific examples of work I’d done. At the time, it had seemed like whether I’d done such and such optimization was irrelevant to the annual review. I could have done a much better job of framing my contributions.

    Thinking from First Principles

    I, like several of the early IMVU engineers, am partial to first-principles thinking. What are we trying to do? What are the constraints? What are the options? How do they weigh against the constraints? Decide accordingly. Perhaps this comes from working on games where, at least in the early years, technical constraints had a big influence on the options. Facebook also seems to have a first-principles culture.

    But one thing I found frustrating at Dropbox is a kind of… software architecture as religion. “No, don’t structure your code that way, it’s not very Fluxy.” or “[Facebook or Google] do it this way, we should too.” or “We need to compute this data lazily for performance [even though a brute force solution under realistic data sizes can easily achieve our concrete performance targets].” or arguments like “We should base our build system on webpack because it seems to be the winner [without measuring its suitability for the problem at hand]”.

    Again, I’m conflicted. There’s value in going with the flow and not spending forever shaving all the yaks if something already fits the bill. But often, with a bit of research and some thought, you can come up with a better solution. In hindsight, I’m amazed that IMVU was able to build such great, reliable, and fast infrastructure with a small set of talented people, simply by thinking about the problem from the top and precisely solving it. For example, IMVU’s realtime message queue, IMQ, was better than any of the four that Dropbox had built, and was written by three people.

    (Now is a good time to remind you that I was only exposed to a small slice of Dropbox. I would hope that, for example, the data storage teams thought from first principles.)

    Lessons Learned

    In a lot of ways, Dropbox was a great place to work. I loved my teammates and learned a lot. Even though it ended up not working out, it’s hard to regret the time I spent there.

    Here are some lessons I took away:

    • Always get competing offers. Know your worth.
    • When you’re hired, take the time to understand how you’re judged. This would have prevented a lot of confusion on my part.
    • 90-minute door-to-door commutes are horrible.
    • Unless you’re Guido van Rossum or otherwise have a widespread reputation, building credibility is hard and takes conscious effort for months and possibly years.
    • As fantastic as the food perks are, meaningful work is better.
    • Thank you so much, Dropbox, for taking care of me and my family during a very hard year.