Recent Posts

  • Unsafe Rust Is Harder Than C

    Or: The Most Expensive Linked List I’ve Ever Written

    Some of you already know the contents of this post, especially if you’ve written embedded or unsafe code in Rust. But I didn’t, so I thought it was useful to write down my experience as accurately as I can. Without further ado…

    Last year, I wrote Photohash, software to help me index my NAS and find duplicate photos with rotation-independent hashing and perceptual hashing. To make use of cores and keep the disks busy, it distributes work to compute and IO workers. Work is distributed with channels – synchronized work queues.

    In Photohash, work tends to be discovered and processed in batches: enumerating directories returns multiple entries and the database is updated in multi-row transactions.

    Rust has a rich selection of channel implementations: std::sync::mpsc, futures::channel, tokio::sync, crossbeam::channel, flume, and kanal are high-quality options.

Unfortunately, none of them exactly met my needs, so I nerd-sniped myself into writing my dream channel. My previous day job (EdenFS and Watchman) was full of ad-hoc channels, so I knew roughly what I wanted. kanal is closest, but it is riddled with unsafe code and uses spinlocks, which look great in microbenchmarks but have no place in userspace software.

    Introducing batch-channel, a throughput-optimized channel. The design goals are:

    • Multi-producer, multi-consumer. Parallelism in both production and consumption.
    • Sync and async support for both consumers and producers. Mix-and-match provides flexibility for use in any type of thread pool, async runtime, or FFI.
    • Bounded or unbounded. Bounded for backpressure and limiting peak memory consumption. Unbounded for situations where you cannot guarantee deadlock freedom.
• Sending and receiving multiple values. I often want to send multiple values. Like reading all of the paths in a directory. Or writing multiple rows to a database. Batching allows amortizing per-batch costs. It’s silly to acquire the channel lock N times to push N values. It’s the same on the consumption side: workers may want to pull all pending work items in one channel read. You might wonder about lock-free queues, and they have their place, but you’ll still contend on the head and tail, and atomic operations remain slow even on modern Intel cores. If you’re going to contend on the queue anyway, batch-channel’s philosophy is to stick the whole thing behind a mutex and maximize batch sizes on both ends (see the sketch after this list).

    At the time this was written, the following design goals weren’t yet implemented:

    • Priorities. Senders can influence processing order.
    • Bounding variable-sized items. For example, being able to say a queue can hold up to 20 MB of paths, no matter how long they are.

    And, finally, the design goal that led to this post:

    • No allocations under steady-state use. Allocations are a source of contention, failure, and overhead, especially when using slow system allocators.
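
    To make the batching goal concrete, here’s a minimal sketch of the philosophy using only std types (not batch-channel’s actual API): pay for the lock once per batch, not once per value.

    use std::collections::VecDeque;
    use std::sync::Mutex;
    
    // Push N values under one lock acquisition instead of N.
    fn send_batch<T>(q: &Mutex<VecDeque<T>>, values: impl IntoIterator<Item = T>) {
        q.lock().unwrap().extend(values);
    }
    
    // Drain every pending work item in one lock acquisition.
    fn recv_batch<T>(q: &Mutex<VecDeque<T>>) -> Vec<T> {
        q.lock().unwrap().drain(..).collect()
    }
    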

    The Shape of a Channel

    To explain why unsafe Rust is even involved, let’s look at the implementation of a channel.

    pub struct Channel<T> {
      q: VecDeque<T>,
      // Blocked receivers
      waiting_for_elements: Vec<Waker>,
      // Blocked senders
      waiting_for_capacity: Vec<Waker>,
      // Synchronous use requires some condition variables too.
    }
    

    When an async receiver blocks on recv() because the channel is empty, the task’s Waker is stored so that the channel knows to wake it when a value arrives.

    Waker is a handle to a blocked task. The channel can signal the async runtime to wake a task when it should poll the channel again. It’s a wide pointer, two words in size.

    Waker is used like this:

    pub struct Recv<'a, T> {
      channel: &'a Channel<T>,
    }
    
    impl<'a, T> Future for Recv<'a, T> {
      type Output = T;
      
      fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        // Is the queue empty?
        if let Some(element) = self.channel.q.pop_front() {
          // The queue has an element, so return it.
          Poll::Ready(element)
        } else {
          // Queue is empty so block and try again later.
          self.channel.waiting_for_elements.push(cx.waker().clone());
          Poll::Pending
        }
      }
    }
    

    Note: The above code is illustrative. In reality, the channel has a Mutex and some condition variables, but that’s incidental for this post.

    If the queue is empty when recv() is called, the waker is stored in the channel and the task enters the blocked state.

    Later, when a value is added to the queue, any waiting tasks are woken:

    fn send<T>(channel: &mut Channel<T>, value: T) {
      channel.q.push_back(value);
      let wakers = mem::take(&mut channel.waiting_for_elements);
      // NOTE: Mutexes are released here, before waking.
      // Unless we are clever with cancellation, we have to wake all futures,
      // because we don't know which, if any, will attempt the next poll.
      for waker in wakers {
        waker.wake();
      }
    }
    

    Here’s the issue: waiting_for_elements is a Vec<Waker>. The channel cannot know how many tasks are blocked, so we can’t use a fixed-size array. Using a Vec means we allocate memory every time we queue a waker. And that allocation is taken and released every time we have to wake.

    The result is that a naive implementation will allocate and free repeatedly under steady-state send and recv. That’s a lot of memory allocator traffic.

    Can We Use an Intrusive List?

The optimization I want: rather than allocating in a Vec every time a task is blocked on the queue, can we store the list of Wakers within the blocked futures themselves? We know we only need to store as many Wakers as there are blocked futures, so that should work.

    It should look something like:

    pub struct Channel<T> {
      q: VecDeque<T>,
      // Intrusive doubly-linked list head.
      waiting_for_elements: WakerList,
    }
    
    fn send<T>(channel: &Channel<T>, value: T) {
      channel.q.push_back(value);
      let wakers = channel.waiting_for_elements.extract_list();
      // Release any mutex before waking.
      for waker in wakers {
        waker.wake();
      }
    }
    
    pub struct Recv<'a, T> {
      channel: &'a Channel<T>,
      // Every Future gets a WakerSlot, which is an intrusive doubly-linked
      // list node protected by Channel's mutex.
      waker: WakerSlot,
    }
    
    impl<'a, T> Future for Recv<'a, T> {
      type Output = T;
      
      fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        if let Some(element) = self.channel.q.pop_front() {
          Poll::Ready(element)
        } else {
          // Queue is empty so try again later.
          // Store the Waker in this Future by linking it into channel's list.
          self.channel.waiting_for_elements.link(&mut self.waker, cx.waker().clone());
          Poll::Pending
        }
      }
    }
    

    And that’s about the limit of my fake illustrative code. It’s time to get into details. How do we express an intrusive linked list in Rust?

    Intrusive List Crates

    This isn’t a new idea. I looked for existing crates:

    • intrusive-collections is popular, but list nodes must outlive the list itself. In my case, the future will never outlive the channel.
    • futures-intrusive is a nice-looking crate that performs the same optimization, but does not meet my design goals.
    • multilist is Patrick Walton’s pre-1.0 experiment. Interesting idea, but it allocates nodes on the heap.

    There are two other production examples of this approach:

    • lilos-list is part of Cliff Biffle’s embedded operating system lilos. It stores wakers with an intrusive list. It’s close to what I wanted, but embedded OS code tends to have its own concurrency model. In particular, it would take some work to integrate it with standard library mutexes, and it calls wakers while locks are held, which is a bad idea. On the other hand, since there are no threads in lilos, it can avoid implementing Send and use std::cell::Cell to temporarily perform mutations on otherwise shared references.
• tokio stores its channel wakers in an intrusive linked list too. Its implementation has a surprising amount of code, but it’s closest to what I want. The point is moot: it’s an implementation detail not visible outside of Tokio. (See Alice Ryhl’s musings on the challenges of intrusive structures in Rust.)

    Pinning

    It’s time to write a crate.

    I want the channel to store a WakerList and each future to have a WakerSlot member. Slots can be linked into the list and unlinked either on wake or future cancellation.

    WakerList and WakerSlot form a self-referential data structure. Self-referential data structures are a well-known challenge in Rust. They require unsafe code.

    In C++, this is relatively easy. You delete the move and copy operations and fix up the link pointers as appropriate.

So, at this point in my Rust journey, still thinking in C++, I assumed “easy!”

    I just need to disable movement with !Unpin (actually PhantomPinned) and ensure all methods take Pin<&mut WakerList> and Pin<&mut WakerSlot>.

    Once you observe a Pin<&mut T>, you can assume T will never move again. I’m not going to discuss Pin in depth – Jon Gjengset has an excellent video describing its rationale and usage.
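
    As a rough sketch of the shape I had in mind (field and method names here are illustrative, not the real crate):

    use std::marker::PhantomPinned;
    use std::pin::Pin;
    
    // Opting out of Unpin means that once pinned, the value can never move
    // again, so interior link pointers stay valid.
    struct WakerList {
        head: *mut u8, // stand-in for the real link pointers
        _pinned: PhantomPinned,
    }
    
    impl WakerList {
        fn clear(self: Pin<&mut Self>) {
            // SAFETY: we mutate in place and never move out of `self`.
            let this = unsafe { self.get_unchecked_mut() };
            this.head = std::ptr::null_mut();
        }
    }
    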

    Here’s where things start to get hard. Pinning was added well after Rust 1.0 was stabilized. The language pervasively assumes values of any type can be moved with a memcpy, so writing a data structure that violates that assumption makes the public APIs themselves awkward.

    Here’s what I tried first:

struct WakerSlot(/* ... */);
struct WakerList(/* ... */);

impl WakerList {
  fn link<'list: 'slot, 'slot>(
    self: Pin<&'list mut WakerList>,
    slot: Pin<&'slot mut WakerSlot>,
  ) {
    // ...
  }
}
    

My thought was that the act of linking should constrain the “pinnedness” and the lifetimes: the list must outlive the slot. Alas, lifetimes don’t work like that. A function call cannot constrain the actual lifetimes of its parameters. The borrow checker happily shrinks 'list and 'slot at the call site until the constraints are satisfied. The result is that link’s signature above has no effect.

    The idea that lifetimes precisely match the lives of actual values is apparently a common misconception, and it resulted in some puzzling error messages.
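
    Here’s a tiny demonstration of the misconception (my own illustration): the bound compiles and proves nothing, because both lifetimes shrink to the call site.

    fn link<'list: 'slot, 'slot>(_list: &'list mut u32, _slot: &'slot mut u32) {}
    
    fn main() {
        let mut slot = 0u32;
        {
            let mut list = 0u32;
            // Both lifetimes shrink to this call, satisfying 'list: 'slot
            // even though the actual list value dies first.
            link(&mut list, &mut slot);
        }
        // list is gone, slot lives on, and the compiler never objected.
    }
    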

    (Writing a post like this after the fact feels weird because “of course it doesn’t work that way” but I’m faithfully documenting my learning process.)

    Can WakerSlot itself take a lifetime to the list it references?

struct WakerList(/* ... */);
struct WakerSlot<'list>(&'list WakerList);

impl<'list> WakerSlot<'list> {
  fn new(list: &'list WakerList) -> WakerSlot<'list> {
    WakerSlot(list)
  }
}
    

    This doesn’t work. If you pretend the WakerSlot has a reference to the WakerList, then you can never create a &mut WakerList elsewhere, because Rust’s core lifetime rule is that you can either have one mut reference or many shared references but never both.
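
    A minimal sketch of the conflict; uncommenting the mutable borrow is a compile error:

    struct WakerList;
    struct WakerSlot<'list>(&'list WakerList);
    
    fn demo() {
        let mut list = WakerList;
        let slot = WakerSlot(&list);
        // The slot's shared borrow freezes the list for its entire life:
        // let m = &mut list; // error[E0502]: cannot borrow `list` as mutable
        let _ = slot;
        let _ = &mut list; // fine once `slot` is dead
    }
    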

    I’m hoping this is possible and a reader leaves me a note.

    When Panicking Isn’t Safe Enough

    Conceptually, link and unlink operations take mutable references to both the list and slot. But I never found a way to satisfy all of the rules in the type system:

    • WakerSlot’s lifetime parameter does not outlive its list.
    • Only one &mut reference at a time.
    • Never &mut and & simultaneously.

    Here, I gave up on trying to express the rules in the type system and chose to assert at runtime. The runtime lifetime rules are:

• WakerList must be empty when dropped. Otherwise, slots would have pointers to invalid memory.

• WakerSlot must be unlinked when dropped. Otherwise, the list would reference deallocated memory.

    Reporting these invariant violations with panic is not sufficient. Panics can be caught, but the program would remain in a state where safe code can access dangling pointers and cause undefined behavior (UB).

    Therefore, when an invariant is violated, the program must abort.

But I can’t just call abort: I want this utility crate to be no_std, so it’s up to the calling program to decide how it aborts.

    The simplest solution I found was to panic from an extern "C" function and let Rust translate that to an abort.

    #[allow(non_snake_case)]
    #[inline(never)]
    #[cold]
    extern "C" fn MUST_UNLINK_WakerSlot_BEFORE_DROP() -> ! {
        // panic! from extern "C" is an abort with an error message.
        panic!("Must unlink WakerSlot before drop")
        // Another option, at the cost of a tiny, stable, dependency, is
        // the `abort` crate.
        //abort::abort()
    }
    

    Structural Pinning

I elided this detail in the example code above, but WakerList is intended to be accessed behind a mutex. However, neither std::sync::Mutex nor parking_lot::Mutex have structural pinning. That is, lock() maps a &Mutex<T> to a &mut T, allowing T to be moved.

    I needed a safe API for getting Pin<&mut T> from Pin<&Mutex<T>>.

    So I wrote the pinned-mutex crate which provides structurally-pinned Mutex, MutexGuard, and Condvar wrappers.
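
    Here’s a minimal sketch of the idea, assuming a shape like the following (not pinned-mutex’s exact API):

    use std::pin::Pin;
    use std::sync::{Mutex, MutexGuard};
    
    struct PinnedMutex<T> {
        inner: Mutex<T>,
    }
    
    impl<T> PinnedMutex<T> {
        fn lock(self: Pin<&Self>) -> PinnedGuard<'_, T> {
            // Pin<&Self> -> &Self is safe; we never move the T out.
            PinnedGuard(self.get_ref().inner.lock().unwrap())
        }
    }
    
    struct PinnedGuard<'a, T>(MutexGuard<'a, T>);
    
    impl<'a, T> PinnedGuard<'a, T> {
        fn as_mut(&mut self) -> Pin<&mut T> {
            // SAFETY: the mutex is pinned and T is never moved out of it,
            // so handing out Pin<&mut T> upholds the pinning contract.
            unsafe { Pin::new_unchecked(&mut *self.0) }
        }
    }
    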

    Note that there is a pinarcmutex crate that offers a PinArcMutex<T> type roughly equivalent to Pin<Arc<Mutex<T>>> except with structural pinning. But it allocates and you can’t drop in parking_lot’s mutex, which is faster and lighter than the standard library’s.

    We can imagine a future Rust version where pinning is more natural and has pervasive (or implicit) standard library support.

    Pinning Ergonomics

    Boats recently wrote a nice overview of why Pin is shaped the way it is and why it is painful to use.

    And the internet is full of threads like “Pin Suffering Continues”.

    If you want to use pinned APIs safely in your own code, you will need to depend on a pin-projection crate like pin-project or pin-project-lite. (And don’t forget pinned_drop!)

    It works fine, but you end up with code that looks like the following.

    let mut wakers = state.as_mut().project().base.project().rx_wakers.extract_some_wakers();
    while wakers.wake_all() {
        let mut state = self.project_ref().state.lock();
        wakers.extract_more(state.as_mut().base().project().rx_wakers);
    }
    

    Writing code this way is miserable. The compiler will guide you, but my mind was shouting “you know what I want, just do it” the whole time.

    It Works!

    Deep breath. Tests passed. WakerList and WakerSlot provide the interface I wanted, so I published them in a wakerset crate. It offers a safe, intrusive, no_std list of Wakers.

With it, I could remove the main source of steady-state allocations in batch-channel.

    It was time to polish it up and ensure I didn’t miss anything, and this blog post is only half-done.

    Undefined Behavior, Sanitizers, and MIRI

    One of my original goals for batch-channel was to avoid unsafe code.

Unfortunately, two optimizations required it. Besides the intrusive list described above, the MPMC channel objects themselves are managed with two reference counts, one for senders and one for receivers. A split reference count is required: the channel must close when either half, all senders or all receivers, is fully dropped.
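
    The split-count idea, in sketch form (my illustration; splitrc’s real implementation differs):

    use std::sync::atomic::{AtomicUsize, Ordering};
    
    struct Counts {
        senders: AtomicUsize,
        receivers: AtomicUsize,
    }
    
    impl Counts {
        // Returns true if the caller just dropped the last sender, at
        // which point the channel should close for receivers.
        fn release_sender(&self) -> bool {
            self.senders.fetch_sub(1, Ordering::AcqRel) == 1
        }
    
        // Symmetric for the receiver half.
        fn release_receiver(&self) -> bool {
            self.receivers.fetch_sub(1, Ordering::AcqRel) == 1
        }
    }
    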

To simplify auditing, I placed all of this new unsafe code behind safe APIs in separate crates: pinned-mutex, splitrc, and wakerset.

Safe Rust, excepting compiler bugs and incorrectly-designed unsound APIs, has no undefined behavior. Unsafe Rust, on the other hand, removes the guardrails and opens a buffet of possible UB.

    There are three ways you can deal with potential undefined behavior. In increasing order of happiness over time:

    • Hope for the best and deal with potential bugs when they come up.
    • Think carefully and convince yourself the code is correct.
    • Automated sanitizers.

    Fortunately, Rust supports sanitizers that detect various types of undefined behavior. The two most useful are ASan and TSan. C++ programmers are quite familiar with them at this point, and I consider them table stakes for any C or C++ project.

But Rust has one even better: MIRI. It catches violations of the Rust aliasing model, which is pickier than ASan. Satisfying MIRI is where I spent most of my time.

    Rust Aliasing Model

    The first time I ran MIRI, it failed:

    test link_and_notify_all ...
error: Undefined Behavior: trying to retag from <254318> for SharedReadWrite permission at alloc88289[0x10], but that tag does not exist in the borrow stack for this location
...
    help: this indicates a potential bug in the program: it performed an invalid operation, but the Stacked Borrows rules it violated are still experimental
    help: see https://github.com/rust-lang/unsafe-code-guidelines/blob/master/wip/stacked-borrows.md for further information
    

    Stacked borrows? What’s all this?

Here, I realized my first mistake: I dove straight into unsafe Rust when I should have read up on the aliasing rules in advance.

My other mistake was “thinking in C”. Being deeply familiar with C’s semantics wasn’t helpful here. In hindsight, it was foolish to assume Rust’s aliasing model was similar to C’s. In C, merely holding a pointer to a value usually carries no meaning; you reason primarily about pointer dereferences.

    For example, in C, this is perfectly legal if p == q:

#include <stdio.h>

void foo(int* p, const int* q) {
  printf("%d\n", *q);
  *p = 456;
  printf("%d\n", *q); // if p == q, prints 456
  *(int*)q = 789;
  printf("%d\n", *p); // if p == q, prints 789
}
    

const is “meaningless” in that it does not prevent the pointee from changing, so the compiler can’t optimize based on it. Rust references are different:

    fn foo(a: &u32, b: &mut u32) {
        println!("{a}");
        *b = 123;
        println!("{a}"); // always prints the same value as above
    }
    

    On the other hand, Rust’s primary aliasing rule is that, at any point, an object may have a unique &mut reference to it or any number of shared & references, but never both.

    The optimizer will take advantage of that. a is not reloaded from memory because the write to b cannot alias it.

    The aliasing rules in Rust are not fully defined. That’s part of what makes this hard. You have to write code assuming the most pessimal aliasing model.

    Under the most pessimal aliasing rules, you have to assume taking a &mut reference to a value immediately writes to it and continues to write to it as long as the reference lives. And if you have a shared reference, you have to assume the value is read at arbitrary times as long as any reference is held.

    Box::leak

    Let’s start with the first confusing example I ran into:

let p = Box::leak(Box::new(MyThing::new())) as *mut MyThing;
// later:
let r: &MyThing = unsafe { &*p };
r.method();
    

    MIRI failed, complaining that p was formed from the perpetual &mut returned by Box::leak and therefore it’s UB to create a shared &MyThing reference to it at any point thenceforth.

    The fix was to allocate without forming a reference:

let p = Box::into_raw(Box::new(MyThing::new()));
// later:
let r: &MyThing = unsafe { &*p };
r.method();
    

    Note: This may have been a MIRI bug or the rules have since been relaxed, because I can no longer reproduce as of nightly-2024-06-12. Here’s where the memory model and aliasing rules not being defined caused some pain: when MIRI fails, it’s unclear whether it’s my fault or not. For example, given the &mut was immediately turned into a pointer, does the &mut reference still exist? There are multiple valid interpretations of the rules.

    Box::from_raw

    OK, if you allocate some memory with Box::into_raw, you’d expect to deallocate with Box::from_raw, right? That failed MIRI too.

    I ended up having to write:

    unsafe {
        let ptr = ptr.as_ptr();
        std::ptr::drop_in_place(ptr);
        std::alloc::dealloc(
            ptr as *mut u8,
            std::alloc::Layout::new::<Inner<T>>());
    }
    

    Note: This may have also been a MIRI bug. It is no longer reproducible. I changed splitrc to use Box::into_raw and Box::from_raw and it passes MIRI. I enabled MIRI in my CI so we’ll see if it breaks again going forward.

    Linkage, References, and Interior Mutability

    That’s channel allocation and deallocation covered. Now let’s look at the intrusive pointers in wakerset.

    In a linked list, every node has a linkage struct.

    struct Pointers {
        next: *mut Pointers,
        prev: *mut Pointers,
        pinned: PhantomPinned,
    }
    
    struct WakerList {
        pointers: Pointers,
    }
    
    struct WakerSlot {
        pointers: Pointers,
        // Required to assert that this slot is never
        // unlinked from an unrelated list.
        owner: *mut WakerList,
        waker: Option<Waker>,
    }
    

    Now imagine two threads. Thread A holds a &mut WakerList with the intent to extract pending Wakers. Thread B happens to hold a &WakerSlot at the same time.

    It is UB for code traversing the pointers to form a &mut WakerSlot (or even a &WakerSlot) if any thread might have a &mut WakerSlot, because this violates Rust’s aliasing rules. A &mut reference must always be exclusive, even if it is never dereferenced. This is the important difference with C.

    Because Rust reorders reads and writes based on its aliasing rules, you must never convert a pointer into a reference unless you know that nobody else has a conflicting reference.

    We need to prevent the compiler from optimizing a &WakerSlot into early reads of the pointers and waker fields.

    UnsafeCell is the tool to reach for. It introduces a “mutability barrier”, and UnsafeCell<Pointers> tells Rust not to cache reads. We are responsible for ensuring we won’t violate Rust’s aliasing rules when accessing Pointers’s fields.

    struct WakerList {
        pointers: Pointers,
    }
    
    struct WakerSlot {
        // UnsafeCell: written by WakerList independent of
        // WakerSlot references
        pointers: UnsafeCell<Pointers>,
        // Required to assert that this slot is never
        // unlinked from an unrelated list.
        owner: *mut WakerList,
        waker: Option<Waker>,
    }
    

    A circular linked list means that only a slot reference is required to mutate the list, so I needed to enforce the guarantee that only one thread may access the UnsafeCells at a time.

    I did this with an important, if subtle, API guarantee: all link and unlink operations take &mut WakerList. If &mut WakerSlot was sufficient to unlink, it could violate thread safety if WakerList was behind a mutex. (This also means that WakerList does not require an UnsafeCell<Pointers>.)
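
    In sketch form (signatures assumed, not wakerset’s exact API):

    use std::task::Waker;
    
    struct WakerList { /* intrusive head pointers omitted */ }
    struct WakerSlot { waker: Option<Waker> /* link pointers omitted */ }
    
    impl WakerList {
        // Unlinking requires &mut WakerList, so whoever holds the channel's
        // mutex holds exclusive access to every slot's link pointers.
        fn unlink(&mut self, slot: &mut WakerSlot) -> Option<Waker> {
            // ...pointer surgery protected by our exclusive borrow...
            slot.waker.take()
        }
    }
    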

    The pinned-aliasable crate solves a related problem: how do we define self-referential data structures with mutable references that do not miscompile? Read the motivation in the crate’s doc comments. It’s a situation required by async futures, which are self-referential and thus pinned, but have no desugaring. See the open Rust Issue #63818.

    Avoiding References Entirely

    As mentioned, when traversing a linked list, it’s easy to form conflicting &mut references to nodes. Consider this slightly contrived unlink example:

let next_ptr = slot.pointers.next;
let prev_ptr = slot.pointers.prev;
let next: &mut Pointers = &mut *next_ptr;
let prev: &mut Pointers = &mut *prev_ptr;
next.prev = prev_ptr;
prev.next = next_ptr;
slot.pointers.next = ptr::null_mut();
slot.pointers.prev = ptr::null_mut();
    

    If slot is the only slot in the list, then next and prev both point to the WakerList and we’ve now formed two mut references to the same value, which is UB.

    In this particular case, we could ensure every pointer dereference occurs as a temporary. That limits the scope of each reference to each line.

let p = &mut slot.pointers as *mut Pointers;
let next = (*p).next;
let prev = (*p).prev;
(*next).prev = prev;
(*prev).next = next;
(*p).next = ptr::null_mut();
(*p).prev = ptr::null_mut();
    

    But I just don’t trust myself to ensure, under all code paths, that I never have two references overlap, violating Rust’s aliasing rules.

    It’s kind of miserable, but the safest approach is to avoid creating references entirely and operate entirely in the domain of pointer reads, writes, and offsets.

    let nextp = addr_of_mut!((*node).next);
    let prevp = addr_of_mut!((*node).prev);
    let next = nextp.read();
    let prev = prevp.read();
    addr_of_mut!((*prev).next).write(next);
    addr_of_mut!((*next).prev).write(prev);
    nextp.write(ptr::null_mut());
    prevp.write(ptr::null_mut());
    

    (So much syntax. Makes you appreciate C.)

    addr_of_mut! is key: it computes a pointer to a place expression without forming a reference. There are gotchas: you can still accidentally form a reference within an addr_of_mut! argument. Read the documentation.

    Note: As I publish this, Rust 1.82 introduces new syntax that allows replacing addr_of_mut! with &raw mut and addr_of! with &raw const. It’s not yet clear to me how much this prevents accidental reference creation.

    Despite the noise, to be safe, I ended up converting all of WakerSet to pointer reads and writes. It’s not greppable: the code looks like it’s still full of pointer dereferences, but they’re within addr_of_mut! and the place expressions have the right shape.

    I think it was Patrick Walton who once proposed a sugar for unsafe Rust and pointers with a hypothetical -> operator. It would be convenient and easier on the eyes.

Until the Rust memory model stabilizes further and the aliasing rules are well-defined, your best option is to integrate ASan, TSan, and MIRI (both stacked borrows and tree borrows) into your continuous integration for any project that contains unsafe code.

    If your project is safe Rust but depends on a crate which makes heavy use of unsafe code, you should probably still enable sanitizers. I didn’t discover all UB in wakerset until it was integrated into batch-channel.

    MIRI: Stacked Borrows and Tree Borrows

    MIRI supports two aliasing models: stacked borrows and tree borrows.

    I won’t attempt to describe them. They are different approaches with the same goal: formalize and validate the Rust memory model. Ralf Jung, Neven Villani, and all are doing amazing work. Without MIRI, it would be hard to trust unsafe Rust.

    I decided to run both stacked and tree borrows and haven’t hit any false positives so far.
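
    For reference, this is roughly how I run them; these are the current cargo-miri invocations, though flag names may change:

    $ rustup +nightly component add miri
    $ cargo +nightly miri test
    $ MIRIFLAGS=-Zmiri-tree-borrows cargo +nightly miri test
    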

    Active Research in Self-Referential Structures

    This topic is an active work in progress. I hope this blog post is obsolete in two years.

    Self-referential and pinned data structures are something of a hot topic right now. The Rust-for-Linux project, doing the kinds of things systems programs do, has Pin ergonomics and self-referential data structures near the top of their wishlist.

In particular, the pinned initialization problem. The Linux kernel has self-referential data structures, and it is currently hard to initialize them with safe code. In wakerset, I sidestepped this problem, at the cost of a small amount of runtime inefficiency, by giving WakerList two empty states: one that is moveable and one that is pinned. The former converts to the latter the first time it is used after pinning.

    y86-dev has a great blog post proposing Safe Pinned Initialization.

    Conclusions

    As much as this post might come across as a gripefest, I still think Rust is great. In particular, its composable safety. The result of my pain is a safe, efficient API. You can use wakerset without any risk of undefined behavior.

    What I learned from this experience:

    • Be extra careful with any use of unsafe.
    • References, even if never used, are more dangerous than pointers in C.
    • Pinning syntax is awful, but it feels like Rust could solve this someday.
    • UnsafeCell is required for intrusive structures.
    • I don’t know how to statically constrain lifetime relationships with intrusive structures, but maybe it’s possible? Avoiding the need for runtime assertions would be nice.
    • MIRI, especially under multithreaded stress tests, is critical.
    • Putting this in words was as hard as writing the code.
  • Terminal Latency on Windows

    UPDATE 2024-04-15: Windows Terminal 1.19 contains a fix that reduces latency by half! It’s now competitive with WSLtty on my machine. Details in the GitHub Issue.

    In 2009, I wrote about why MinTTY is the best terminal on Windows. Even today, that post is one of my most popular.

MinTTY in 2009

    Since then, the terminal situation on Windows has improved:

    • Cygwin defaults to MinTTY; you no longer need to manually install it.
    • Windows added PTY support, obviating the need for offscreen console window hacks that add latency.
    • Windows added basically full support for ANSI terminal sequences in both the legacy conhost.exe consoles and its new Windows Terminal.
• We now have a variety of terminals to choose from, even on Windows: Cmder, ConEmu, Alacritty, WezTerm, and xterm.js (a component of Visual Studio Code).

    The beginning of a year is a great time to look at your tools and improve your environment.

    I’d already enabled 24-bit color in all of my environments and streamlined my tmux config. It’s about time that I take a look at the newer terminals.

    Roughly in order, I care about:

    • Minimum feature set: 24-bit color, reasonable default fonts with emoji support, italics are nice.
    • Input latency.
    • Throughput at line rate, for example, when I cat a large file.
    • Support for multiple tabs in one window would be nice, but tmux suffices for me.

    Which terminals should I test?

    I considered the following.

    • Legacy conhost.exe (also known as Windows Console), Windows 10 19045
    • MinTTY (3.7.0)
    • Alacritty (0.13.1)
    • WezTerm (20240203-110809-5046fc22)
    • Windows Terminal (1.18.10301.0)

    Testing Features

    Testing color and italics support is easy with my colortest.rs script. To test basic emoji, you can cat the Unicode emoji 1.0 emoji-data.txt. To test more advanced support, try the zero-width joiner list in the latest/ directory.

Terminal         | Emoji           | Font Attributes
conhost.exe      | No              | No italics
MinTTY           | Black and white | All major attributes
Alacritty        | Black and white | Everything but double underline
WezTerm          | Color           | All major attributes
Windows Terminal | Color           | All major attributes

    Everything but conhost.exe meets my bar.

    It’s also worth noting that conhost.exe has a terrible default palette. The default yellow is a pukey green and dark blue is barely visible. You can change palettes, but defaults matter.

Conhost.exe Default Palette
MinTTY Default Palette

    Latency

    I set up two latency tests. One with an 80x50 blank window in the upper left corner of the screen. The other fullscreen, editing an Emacs command at the bottom of the screen.

    Since latencies are additive, system configuration doesn’t matter as much as the absolute milliseconds of latency each terminal adds, but I’ll describe my entire setup and include total keypress-to-pixels latency.

    Measurement Methodology

    With Is It Snappy?, I measured the number of frames between pressing a key and pixels changing on the screen.

    To minimize ambiguity about when the key was pressed, I slammed a pencil’s eraser into the key, and always measured the key press as the second frame after contact. (The first frame was usually when the eraser barely touched the key. It would usually clear the activation depth by the second frame.)

    I considered the latency to end when pixels just started to change on the screen. In practice, pixels take several 240 Hz frames to transition from black to white, but I consistently marked the beginning of that transition.

    I took five measurements for each configuration and picked the median. Each measurement was relatively consistent, so average would have been a fine metric too. It doesn’t change the results below.

    80x50

    80x50 window, upper left of screen, cleared terminal, single keypress.

    Confirmed window size with:

    $ echo $(tput cols)x$(tput lines)
    80x50
    
Terminal              | Median Latency (ms) | 240 Hz Camera Frames
conhost.exe WSL1      | 33.3                | 8
MinTTY WSL1           | 33.3                | 8
conhost.exe Cygwin    | 41.3                | 10
MinTTY Cygwin         | 57.9                | 14
WezTerm cmd.exe       | 62.5                | 15
Alacritty WSL1        | 62.5                | 15
WezTerm WSL1          | 66.7                | 16
Windows Terminal WSL1 | 66.7                | 16

    Fullscreen

    Maximized emacs, editing a command in the bottom row of the terminal. I only tested WSL1 this time.

Terminal         | Median Latency (ms) | 240 Hz Camera Frames
conhost.exe      | 45.8                | 11
MinTTY           | 52.42               | 12
WezTerm          | 75                  | 18
Windows Terminal | 75                  | 18
Alacritty        | 87.5                | 21

    Throughput

    I generated a 100,000-line file with:

    $ yes "This sentence has forty-five (45) characters." | head -n 100000 > /tmp/lines.txt
    

    Then I measured the wall-clock duration of:

    $ time cat /tmp/lines.txt
    

    This benchmark captures the case that I accidentally dump a ton of output and I’m sitting there just waiting for the terminal to become responsive again. I have a gigabit internet connection, and it’s embarrassing to be CPU-bound instead of IO-bound.

    I did include Cygwin in this test, just to have two different MinTTY datapoints.

Terminal         | Elapsed Time (s)
MinTTY WSL1      | 0.57
MinTTY Cygwin    | 2.2
Windows Terminal | 5.25
Alacritty        | 5.75
WezTerm          | 6.2
conhost.exe      | 21.8

    I assume this means MinTTY throttles display updates in some way. Of course this is totally fine, because you couldn’t read the output either way.

To test the hypothesis that MinTTY was caching rendered cells by their contents, I also tried generating a file that rotated through different lines, with no effect.

    with open("/tmp/lines2.txt", "w") as f:
      for i in range(100000):
        sentence="This sentence has forty-five (45) characters."
        print(sentence[i%len(sentence):]+sentence[:i%len(sentence)], file=f)
    

    CPU Usage During Repeated Keypresses

    While making these measurements, I noticed some strange behaviors. My monitor runs at 120 Hz and animation and window dragging are generally smooth. But right after you start Alacritty, dragging the window animates at something like 30-60 frames per second. It’s noticeably chunkier. WezTerm does the same, but slightly worse. Maybe 20 frames per second.

I don’t know if I can blame the terminals themselves, because I sometimes experience this with Notepad.exe too. But the choppiness stands out much more in the terminals. Maybe something is CPU-bound in responding to window events?

    This made me think of a new test: if I open a terminal and hold down the “a” button on autorepeat, how much CPU does the terminal consume?

    To measure this, I set the terminal process’s affinity to my third physical core, and watched the CPU usage graph in Task Manager. Not a great methodology, but it gave a rough sense. Again, 80x50.

Terminal         | Percent of Core | Private Bytes After Startup (KiB)
conhost          | 0%              | 6,500
Alacritty        | 5%              | 74,000
MinTTY WSL1      | 10%             | 10,200
MinTTY Cygwin    | 10%             | 10,500
Windows Terminal | 20%             | 73,700
WezTerm          | 85%             | 134,000

    The WezTerm CPU usage has to be a bug. I’ll report it.

    CPU Usage (Idle)

    I often have a pile of idle terminals sitting around. I don’t want them to chew battery life. So let’s take a look at CPU Cycles Delta (courtesy of Process Explorer) with a fresh, idle WSL session.

Terminal         | Idle Cycles/s (Focused) | Idle Cycles/s (Background)
conhost          | ~900,000                | 0
Alacritty        | ~2,400,000              | no difference
WezTerm          | ~2,600,000              | ~1,600,000
Windows Terminal | ~55,000,000             | ~6,100,000
MinTTY WSL1      | ~120,000,000            | no difference
MinTTY Cygwin    | ~120,000,000            | no difference

    These numbers aren’t great at all! For perspective, I have a pile of Firefox tabs open, some of them actively running JavaScript, and they’re “only” using a few hundred million cycles per second.

    Raymond Chen once wrote a blog post about the importance of properly idling in the Windows Terminal Server days. You might have a dozen users logged into a host, and if a program is actively polling, it’s eating performance that others could use.

    Today, we often run on batteries, so idling correctly still matters, but it seems to be something of a lost art. The only terminal that idles completely is the old conhost.exe.

    The other lesson we can draw is that Microsoft’s own replacement for conhost.exe, Windows Terminal, uses over 10x the RAM, 60x the CPU when focused, and infinitely more CPU when idle.

    Conclusions

    conhost.exe consistently has the best latency, with MinTTY not much behind. MinTTY handily dominates the throughput test, supports all major ANSI character attributes, and has a better default palette.

    As in 2009, I’d say MinTTY is still pretty great. (I should try to track down that idle CPU consumption. It feels more like a bug than a requirement.)

    If you want to use MinTTY as the default terminal for WSL, install WSLtty.

The others all have slightly worse latencies, but they’re in a similar class. I’m particularly sensitive to latency, so I’d had a suspicion even before measuring. Maybe it’s some consequence of being GPU-accelerated? Out of curiosity, I put Windows Terminal in software-rendered mode, and it shaved perhaps 4 ms off (median of 62.5 ms, 15 frames). Perhaps just measurement noise.

    While I’m going to stick with MinTTY, one thing is clear: there is room to improve all of the above.

  • My Minimal tmux Config

    If you spend any significant time in a terminal, you’ve probably used tmux.

    I’m writing this post for a few reasons:

    • People have asked for my config.
    • I see too many people wasting their time in the morning, rebuilding their session from the previous day.
    • I felt I should justify the configuration to myself rather than setting options ad-hoc.

    tmux is often paired with a persistent connection. There are two popular choices: Eternal Terminal and Mosh. The goal is to close your laptop at the end of the day, open it the next morning, and have everything where it was so you can immediately get back into flow.

    Note: There are other options. WezTerm has built-in multiplexing, for example.

    macOS + iTerm2 + Eternal Terminal + tmux Control Mode

    If you use macOS, iTerm2 has deep tmux integration in the form of tmux Control Mode.

    tmux windows map to iTerm2 tabs. Native tab navigation and scrollback (both scrolling and find) work just as you’d expect.

    tmux control mode does expect a reliable stream channel, so if you want a connection that persists even when network connections are dropped, Mosh will not work. You’ll need Eternal Terminal.

    If you use a Mac, this is an excellent configuration. I worked on EdenFS and Watchman for almost five years this way.

    mosh + tmux

    But now I use Windows and Linux and can’t use iTerm2, so tmux within Mosh it is.

    I find tmux’s default keybindings a little awkward, and the colors simultaneously harsh and too minimal, so I made a configuration to match my tastes.

    You can download the full .tmux.conf here.

(You can go crazy, but avoiding too much fanciness was my goal. If you want all the bling, install the Tmux Plugin Manager and things like tmux-powerline.)

    .tmux.conf

    First, a small demonstration.

Despite all of the work I put into my recent post about 24-bit color in terminals, I do still use some terminals with limited color support. macOS’s Terminal.app only supports the 256-color palette, and the Linux console only really supports 8. The following selects the correct tmux terminfo entry.

    # Detect the correct TERM value for new sessions.
    # if-shell uses /bin/sh, so bashisms like [[ do not work.
    if "[ $(tput colors) = 16777216 ]" {
      set -g default-terminal "tmux-direct"
    } {
      if "[ $(tput colors) = 256 ]" {
        set -g default-terminal "tmux-256color"
      } {
        set -g default-terminal "tmux"
      }
    }
    

    I prefer Emacs keybindings in both bash (readline) and tmux.

    setw -g mode-keys emacs
    

    The next setting is more legacy terminal insanity. On some (most?) terminals, programs cannot differentiate between a user pressing the escape key and the beginning of an escape sequence. readline and tmux default to 500 ms, which adds noticeable latency in some terminals when using programs like vi.

    There’s no correct value here. Ideally, your terminal would use an unambiguous code for the escape key, like MinTTY.

    set -s escape-time 200
    

    Let’s not be stingy with scrollback! Searching lots of history is worth spending megabytes.

    # I can afford 50 MB of scrollback.
    # Measured on WSL 1 with:
    # yes $(python3 -c "print('y' * 80)")
    set -g history-limit 100000
    

    By default, if multiple clients connect to one tmux session, tmux will resize all of the windows to the smallest connected terminal.

    This behavior is annoying, and it’s always an accident. Sometimes I’ll leave a temporary connection to a server from home and then another fullscreen connection from work will cram each window into 80x25.

    The aggressive-resize option applies this logic only to the currently-viewed window, not everything in the session.

    setw -g aggressive-resize on
    

Window titles don’t automatically forward to whatever graphical terminal you’re using. Do that, and add the hostname, but keep it concise.

    set -g set-titles on
    set -g set-titles-string "#h: #W"
    

    iTerm2 has this nice behavior where active tabs are visually marked so you can see, at a glance, which had recent activity. The following two options offer similar behavior. Setting activity-action to none disables any audible ding or visible flash, leaving just a subtle indication in the status bar.

    set -g monitor-activity on
    set -g activity-action none
    

    The following is perhaps the most important part of my configuration: tab management. Like browsers and iTerm2, I want my tabs numbered. I want a single (modified) keypress to select a tab, and I want tabs automatically renumbered as they’re created, destroyed, and reordered.

    I also want iTerm2-style previous- and next-tab keybindings.

    # Match window numbers to the order of the keys on a keyboard.
    set -g base-index 1
    setw -g pane-base-index 1
    
    setw -g renumber-windows on
    
    # My tmux muscle memory still wants C-b 0 to select the first window.
    bind 0 select-window -t ":^"
    # Other terminals and web browsers use 9 to focus the final tab.
    bind 9 select-window -t ":$"
    
    bind -n "M-0" select-window -t ":^"
    bind -n "M-1" select-window -t ":1"
    bind -n "M-2" select-window -t ":2"
    bind -n "M-3" select-window -t ":3"
    bind -n "M-4" select-window -t ":4"
    bind -n "M-5" select-window -t ":5"
    bind -n "M-6" select-window -t ":6"
    bind -n "M-7" select-window -t ":7"
    bind -n "M-8" select-window -t ":8"
    # Browsers also select last tab with M-9.
    bind -n "M-9" select-window -t ":$"
    # Match iTerm2.
    bind -n "M-{" previous-window
    bind -n "M-}" next-window
    

    Note that Emacs assigns meaning to Alt-number. If it matters to you, pick a different modifier.

    Now let’s optimize the window ordering. By default, C-b c creates a new window at the end. That’s a fine default. But sometimes I want a new window right after the current one, so define C-b C-c. Also, add some key bindings for sliding the current window around.

    bind "C-c" new-window -a
    
    bind "S-Left" {
      swap-window -t -1
      select-window -t -1
    }
    bind "S-Right" {
      swap-window -t +1
      select-window -t +1
    }
    

    I wanted “C-{“ and “C-}” but terminal key encoding doesn’t work like that.

    Next, let’s define some additional key bindings for very common operations.

    By default, searching in the scrollback requires entering “copy mode” with C-b [ and then entering reverse search mode with C-r. Searching is common, so give it a dedicated C-b r.

    bind r {
      copy-mode
      command-prompt -i -p "(search up)" \
        "send-keys -X search-backward-incremental '%%%'"
    }
    

    And some convenient toggles:

    # Toggle terminal mouse support.
    bind m set-option -g mouse \; display "Mouse: #{?mouse,ON,OFF}"
    
    # Toggle status bar. Useful for fullscreen focus.
    bind t set-option status
    

    Now the status bar. The default status bar is okay, but we can do better.

tmux status bar: before
    • Move the tmux session ID next to the hostname on the right side.
    • Move the current time to the far right corner.
    • Keep the date, but I think I can remember what year it is.
    • Ensure there is a single space between the windows and the left edge. Without a space at the edge, it looks weird.
tmux status bar: after

The other half of that improvement is the color scheme. Instead of a harsh black-on-green, I chose a scheme that evokes old amber CRT phosphors or gas plasma displays. My dad had a “laptop” with one of those when I was young.

    The following color scheme mildly highlights the current window and uses a dark blue for the hostname-and-time section. These colors don’t distract me when I’m not working, but if I do look, the important information is there.

    if "[ $(tput colors) -ge 256 ]" {
      set -g status-left-style "fg=black bg=colour130"
      set -g status-right-style "bg=colour17 fg=orange"
      set -g status-style "fg=black bg=colour130"
      set -g message-style "fg=black bg=colour172"
      # Current window should be slightly brighter.
      set -g window-status-current-style "fg=black bg=colour172"
      # Windows with activity should be very subtly highlighted.
      set -g window-status-activity-style "fg=colour17 bg=colour130"
      set -g mode-style "fg=black bg=colour172"
    }
    

    And that’s it!

    Again, feel free to copy the complete .tmux.conf.

    Shell Integration

    There’s one more config to mention: adding some shell aliases to .bashrc.

    I sometimes want to look at or edit a file right next to my shell.

    if [[ "$TMUX" ]]; then
        function lv() {
            tmux split-window -h less "$@"
        }
        function ev() {
            tmux split-window -h emacs "$@"
        }
        function lh() {
            tmux split-window -v less "$@"
        }
        function eh() {
            tmux split-window -v emacs "$@"
        }
    fi
    

    (You may notice the aliases use different meanings of horizontal and vertical than tmux. I don’t know, it feels like tmux is backwards, but that could be my brain.)

    Happy multiplexing!

  • I Just Wanted Emacs to Look Nice — Using 24-Bit Color in Terminals

    Thanks to some coworkers and David Wilson’s Emacs from Scratch playlist, I’ve been getting back into Emacs. The community is more vibrant than the last time I looked, and LSP brings modern completion and inline type checking.

    David’s Emacs looks so fancy — I want nice colors and fonts too, especially my preferred themes like Solarized.

When run from a graphical desktop environment, Emacs automatically supports 24-bit color.

Graphical Emacs: Fonts and Colors

    But, since I work on infrastructure, I’ve lived primarily in terminals for years. And my Emacs looks like:

Terminal Emacs: Not Fancy

It turns out that popular terminals have supported 24-bit color for years. And yet those colors are rarely used.

Like everything else, it boils down to legacy and politics. Control codes are a protocol, and changes to that protocol take time to propagate, especially with missteps along the way.

    This post is two things:

    1. how to enable true-color support in the terminal environments I use, and
    2. how my desire for nice colors in Emacs led to poring over technical standards from the 70s, 80s, and 90s, wondering how we got to this point.

    NOTE: I did my best, but please forgive any terminology slip-ups or false histories. I grew up on VGA text mode UIs, but never used a hardware terminal and wasn’t introduced to unix until much later.

    ANSI Escape Codes

    Early hardware terminals offered their own, incompatible, control code schemes. That made writing portable software hard, so ANSI standardized the protocol, while reserving room for expansion and vendor-specific capabilities.

DEC VT100 (1978)

    ANSI escape codes date back to the 70s. They cover a huge range of functionality, but since this post is focused on colors, I’m mostly interested in SGR (Select Graphics Rendition), which allows configuring a variety of character display attributes:

    • bold or intensity
    • italics (not frequently supported)
    • blink
    • foreground and background colors
    • and a bunch of other stuff. You can look at Wikipedia.

    3-, 4-, and 8-bit Color

When color was introduced, there were eight: black, white, the additive primaries, and the subtractive primaries. The eight corners of an RGB color cube.

    Later, a bright (or bold) bit added eight more; “bright black” being dark gray.

4-Bit VGA Text Mode Palette

    In 1999, Todd Larason patched xterm to add support for 256 colors. He chose a palette that filled out the RGB color cube with a 6x6x6 interior sampling and added a 24-entry finer-precision grayscale ramp.

Output From colortest-256

    NOTE: There’s a rare, but still-supported, 88-color variant with a 4x4x4 color cube and 8-entry grayscale ramp, primarily to reduce the use of historically-limited X11 color objects.
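
    The 256-color cube layout is easy to compute. A quick sketch of the mapping for the cube portion (indices 16-231):

    // xterm-256: 0-15 legacy colors, 16-231 a 6x6x6 RGB cube, 232-255 grayscale.
    fn cube_index(r: u8, g: u8, b: u8) -> u8 {
        debug_assert!(r < 6 && g < 6 && b < 6);
        16 + 36 * r + 6 * g + b
    }
    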

    NOTE: We’ll debug this later, but Todd’s patch to add 256-color support to xterm used semicolons as the separator between the ANSI SGR command 48 and the color index, which set off a chain reaction of ambiguity we’re still dealing with today.

    Where Did 24-Bit Color Support Come From?

    It’s well-documented how to send 8-bit and 24-bit colors to compatible terminals. Per Wikipedia:

    ESC[38;5;<n>m sets foreground color n per the palettes above.

    ESC[38;2;<r>;<g>;<b>m sets foreground color (r, g, b).

    (Again, that confusion about semicolons vs. colons, and an unused colorspace ID if colons are used. We’ll get to the bottom of that soon.)
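
    A quick way to see both forms in action, in the same spirit as my colortest.rs:

    fn main() {
        // 24-bit RGB foreground (semicolon form, most widely compatible):
        println!("\x1b[38;2;255;136;0mamber, 24-bit\x1b[0m");
        // 256-color indexed foreground:
        println!("\x1b[38;5;208mamber-ish, palette index 208\x1b[0m");
    }
    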

    But why 5? Why 2? How did any of this come about? I’d struggled enough with unexpected output that it was time to discover the ground truth.

    Finding and reading original sources led me to construct the following narrative:

    • In the 70s, ANSI standardized terminal escape sequences, resulting in ANSI X3.64 and the better-known ECMA-48.
    • The first edition of ECMA-48 is lost to time, but it probably looks much like ANSI X3.64.
    • The 2nd edition of ECMA-48 (1979) allocated SGR parameters 30-37 and 40-47 for setting 3-bit foreground and background colors, respectively.
      • By the way, these standards use the word “parameter” to mean command, and “subparameter” to mean argument, if applicable.
    • The 3rd edition (1984) introduced the concept of an implementation-defined default color for both foreground and background, and allocated parameters 39 and 49, respectively.
    • Somewhere in this timeline, vendors did ship hardware terminals with richer color support. The Wyse WY-370 introduced new color modes, including a direct-indexed 64-color palette. (See Page 86 of its Programmer’s Guide.)
    • 38 and 48 are the most important parameters for selecting colors today, but they weren’t allocated by either the 4th (1986) or 5th (1991) editions. So where did they come from? The 5th edition gives a clue:

      reserved for future standardization; intended for setting character foreground colour as specified in ISO 8613-6 [CCITT Recommendation T.416]

    • ISO 8613 was a boondoggle of a project intended to standardize and replace all proprietary document file formats. You’ve never heard of it, so it obviously failed. But its legacy lives on – ISO 8613-6 (ITU T.416) (1993) built on ECMA-48’s codes and defined parameters 38 and 48 as extended foreground and background color modes, respectively.

      The first parameter element indicates a choice between:

      • 0 implementation defined (only applicable for the character foreground colour)
      • 1 transparent;
      • 2 direct colour in RGB space;
      • 3 direct colour in CMY space;
      • 4 direct colour in CMYK space;
      • 5 indexed colour.

    There we go! That is why 5 is used for 256-color mode and 2 is 24-bit RGB.

    Careful reading also gives a clue as to the semicolon vs. colon syntax screw-up. Note the subtle use of the term “parameter element” vs. “parameter”.

    If you read ISO 8613-6 (ITU T.416) and ECMA-48 closely, it’s not explicitly stated, but they seem to indicate that unknown parameters for commands like “select graphics rendition” should be ignored. And parameters are separated with semicolons.

    That implies ESC[38;5;3m should be interpreted, in terminals that don’t support SGR 38, as “unknown, ignored (38)”, “blinking (5)”, and “italicized (3)”. The syntax should use colons to separate sub-parameter components, but something got lost along the way.

    (Now, in practice, programs are told how to communicate with their terminals via the TERM variable and the terminfo database, so I don’t know how much pain occurs in reality.)

    Thomas Dickey has done a great job documenting the history of ncurses and xterm, and, lo and behold, explains exactly the origin of the ambiguous syntax:

    We used semicolon (like other SGR parameters) for separating the R/G/B values in the escape sequence, since a copy of ITU T.416 (ISO-8613-6) which presumably clarified the use of colon for this feature was costly.

    Using semicolon was incorrect because some applications could expect their parameters to be order-independent. As used for the R/G/B values, that was order-dependent. The relevant information, by the way, is part of ECMA-48 (not ITU T.416, as mentioned in Why only 16 (or 256) colors?). Quoting from section 5.4.2 of ECMA-48, page 12, and adding emphasis (not in the standard):

    Each parameter sub-string consists of one or more bit combinations from 03/00 to 03/10; the bit combinations from 03/00 to 03/09 represent the digits ZERO to NINE; bit combination 03/10 may be used as a separator in a parameter sub-string, for example, to separate the fractional part of a decimal number from the integer part of that number.

    and later on page 78, in 8.3.117 SGR – SELECT GRAPHIC RENDITION, the description of SGR 38:

    (reserved for future standardization; intended for setting character foreground colour as specified in ISO 8613-6 [CCITT Recommendation T.416])

    Of course you will immediately recognize that 03/10 is ASCII colon, and that ISO 8613-6 necessarily refers to the encoding in a parameter sub-string. Or perhaps you will not.

    So it’s all because the ANSI and ISO standards are ridiculously expensive (to this day, these crappy PDF scans from the 90s and earlier are $200 USD!) and because they use a baroque syntax to denote ASCII characters. While writing this post, I had to keep man ascii open to match, for example, 03/10 to colon and 03/11 to semicolon. I guess it’s how standards were written back then. A Hacker News thread in the context of WezTerm gives more detail.

So, to recap, here’s what we’ve established:

    • ANSI codes are widely supported, even on Windows.
    • Truecolor support is either widely supported or (for example, on the Linux text mode terminal) at least recognized and mapped to a more limited palette.
    • Semicolon syntax is the most compatible, though the unambiguous colon syntax is slowly spreading.

    I wrote a small colortest.rs program to test color support and attributes like reverse and italics to confirm the above in every terminal I use.
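
    Here’s the flavor of the test (a trimmed-down sketch, not the actual colortest.rs): print the same orange with each syntax and see which lines render in color.

    fn main() {
        // 256-color indexed, semicolon syntax (universally understood):
        println!("\x1b[38;5;208msemicolon, indexed 208\x1b[0m");
        // 24-bit RGB, legacy semicolon syntax:
        println!("\x1b[38;2;255;135;0msemicolon, RGB\x1b[0m");
        // 24-bit RGB, standard colon syntax (note the empty
        // colour-space identifier after the 2):
        println!("\x1b[38:2::255:135:0mcolon, RGB\x1b[0m");
    }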

    Terminfo

    Now that we’ve established terminal capabilities and how to use them, the next trick is to convince software of varying lineages to detect and use the best color support available.

    Typically, this is done with the old terminfo library (or the even older termcap).

    Terminfo provides a database of terminal capabilities and the ability to generate appropriate escape sequences. The TERM environment variable tells programs which terminfo record to use. Its value is automatically forwarded over ssh connections.

    Terminfo uses ridiculous command names: infocmp, tic, toe. (Not to be confused with the unrelated tac.)

    To see the list of terminfo records installed on your host, run toe -a. (Do we /really/ need to install support for every legacy hardware terminal on modern machines? Good luck even finding a hardware terminal these days. They’re collector’s items.)

    infocmp is how you inspect the capabilities of a specific terminfo record.

    $ infocmp xterm-256color
    #       Reconstructed via infocmp from file: /lib/terminfo/x/xterm-256color
    xterm-256color|xterm with 256 colors,
            am, bce, ccc, km, mc5i, mir, msgr, npc, xenl,
            colors#0x100, cols#80, it#8, lines#24, pairs#0x10000,
            acsc=``aaffggiijjkkllmmnnooppqqrrssttuuvvwwxxyyzz{{||}}~~,
            bel=^G, blink=\E[5m, bold=\E[1m, cbt=\E[Z, civis=\E[?25l,
            clear=\E[H\E[2J, cnorm=\E[?12l\E[?25h, cr=\r,
            csr=\E[%i%p1%d;%p2%dr, cub=\E[%p1%dD, cub1=^H,
            cud=\E[%p1%dB, cud1=\n, cuf=\E[%p1%dC, cuf1=\E[C,
            cup=\E[%i%p1%d;%p2%dH, cuu=\E[%p1%dA, cuu1=\E[A,
            cvvis=\E[?12;25h, dch=\E[%p1%dP, dch1=\E[P, dim=\E[2m,
            dl=\E[%p1%dM, dl1=\E[M, ech=\E[%p1%dX, ed=\E[J, el=\E[K,
            el1=\E[1K, flash=\E[?5h$<100/>\E[?5l, home=\E[H,
            hpa=\E[%i%p1%dG, ht=^I, hts=\EH, ich=\E[%p1%d@,
            il=\E[%p1%dL, il1=\E[L, ind=\n, indn=\E[%p1%dS,
            initc=\E]4;%p1%d;rgb:%p2%{255}%*%{1000}%/%2.2X/%p3%{255}%*%{1000}%/%2.2X/%p4%{255}%*%{1000}%/%2.2X\E\\,
            invis=\E[8m, is2=\E[!p\E[?3;4l\E[4l\E>, kDC=\E[3;2~,
            kEND=\E[1;2F, kHOM=\E[1;2H, kIC=\E[2;2~, kLFT=\E[1;2D,
            kNXT=\E[6;2~, kPRV=\E[5;2~, kRIT=\E[1;2C, ka1=\EOw,
            ka3=\EOy, kb2=\EOu, kbeg=\EOE, kbs=^?, kc1=\EOq, kc3=\EOs,
            kcbt=\E[Z, kcub1=\EOD, kcud1=\EOB, kcuf1=\EOC, kcuu1=\EOA,
            kdch1=\E[3~, kend=\EOF, kent=\EOM, kf1=\EOP, kf10=\E[21~,
            kf11=\E[23~, kf12=\E[24~, kf13=\E[1;2P, kf14=\E[1;2Q,
            kf15=\E[1;2R, kf16=\E[1;2S, kf17=\E[15;2~, kf18=\E[17;2~,
            kf19=\E[18;2~, kf2=\EOQ, kf20=\E[19;2~, kf21=\E[20;2~,
            kf22=\E[21;2~, kf23=\E[23;2~, kf24=\E[24;2~,
            kf25=\E[1;5P, kf26=\E[1;5Q, kf27=\E[1;5R, kf28=\E[1;5S,
            kf29=\E[15;5~, kf3=\EOR, kf30=\E[17;5~, kf31=\E[18;5~,
            kf32=\E[19;5~, kf33=\E[20;5~, kf34=\E[21;5~,
            kf35=\E[23;5~, kf36=\E[24;5~, kf37=\E[1;6P, kf38=\E[1;6Q,
            kf39=\E[1;6R, kf4=\EOS, kf40=\E[1;6S, kf41=\E[15;6~,
            kf42=\E[17;6~, kf43=\E[18;6~, kf44=\E[19;6~,
            kf45=\E[20;6~, kf46=\E[21;6~, kf47=\E[23;6~,
            kf48=\E[24;6~, kf49=\E[1;3P, kf5=\E[15~, kf50=\E[1;3Q,
            kf51=\E[1;3R, kf52=\E[1;3S, kf53=\E[15;3~, kf54=\E[17;3~,
            kf55=\E[18;3~, kf56=\E[19;3~, kf57=\E[20;3~,
            kf58=\E[21;3~, kf59=\E[23;3~, kf6=\E[17~, kf60=\E[24;3~,
            kf61=\E[1;4P, kf62=\E[1;4Q, kf63=\E[1;4R, kf7=\E[18~,
            kf8=\E[19~, kf9=\E[20~, khome=\EOH, kich1=\E[2~,
            kind=\E[1;2B, kmous=\E[<, knp=\E[6~, kpp=\E[5~,
            kri=\E[1;2A, mc0=\E[i, mc4=\E[4i, mc5=\E[5i, meml=\El,
            memu=\Em, mgc=\E[?69l, nel=\EE, oc=\E]104\007,
            op=\E[39;49m, rc=\E8, rep=%p1%c\E[%p2%{1}%-%db,
            rev=\E[7m, ri=\EM, rin=\E[%p1%dT, ritm=\E[23m, rmacs=\E(B,
            rmam=\E[?7l, rmcup=\E[?1049l\E[23;0;0t, rmir=\E[4l,
            rmkx=\E[?1l\E>, rmm=\E[?1034l, rmso=\E[27m, rmul=\E[24m,
            rs1=\Ec\E]104\007, rs2=\E[!p\E[?3;4l\E[4l\E>, sc=\E7,
            setab=\E[%?%p1%{8}%<%t4%p1%d%e%p1%{16}%<%t10%p1%{8}%-%d%e48;5;%p1%d%;m,
            setaf=\E[%?%p1%{8}%<%t3%p1%d%e%p1%{16}%<%t9%p1%{8}%-%d%e38;5;%p1%d%;m,
            sgr=%?%p9%t\E(0%e\E(B%;\E[0%?%p6%t;1%;%?%p5%t;2%;%?%p2%t;4%;%?%p1%p3%|%t;7%;%?%p4%t;5%;%?%p7%t;8%;m,
            sgr0=\E(B\E[m, sitm=\E[3m, smacs=\E(0, smam=\E[?7h,
            smcup=\E[?1049h\E[22;0;0t, smglp=\E[?69h\E[%i%p1%ds,
            smglr=\E[?69h\E[%i%p1%d;%p2%ds,
            smgrp=\E[?69h\E[%i;%p1%ds, smir=\E[4h, smkx=\E[?1h\E=,
            smm=\E[?1034h, smso=\E[7m, smul=\E[4m, tbc=\E[3g,
            u6=\E[%i%d;%dR, u7=\E[6n, u8=\E[?%[;0123456789]c,
            u9=\E[c, vpa=\E[%i%p1%dd,
    

    There’s so much junk in there. I wonder how much only applies to non-ANSI hardware terminals, and therefore is irrelevant these days.

    For now, we’re only interested in three of these capabilities:

    • colors is how many colors this terminal supports. The standard values are 0, 8, 16, 256, and 0x1000000 (24-bit), though other values exist.
    • setaf and setab set foreground and background colors, respectively. I believe they stand for “Set ANSI Foreground” and “Set ANSI Background”. Each takes a single argument, the color number.

    Those percent signs are a parameter arithmetic and substitution language. Let’s decode setaf in particular:

    setaf=\E[%?%p1%{8}%<%t3%p1%d%e%p1%{16}%<%t9%p1%{8}%-%d%e38;5;%p1%d%;m
    
    print "\E["
    if p1 < 8 {
      print "3" p1
    } else if p1 < 16 {
      print "9" (p1 - 8)
    } else {
      print "38;5;" p1
    }
    print "m"
    

    This is the xterm-256color terminfo description. It only knows how to output the ANSI 30-37 SGR parameters, the non-standard 90-97 brights (from IBM AIX), or otherwise the 256-entry palette, using ambiguous semicolon-delimited syntax.

    Let’s compare with xterm-direct, the terminfo entry that supports RGB.

    $ infocmp xterm-256color xterm-direct
    comparing xterm-256color to xterm-direct.
        comparing booleans.
            ccc: T:F.
        comparing numbers.
            colors: 256, 16777216.
        comparing strings.
            initc: '\E]4;%p1%d;rgb:%p2%{255}%*%{1000}%/%2.2X/%p3%{255}%*%{1000}%/%2.2X/%p4%{255}%*%{1000}%/%2.2X\E\\', NULL.
            oc: '\E]104\007', NULL.
            rs1: '\Ec\E]104\007', '\Ec'.
            setab: '\E[%?%p1%{8}%<%t4%p1%d%e%p1%{16}%<%t10%p1%{8}%-%d%e48;5;%p1%d%;m', '\E[%?%p1%{8}%<%t4%p1%d%e48:2::%p1%{65536}%/%d:%p1%{256}%/%{255}%&%d:%p1%{255}%&%d%;m'.
            setaf: '\E[%?%p1%{8}%<%t3%p1%d%e%p1%{16}%<%t9%p1%{8}%-%d%e38;5;%p1%d%;m', '\E[%?%p1%{8}%<%t3%p1%d%e38:2::%p1%{65536}%/%d:%p1%{256}%/%{255}%&%d:%p1%{255}%&%d%;m'.
    

    A few things are notable:

    • xterm-direct advertises 16.7 million colors, as expected.
    • xterm-direct unsets the ccc boolean, which indicates color indices cannot have new RGB values assigned.
    • Correspondingly, xterm-direct unsets initc, oc, and rs1, also related to changing color values at runtime.
    • And of course setaf and setab change. We’ll decode that next.

    Here’s where Terminfo’s limitations cause us trouble. Terminfo and ncurses are joined at the hip. Their programming model is that there are N palette entries, each of which has a default RGB value, and terminals may support overriding any palette entry’s RGB value.

    The -direct terminals, however, are different. They represent 24-bit colors by pretending there are 16.7 million palette entries, each of which maps to the 8:8:8 RGB cube, but whose values cannot be changed.

    Now let’s look at the new setaf:

    print "\E["
    if p1 < 8 {
      print "3" p1
    } else {
      print "38:2::" (p1 / 65536) ":" ((p1 / 256) & 255) ":" (p1 & 255)
    }
    print "m"
    

    It’s not quite as simple as direct RGB. For compatibility with programs that assume the meaning of setaf, this scheme steals the darkest 7 blues, not including black, and uses them for compatibility with the basic ANSI 8 colors. Otherwise, there’s a risk of legacy programs outputting barely-visible dark blues instead of the ANSI colors they expect.

    One consequence is that the -direct schemes are incompatible with the -256color schemes, so programs must be aware that 256 colors means indexed and 16.7 million means direct, except that the darkest 7 blues are to be avoided.

    Fundamentally, terminfo has no notion of color space. So a program that was written before terminfo even supported more colors than 256 might (validly!) assume the values of the first 8, 16, or even 256 palette entries.
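
    To make the hazard concrete, here’s a sketch mirroring the decoded setaf logic above; note what happens when a program assumes index 17 is the 256-color palette’s dark blue.

    fn direct_setaf(n: u32) -> String {
        // Mirrors xterm-direct: 0-7 keep their ANSI meaning;
        // anything else is split into R:G:B bytes.
        if n < 8 {
            format!("\x1b[3{n}m")
        } else {
            format!("\x1b[38:2::{}:{}:{}m", n >> 16, (n >> 8) & 255, n & 255)
        }
    }

    fn main() {
        // Under xterm-256color, 17 is a legible dark blue palette
        // entry. Under -direct it becomes RGB #000011: nearly black.
        println!("{}barely visible{}", direct_setaf(17), "\x1b[0m");
    }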

    This explains an issue with the Rust crate termwiz that I recently ran into at work. A program expected to output colors in the xterm-256color palette, but was actually generating various illegibly-dark shades of blue. (Note: Despite the fact that the issue is open as of this writing, @quark-zju landed a fix, so current termwiz behaves reasonably.)

    This is a terminfo restriction, not a terminal restriction. As far as I know, every terminal that supports 24-bit color also supports the xterm 256-color palette and even dynamically changing their RGB values. (You can even animate the palette like The Secret of Monkey Island did!) While I appreciate Thomas Dickey’s dedication to accurately documenting history and preserving compatibility, terminfo simply isn’t great at accurate and timely descriptions of today’s vibrant ecosystem of terminal emulators.

    Kovid Goyal, author of kitty, expresses his frustration:

    To summarize, one cannot have both 256 and direct color support in one terminfo file.

    Frustrated users of the ncurses library have only themselves to blame, for choosing to use such a bad library.

    A deeper, more accurate discussion of the challenges is documented in kitty issue #879.

    In an ideal world, terminfo would have introduced a brand new capability for 24-bit RGB, leaving the adjustable 256-color palette in place.

    Modern programs should probably disregard most of terminfo and assume that 16.7 million colors implies support for the rest of the color capabilities. And maybe generate their own ANSI-compatible escape sequences… except for the next wrinkle.

    Setting TERM: Semicolons Again!

    Gripes about terminfo aside, everyone uses it, so we do need to ensure TERM is set correctly.

    While I’d like to standardize on the colon-based SGR syntax, several terminals I use only support semicolons:

    • Conhost, Windows’s built-in console.
    • Mintty claims to work (and wsltty does), but for some reason running my colortest.rs program from Cygwin only works with semicolon syntax, unless I pipe the output through cat or a file. There must be some kind of magic translation happening under the hood; I haven’t debugged it.
    • Mosh is aware, but hasn’t added support.
    • PuTTY.
    • Ubuntu 22.04 LTS ships a version of Konsole that only supports semicolons.

    Terminfo entries are built from “building blocks”, marked with a plus. xterm+direct is the building block for the standard colon-delimited syntax. xterm+indirect is the building block for legacy terminals that only support semicolon syntax.

    Searching for xterm+indirect shows which terminfo entries might work for me. vscode-direct looks the most accurate. I assume that, since it targets a Microsoft terminal, it’s probably close enough in functionality to Windows Terminal and Windows Console. I have not audited all capabilities, but it seems to work.

    The next issue was that none of my servers had the -direct terminfo entries installed! On most systems, the terminfo database comes from the ncurses-base package, but you need ncurses-term for the extended set of terminals.

    At work, we can configure a default set of installed packages for our hosts, but I have to install them manually on my unmanaged personal home machines. Also, I was still running Ubuntu 18, so I had to upgrade to a version that contained the -direct terminfo entries. (Of course, two of my headless machines failed to boot after upgrading, but that’s a different story.)

    Unfortunately, there is no terminfo entry for the Windows console. Since I started writing this post, ncurses introduced a winconsole terminfo entry, but it neither supports 24-bit color nor has shipped in any ncurses release.

    Configuring Emacs

    Emacs documents how it detects truecolor support.

    I find it helpful to M-x eval-expression (display-color-cells) to confirm whether Emacs sees 16.7 million colors.

    Emacs also documents the -direct mode terminfo limitation described above:

    Terminals with ‘RGB’ capability treat pixels #000001 - #000007 as indexed colors to maintain backward compatibility with applications that are unaware of direct color mode. Therefore the seven darkest blue shades may not be available. If this is a problem, you can always use custom terminal definition with ‘setb24’ and ‘setf24’.

    It’s worth noting that RGB is Emacs’s fallback capability. Emacs looks for the setf24 and setb24 strings first, but no terminfo entries on my machine contain those capabilities:

    $ for t in $(toe -a | cut -f1); do
        if infocmp "$t" | grep -q setf24; then
          echo "$t"
        fi
    done
    $
    

    Nesting Terminals

    conhost.exe (WSL1)
    +-------------------------+
    | mosh                    |
    | +---------------------+ |
    | | tmux                | |
    | | +-----------------+ | |
    | | | emacs terminal  | | |
    | | | +-------------+ | | |
    | | | | $ ls        | | | |
    | | | | foo bar baz | | | |
    | | | +-------------+ | | |
    | | +-----------------+ | |
    | +---------------------+ |
    +-------------------------+
    

    I’d never consciously considered this, but my typical workflow nests multiple terminals.

    • I open a graphical terminal emulator on my local desktop, Windows, Mac, or Linux.
    • I mosh to a remote machine or VM.
    • I start tmux.
    • I might then use a terminal within Emacs or Asciinema or GNU Screen.
      • Yes, there are situations where it’s useful to have some screen sessions running inside or outside of tmux.

    Each of those layers is its own implementation of the ANSI escape sequence state machine. For 24-bit color to work, every single layer has to understand and accurately translate the escape sequences from the inner TERM value’s terminfo to the outer terminfo.

    Therefore, you need recent-enough versions of all of this software. Current LTS Ubuntus only ship with mosh 1.3, so I had to enable the mosh-dev PPA.

    TERM must be set correctly within each terminal: tmux-direct within tmux, for example. There is no standard terminfo for mosh, so you have to pick something close enough.

    Graphical Terminal Emulators

    Most terminals either set TERM to a reasonable default or allow you to override TERM.

    I use Konsole, but I think you could find a similar option in whichever you use.

    Konsole's TERM value selection

    ssh

    Often, the first thing I do when opening a terminal is to ssh somewhere else. Fortunately, this is easy, as long as the remote host has the same terminfo record. ssh carries your TERM value into the new shell.

    tmux

    But then you load tmux and TERM is set to screen! To fix this, override default-terminal in your ~/.tmux.conf:

    set -g default-terminal "tmux-direct"
    

    For extra credit, consider setting tmux-direct conditionally with %if when the outer TERM supports 24-bit color, otherwise leaving the default of screen or tmux-256color. And then let me know how you did it. :P
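
    For what it’s worth, here’s one untested sketch that sidesteps %if entirely and keys off the (non-universal) COLORTERM convention:

    # Untested sketch: many emulators export COLORTERM=truecolor or
    # COLORTERM=24bit; fall back to tmux-256color otherwise.
    if-shell '[ "$COLORTERM" = truecolor ] || [ "$COLORTERM" = 24bit ]' \
        'set -g default-terminal "tmux-direct"' \
        'set -g default-terminal "tmux-256color"'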

    mosh

    While recent mosh does support 24-bit color, it only advertises 8 or 256 colors. Thus, it’s up to you to set TERM appropriately.

    Mosh aims for xterm compatibility, but unfortunately only supports semicolon syntax for SGR 38 and 48, so TERM=xterm-direct does not work. So far, I’ve found that vscode-direct is the closest to xterm-direct.

    There is no convenient “I’m running in mosh” variable, so I wrote a detect-mosh.rs Rust script and called it from .bashrc:

    unamer=$(uname -r)
    unameo=$(uname -o)
    if [[ ! "$TMUX" ]]; then
        if [[ "$unamer" == *Microsoft ]]; then
            # WSL 1
            export TERM=vscode-direct
        elif [[ "$unameo" == Cygwin ]]; then
            # Eh, could just configure mintty to set mintty-direct.
            export TERM=vscode-direct
        elif detect-mosh 2>/dev/null; then
            # This should be xterm-direct, but mosh does not understand SGR
            # colon syntax.
            export TERM=vscode-direct
        fi
    fi
    

    It works by checking whether the shell process is a child of mosh-server.
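
    The core of that check might look something like this (a Linux-only sketch that parses /proc, not the actual detect-mosh.rs):

    use std::fs;

    /// Returns (ppid, comm) for a pid, parsed from /proc/<pid>/stat.
    fn stat(pid: u32) -> Option<(u32, String)> {
        let s = fs::read_to_string(format!("/proc/{pid}/stat")).ok()?;
        // Field 2 (comm) is parenthesized and may itself contain
        // spaces, so split around the *last* ')'.
        let open = s.find('(')?;
        let close = s.rfind(')')?;
        let comm = s[open + 1..close].to_string();
        let rest = s.get(close + 2..)?;
        let ppid = rest.split_whitespace().nth(1)?.parse().ok()?;
        Some((ppid, comm))
    }

    fn main() {
        // Our parent is the shell; the shell's parent may be mosh-server.
        let shell = std::os::unix::process::parent_id();
        let is_mosh = stat(shell)
            .and_then(|(ppid, _)| stat(ppid))
            .map_or(false, |(_, comm)| comm == "mosh-server");
        std::process::exit(if is_mosh { 0 } else { 1 });
    }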

    The jury’s still out on whether it’s a good idea to compile Rust in the critical path of login, especially on underpowered hosts like my Intel Atom NAS or a Raspberry Pi.

    It Works!

    Beautiful Emacs themes everywhere!

    Emacs within tmux within mosh

    This was a ton of work, but I learned a lot, and, perhaps most importantly, I now feel confident I could debug any kind of wonky terminal behavior in the future.

    To recap:

    • Terminals don’t agree on syntax and capabilities.
    • Terminfo is how those capabilities are queried.
    • Terminfo is often limited, sometimes inaccurate, and new terminfo versions are released infrequently.

    What’s Next?

    If you were serious about writing software to take full advantage of modern terminal capabilities, it would be time to break from terminfo.

    I imagine such a project would look like this:

    • Continue to use the TERM variable because it’s well-supported.
    • Give programs knowledge of terminals independent of the age of the operating system or distribution they’re running on:
      • Programs would link with a frequently-updated (Rust?) library.
      • Said library would contain a (modern!) terminfo database representing, say, the last 10 years of terminal emulators, keyed on (name, version). Notably, the library would not pretend to support any hardware terminals, because they no longer exist. We can safely forget about padding, for example.
    • Continue to support the terminfo file format and OS-provided terminfo files on disk, with some protocol for determining which information is most-up-to-date.
    • Allow an opt-in TERMVERSION to differentiate between the capabilities of, for example, 2022’s Konsole and 2023’s Konsole.
    • Allow describing modern terminal capabilities (like 24-bit color, 256-color palette animation, URL links, Kitty’s graphics protocol) in an accurate, unambiguous format, independent of the timeline of new ncurses releases.
    • Backport modern terminal descriptions to legacy programs by providing a program to be run by .bashrc that:
      • Uses TERM and TERMVERSION to generate a binary terminfo file in $HOME/.terminfo/, which ncurses knows how to discover.
      • Generates unambiguous 24-bit color capabilities like RGB, setf24, and setb24, despite the fact that getting them added to terminfo has been politically untenable.
      • Otherwise, assumes RGB-unaware programs will assume the 256-color palette, and leaves colors#0x100, initc, oc in place. Palette animation is a useful, widely-supported feature.

    Let me know if you’re interested in such a project!

  • Reference Counting Things

    Reference counting is cheap and easy. An integer starts at one, increments on every new reference, and whoever decrements it to zero is responsible for deallocation.

    If references are shared across threads, increments and decrements must be atomic.

    Decades ago, I wrote an audio library that shipped in a couple of commercial games. Things you’d find on CD in the bargain bin at Walmart. The ABI was modeled after COM and most objects were reference-counted. At the time I’d never seen a dual-CPU system, and inc [refcount] and dec [refcount] are single instructions. It’ll be fine, right?!

    Dual-core didn’t yet exist, but some people had dual-socket boards, and we started seeing crash reports after the CDs were burned… oops.

    (On the bright side, since I was religious about maintaining stable ABIs, users could just drop the fixed DLL into place.)

    Cost of Atomics

    Atomics are more expensive than non-atomic operations. A plain inc is a handful of cycles. A lock inc, even uncontended, can be dozens.

    When C++ standardized std::shared_ptr in 2011, the committee didn’t even bother with a non-atomic version. C++ isn’t safe enough, and there was a feeling that atomic increments and decrements were common enough that they’d get optimized in hardware. That was correct – it just took a while.

    Rust’s safety guarantees, on the other hand, allow safe use of an unsynchronized Rc if you don’t want to pay for Arc.

    It’s pretty easy for reference counting overhead to show up in profiles. Sometimes it’s the accidental shared_ptr copy in a hot loop or a recursive .clone() in Rust. Last time I wrote Swift, atomic reference counts were a major cost.

    The hardware is getting better. On Apple Silicon and AMD Zen 3, uncontended atomic increments and decrements are almost as cheap as non-atomic. (Interestingly, atomics are also cheap on my 64-bit, 4-thread Intel Atom from 2011.) These optimizations are a big deal, and if all CPUs worked that way, maybe this blog post would end here.

    Alas, data centers are still filled with years-old Intel CPUs and non-Apple ARM implementations. It’s worth spending some time in software to avoid synchronization if possible.

    Avoid 0-to-1

    Here’s an easy but commonly-missed trick. Initialize your reference counts to 1.

    For whatever reason (symmetry?), it’s common to see implementations like:

    struct Object {
      std::atomic<size_t> count{0};
    };
    
    struct ObjectPtr {
      ObjectPtr(Object* p): p{p} {
        p->count.fetch_add(1, std::memory_order_relaxed);
      }
      Object* p;
    };
    

    I haven’t seen a compiler realize it can replace the initial value with 1 and avoid atomics when new objects are allocated.
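
    The fix is to initialize the count to 1 and let the first handle adopt that reference instead of incrementing. In Rust terms, a sketch:

    use std::sync::atomic::{AtomicUsize, Ordering};

    struct Object {
        count: AtomicUsize,
    }

    struct ObjectPtr(*const Object);

    impl ObjectPtr {
        // Allocation *initializes* the count to 1: no atomic RMW needed.
        fn new() -> ObjectPtr {
            ObjectPtr(Box::into_raw(Box::new(Object {
                count: AtomicUsize::new(1),
            })))
        }

        // Copying an existing handle still increments. (Drop elided.)
        fn clone_ref(&self) -> ObjectPtr {
            unsafe { (*self.0).count.fetch_add(1, Ordering::Relaxed) };
            ObjectPtr(self.0)
        }
    }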

    Avoid 1-to-0

    A typical release implementation is written:

    struct ObjectPtr {
      ~ObjectPtr() {
        if (1 == p->count.fetch_sub(1, std::memory_order_acq_rel)) {
          delete p;
        }
      }
      Object* p;
    };
    

    However, actually decrementing the count to zero is not necessary. We only need to know if we’re the last reference. Thus, we can write:

      ~ObjectPtr() {
        if (1 == p->count.load(std::memory_order_acquire) ||
            1 == p->count.fetch_sub(1, std::memory_order_acq_rel)) {
          delete p;
        }
      }
    

    Maybe the impact on code size isn’t worth it. That’s your call. On older Intel CPUs, in situations where most objects only have one reference, it can be a meaningful optimization.

    Maged Michael implemented a fancier version of this algorithm in gcc’s libstdc++.

    Implementing these two optimizations in Watchman was a material win for code that allocated or deallocated large trees.

    Biased Reference Counting

    Swift implicitly reference-counts many of its objects. When I worked at Dropbox, we measured reference counting operations as a substantial portion of our overall CPU time.

    In 2018, researchers at University of Illinois Urbana-Champaign proposed an algorithm called Biased Reference Counting that splits the reference count into two. One is biased to a specific thread and can be updated without atomic operations. The other reference count is atomic and shared among the remaining threads. Unifying these two counts requires extra bookkeeping, especially in languages like Swift or C++ where unsynchronized values can easily migrate across threads.

    The hybrid_rc Rust crate has an implementation of this algorithm that takes advantage of Rust’s type system (in particular, by not providing Send for thread-local references) to avoid extra bookkeeping.

    I’m curious if anyone uses biased reference counting in practice.

    Split Reference Counting

    Channel and promise implementations need to track two reference counts: one for readers and one for writers. When either reaches zero, the channel is closed. Waiting senders or receivers are notified that no more messages can be sent.

    Rust’s built-in channels use two atomic counters and an atomic bit. The bit is necessary to determine which thread should deallocate in the case that a thread drops the last reader exactly as another thread drops the last writer.

    It’s possible to pack all of these into a single 64-bit counter. If each half has 32 bits but the entire counter is updated atomically, no additional state is required to disambiguate who deallocates.
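
    A minimal sketch of the packing trick (overflow detection and close notification omitted):

    use std::sync::atomic::{AtomicU64, Ordering};

    // Readers count in the low 32 bits, writers in the high 32 bits.
    const READER: u64 = 1;
    const WRITER: u64 = 1 << 32;

    struct Counts(AtomicU64);

    impl Counts {
        fn new() -> Counts {
            Counts(AtomicU64::new(READER | WRITER))
        }

        // Each returns true if this decrement zeroed the *entire*
        // counter, i.e. the caller must deallocate. No extra bit is
        // needed: exactly one thread observes the transition to zero.
        fn drop_reader(&self) -> bool {
            self.0.fetch_sub(READER, Ordering::AcqRel) == READER
        }

        fn drop_writer(&self) -> bool {
            self.0.fetch_sub(WRITER, Ordering::AcqRel) == WRITER
        }
    }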

    I have a Rust implementation of the above in the splitrc crate.

    How Many Bits?

    Rust is sound: safe code must not have undefined behavior. std::mem::forget is a safe function. Therefore, given some reference-counted pointer p, safe code can run up its count in a tight loop:

    loop {
      std::mem::forget(p.clone());
    }
    

    64-bit counters are effectively infinite. Let’s hypothesize a 4 GHz CPU where increments take one cycle: 2^64 increments at 4×10^9 per second is about 4.6 billion seconds, or almost 150 years to overflow.

    In contrast, a modern CPU can overflow a 32-bit counter in seconds. You might say (and I’d agree) that a program that holds billions of references is pathological, and need not be supported. On the other hand, in Rust, safe code must never overflow and cause use-after-free.

    Therefore, any 32-bit counter (even usize and AtomicUsize on 32-bit CPUs) must detect and handle overflow.

    Rc uses usize::wrapping_add to detect wraparound. Arc reserves half the range of usize to detect overflow. This is safe under the assumption that billions of threads aren’t simultaneously incrementing the counter.
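
    A sketch in the spirit of Arc’s guard (not the actual standard library source):

    use std::sync::atomic::{AtomicUsize, Ordering};

    // Give up half the usize range. For the count to silently pass
    // this limit, isize::MAX threads would each have to be paused
    // between their fetch_add and this check, which can't happen.
    const MAX_REFCOUNT: usize = isize::MAX as usize;

    fn inc_ref(count: &AtomicUsize) {
        if count.fetch_add(1, Ordering::Relaxed) > MAX_REFCOUNT {
            std::process::abort();
        }
    }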

    Rust reference counts typically abort on overflow rather than panic. I assume this is because panics can be caught and ignored. There may be codegen benefits as well. However, in the context of long-lived server processes that concurrently handle requests, it’s nice to catch panics and fail the one buggy request instead of aborting.

    splitrc allocates a panic range and an abort range to get the best of both worlds.

    In practice, overflowing a reference count should never happen. Reference counts should never get that high. But that sounds like famous last words, and I’ll happily pay a branch and some cold code for a loud failure.

    Older versions of the Linux kernel even had a use-after-free caused by reference count overflow.

    Weak References

    Like split reference counts, supporting weak references requires maintaining two counts: a strong count and a weak count. When the strong count reaches zero, the referenced value is destructed. But the memory can’t be deallocated until both counts reach zero.

    The approach taken by Rust’s Arc is to maintain two separate counters. All strong references share an extra weak reference. When the last strong reference is dropped, the extra weak reference is dropped too.

    Memory is deallocated when the last weak reference reaches zero.
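
    Sketched in Rust (value destruction and deallocation elided):

    use std::sync::atomic::{AtomicUsize, Ordering};

    struct Counts {
        strong: AtomicUsize, // starts at 1
        weak: AtomicUsize,   // starts at 1: the strong handles' share
    }

    impl Counts {
        // Returns true if the caller must deallocate the memory.
        fn drop_strong(&self) -> bool {
            if self.strong.fetch_sub(1, Ordering::AcqRel) == 1 {
                // Last strong reference: destruct the value here,
                // then release the strong side's shared weak ref.
                self.drop_weak()
            } else {
                false
            }
        }

        fn drop_weak(&self) -> bool {
            self.weak.fetch_sub(1, Ordering::AcqRel) == 1
        }
    }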

    libc++ takes a similar approach with the interesting caveat that it starts counting at zero and waits until the counts decrement to -1.

    Supporting weak references has a small cost. You need space for two counters, and some implementations actually perform two atomic decrements when the last strong reference is dropped.

    It’s possible to do better: like splitrc, the strong and weak references can be packed into a single 64-bit integer with overflow detection on each half. Each new reference is a single atomic addition. As in the 1-to-0 optimization above, an optimistic load can avoid an atomic RMW in the common case that no weak references are alive.

    If you don’t need weak references, the Rust triomphe crate provides some faster alternatives to the standard library.

    Count First Reference From Zero or One?

    It’s typical for reference counts to start at one and decrement to zero. But that’s not the only option. As mentioned above, libc++ initializes its references to value zero, meaning one reference. Decrement checks whether the count underflows to -1.

    Unless you’re the most standard of libraries, the tiny differences in instruction selection don’t matter. But they’re fun to look at, so let’s see. (Compiler Explorer)

    Initializing values to zero is smaller in most ISAs:

    struct RC {
        size_t s;
        size_t w;
    };
    
    void init_zero(RC& rc) {
        rc.s = 0;
        rc.w = 0;
    }
    
    void init_one(RC& rc) {
        rc.s = 1;
        rc.w = 1;
    }
    

    x86-64 (gcc 13.2):

    init_zero(RC&):
            pxor    xmm0, xmm0
            movups  XMMWORD PTR [rdi], xmm0
            ret
    init_one(RC&):
            movdqa  xmm0, XMMWORD PTR .LC0[rip]
            movups  XMMWORD PTR [rdi], xmm0
            ret
    

    gcc chooses to load the pair of ones from a 128-bit constant. clang instead generates two stores.

    x86-64 (clang 17):

    init_zero(RC&):                       # @init_zero(RC&)
            xorps   xmm0, xmm0
            movups  xmmword ptr [rdi], xmm0
            ret
    init_one(RC&):                        # @init_one(RC&)
            mov     qword ptr [rdi], 1
            mov     qword ptr [rdi + 8], 1
            ret
    

    ARM64 gcc generates equivalent code to x86-64. clang on ARM64 instead broadcasts a constant 1 into a vector and stores it.

    64-bit ARM (clang 17):

    init_zero(RC&):                       // @init_zero(RC&)
            stp     xzr, xzr, [x0]
            ret
    init_one(RC&):                        // @init_one(RC&)
            mov     w8, #1                          // =0x1
            dup     v0.2d, x8
            str     q0, [x0]
            ret
    

    As expected, zero-initialization is slightly cheaper.

    Increment will generate the same instructions no matter where the count starts, of course. (If using a 32-bit counter, overflow checks are required. Choosing an overflow range that allows branching on the sign bit can generate a smaller hot path, but that’s almost independent of where to start counting.)

    Decrement is a little interesting.

    void dec_zero_exact(std::atomic<size_t>& c) {
        if (0 == c.fetch_sub(1, std::memory_order_acq_rel)) {
            dealloc();
        }
    }
    
    void dec_zero_less(std::atomic<size_t>& c) {
        using ssize_t = std::make_signed_t<size_t>;
        if (0 >= static_cast<ssize_t>(c.fetch_sub(1, std::memory_order_acq_rel))) {
            dealloc();
        }
    }
    
    void dec_one(std::atomic<size_t>& c) {
        if (1 == c.fetch_sub(1, std::memory_order_acq_rel)) {
            dealloc();
        }
    }
    

    Let’s look at x86-64:

    dec_zero_exact(std::atomic<unsigned long>&):    # @dec_zero_exact(std::atomic<unsigned long>&)
            mov        rax, -1
            lock xadd  qword ptr [rdi], rax
            test       rax, rax
            je         dealloc()@PLT                # TAILCALL
            ret
    dec_zero_less(std::atomic<unsigned long>&):     # @dec_zero_less(std::atomic<unsigned long>&)
            lock dec  qword ptr [rdi]
            jl        dealloc()@PLT                 # TAILCALL
            ret
    dec_one(std::atomic<unsigned long>&):           # @dec_one(std::atomic<unsigned long>&)
            lock dec  qword ptr [rdi]
            je        dealloc()@PLT                 # TAILCALL
            ret
    

    There are two atomic decrement instructions, lock dec and lock xadd. lock dec is slightly preferable: it has a similar cost, but its latency is one cycle less on Zen 4, and it’s smaller. (lock xadd also requires loading -1 into a register.)

    But, since it doesn’t return the previous value and only sets flags, it can only be used if a following comparison can use those flags.

    Therefore, on x86-64, counting from 1 is slightly cheaper, at least with a naive comparison. However, if we sacrifice half the range of the counter type (again, two billion should be plenty), then we can get the same benefits in the counting-from-zero decrement.

    Now let’s take a look at ARM64:

    dec_zero_exact(std::atomic<unsigned long>&):        // @dec_zero_exact(std::atomic<unsigned long>&)
            mov     x8, #-1                         // =0xffffffffffffffff
            ldaddal x8, x8, [x0]
            cbz     x8, .LBB2_2
            ret
    .LBB2_2:
            b       dealloc()
    dec_zero_less(std::atomic<unsigned long>&):         // @dec_zero_less(std::atomic<unsigned long>&)
            mov     x8, #-1                         // =0xffffffffffffffff
            ldaddal x8, x8, [x0]
            cmp     x8, #0
            b.le    .LBB3_2
            ret
    .LBB3_2:
            b       dealloc()
    dec_one(std::atomic<unsigned long>&):                // @dec_one(std::atomic<unsigned long>&)
            mov     x8, #-1                         // =0xffffffffffffffff
            ldaddal x8, x8, [x0]
            cmp     x8, #1
            b.ne    .LBB4_2
            b       dealloc()
    .LBB4_2:
            ret
    

    None of the atomic read-modify-writes on ARM64 set flags, so the value has to be explicitly compared anyway. The only difference is that comparing equality with zero is one fewer instruction.

    So there we go. All of this instruction selection likely comes out in the wash. I was hoping for a Dijkstra-like clear winner. The strongest argument for starting counts at 1 is that the counter never underflows, allowing multiple counts to be packed into a single integer.

    False Sharing

    Where the reference count is positioned in the object can matter. If the reference count ends up in the same cache line as other frequently-modified data, concurrent workloads are penalized. By reducing false sharing and making RCU more scalable, Intel improved highly concurrent network performance in Linux by 2-130%.
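
    One common mitigation, sketched under the assumption of 64-byte cache lines: pad the count onto its own line so stores to neighboring fields don’t bounce it between cores.

    use std::sync::atomic::AtomicUsize;

    // repr(align(64)) rounds the struct's size up to 64 bytes, so the
    // count occupies a full cache line by itself.
    #[repr(align(64))]
    struct PaddedCount(AtomicUsize);

    struct Object {
        count: PaddedCount,
        // Frequently-written fields land on other cache lines.
        hot_data: u64,
    }

    Of course, burning a full cache line per object is only worth it for hot, widely shared objects.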

    There may be value in abstracting the count’s location through a vtable, like COM does.

    I’ll Stop Here

    Reference counting is a long-studied topic. There are saturating counts, counts that saturate into mark-and-sweep, counts that saturate into (logged) leaks, cycle detection, weighted reference counts, deferred increments, combining updates, and external counts, but you can read more elsewhere.

    I mostly wanted to share some things I’ve recently run into.