There were some great comments on my introduction to ibb and they're worthy of discussion.

On the Performance of ReadDirectoryChangesW

Brandon Ehle is absolutely right when he says ReadDirectoryChangesW is not free. Many modern applications monitor directory trees for changes, and on a fast disk such as my Intel SSD, "svn up" can quickly create and destroy thousands of zero-byte lockfiles. With Visual Studio and ibb_server both running, I have watched each of them peg a core just handling the resulting filesystem events.
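
For context, here is roughly what a watcher has to do for every burst of changes. This is a minimal, synchronous sketch against the raw Win32 API; the watched path and notify filters are placeholders, and a real server like ibb_server would presumably use overlapped I/O rather than blocking on a single call:

```cpp
// Minimal sketch: watching a directory tree with ReadDirectoryChangesW.
#include <windows.h>
#include <cstdio>

int main() {
    // Open the directory with FILE_LIST_DIRECTORY access; backup semantics
    // are required to get a handle to a directory rather than a file.
    HANDLE dir = CreateFileW(
        L"C:\\project",  // hypothetical tree to watch
        FILE_LIST_DIRECTORY,
        FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
        NULL,
        OPEN_EXISTING,
        FILE_FLAG_BACKUP_SEMANTICS,
        NULL);
    if (dir == INVALID_HANDLE_VALUE) {
        return 1;
    }

    alignas(DWORD) char buffer[64 * 1024];
    DWORD bytesReturned;

    // Every create, delete, rename, or write under the tree lands in this
    // buffer. A burst of thousands of lockfiles means thousands of records
    // to decode -- that's the CPU cost Brandon is talking about.
    while (ReadDirectoryChangesW(
               dir, buffer, sizeof(buffer),
               TRUE,  // watch the whole subtree
               FILE_NOTIFY_CHANGE_FILE_NAME | FILE_NOTIFY_CHANGE_LAST_WRITE,
               &bytesReturned, NULL, NULL)) {
        FILE_NOTIFY_INFORMATION* info = (FILE_NOTIFY_INFORMATION*)buffer;
        for (;;) {
            wprintf(L"action=%lu file=%.*s\n",
                    info->Action,
                    (int)(info->FileNameLength / sizeof(WCHAR)),
                    info->FileName);
            if (info->NextEntryOffset == 0) break;
            info = (FILE_NOTIFY_INFORMATION*)((char*)info + info->NextEntryOffset);
        }
    }
    CloseHandle(dir);
    return 0;
}
```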

While filesystem monitors consume a great deal of CPU in aggregate, the important metric is latency. It doesn't actually matter that my CPU is pegged while I "svn up": the factor limiting my efficiency is the sheer number of system calls svn makes against the OS. Cores are cheap and getting cheaper, but latency in computing systems isn't improving. Put another way, each ReadDirectoryChangesW notification adds a small constant amount of work to what is already an O(files) process: the root problem is that "svn up" needs to create and destroy thousands of lockfiles in the first place. (I assume a small, roughly constant set of tools running at any given time; assigning one core per tool seems realistic enough.)

Since common-case latency is the primary constraint to optimize, converting an O(n) operation into O(1) is a huge win; going from O(n) to O(k*n) for some small constant k matters far less.
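
To illustrate the distinction, here is a small sketch of the per-change bookkeeping a resident watcher can do so that each build only touches what actually changed. The names are hypothetical, not ibb's actual data structures:

```cpp
// Sketch: how change notifications turn an O(files) scan into O(changes) per build.
#include <mutex>
#include <string>
#include <unordered_set>

class FileStateCache {
    std::mutex mutex_;
    std::unordered_set<std::wstring> dirty_;  // paths reported by the watcher

public:
    // Called from the watcher thread for every notification record.
    // This is the small constant k paid per filesystem event.
    void markDirty(const std::wstring& path) {
        std::lock_guard<std::mutex> lock(mutex_);
        dirty_.insert(path);
    }

    // Called at the start of a build: only these paths need to be re-stat'ed
    // and re-hashed, no matter how large the tree is.
    std::unordered_set<std::wstring> takeDirty() {
        std::lock_guard<std::mutex> lock(mutex_);
        std::unordered_set<std::wstring> changed;
        changed.swap(dirty_);
        return changed;
    }
};
```

The per-notification cost is the constant-factor overhead described above; the payoff is that the build itself scales with the number of changes rather than the size of the tree.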

I might also argue that O(n) use cases should be eliminated where possible. "svn up" already knows the changeset coming from the server; if it also knew the changeset on the client without locking and scanning every directory, the whole operation would become O(changeset + localchanges).

What if you stop ibb_server and rebuild?

Then the next build is O(n), just like SCons or Make, but there's no reason it needs to be any slower than they are. Build signatures should definitely be cached in a local database, so a build after a restart only has to rescan the files in the DAG rather than recompile them. Recompiling from scratch after every reboot would be a dealbreaker.
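
As a rough illustration, a persistent signature cache might look something like the following, assuming a signature of file size plus modification time keyed by path. This is not ibb's actual design; a real implementation would presumably prefer content hashes and a sturdier storage format than a whitespace-separated text file:

```cpp
// Sketch: persisting build signatures so a cold start is a rescan, not a rebuild.
#include <cstdint>
#include <fstream>
#include <string>
#include <unordered_map>

struct Signature {
    uint64_t size;
    uint64_t mtime;  // a content hash would be more robust than size + mtime
};

// Keyed by file path. Paths containing spaces would need real escaping;
// the format here is purely illustrative.
using SignatureDb = std::unordered_map<std::string, Signature>;

// Load the signatures written by the previous session.
SignatureDb loadSignatures(const std::string& dbPath) {
    SignatureDb db;
    std::ifstream in(dbPath);
    std::string file;
    Signature sig;
    while (in >> file >> sig.size >> sig.mtime) {
        db[file] = sig;
    }
    return db;
}

// Persist signatures at shutdown, or incrementally after each build.
void saveSignatures(const std::string& dbPath, const SignatureDb& db) {
    std::ofstream out(dbPath, std::ios::trunc);
    for (const auto& entry : db) {
        out << entry.first << ' '
            << entry.second.size << ' '
            << entry.second.mtime << '\n';
    }
}
```

On restart, the build walks the DAG, re-stats or re-hashes each file, and compares against the cached signature; only mismatches trigger recompilation.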