6 tips for writing robust, maintainable unit tests

Unit testing is one of the cornerstones of modern software development, but there’s a surprising lack of advice about how to write good unit tests. That’s a shame, because bad unit tests are worse than no unit tests at all. Over the past decade at Electric Cloud, I’ve written thousands of tests — a full build across all platforms runs over 100,000 tests. In this post I’ll share some tips for writing robust, maintainable tests. I learned these the hard way, but hopefully you can learn from my mistakes.

A key focus here is on eliminating so-called “flaky tests”: those that work almost all the time, but fail once-in-a-great-while for reasons unrelated to the code under test. Such unreliable tests erode confidence in the test suite and even in the value of unit testing itself. In the worst case, a history of failures due to flaky tests can cause people to ignore sporadic-but-genuine test failures, allowing rarely seen but legitimate bugs into the wild.

If that’s not enough to convince you to eliminate flaky tests, consider this: suppose each of your flaky tests fails just one time out of every 10,000 runs, and that you have a thousand such tests overall. Each build then takes a thousand independent 1-in-10,000 chances at a spurious failure, so roughly one in ten of your CI builds (about 10%) will fail due to flaky tests.

Now that I’ve got your attention, let’s see how to write better tests. Got some tips of your own? Add them in the comments!

1. Never use hardcoded network port numbers

The software I write involves network communication, which means that many of the tests create network sockets. In pseudo-code, a test for the server component looks something like this:

  1. Cleanup the previous server instance (if any).
  2. Instantiate the server on port 6003.
  3. Connect to the server via port 6003.
  4. Send some message to the server via the socket.
  5. Assert that the response received is correct.

At first glance this seems pretty safe: no standard service uses port number 6003, so there won’t be any contention with third parties for that port, and by front-loading the cleanup of the server we ensure that there won’t be any contention with our other tests for the port. And, of course, it (seems to) work! In fact, it probably works almost every time you run it. But rest assured: one day, seemingly at random, this test will fail.

For us the failures started after literally years and tens of thousands of executions. We never saw the same test fail twice, but the failure mode was always the same: “Only one usage of each socket address (protocol/network address/port) is normally permitted.” I wish I could say with certainty why the failures started happening — I suspect that our anti-virus software transiently grabs a dynamic port, and sometimes it just happens to grab the port that we planned to use.

Fortunately the solution is simple: instead of using a hardcoded port number, use a dynamic port assigned by the operating system. Usually that means binding to port number zero. This is slightly less convenient than a hardcoded port number, because you’ll have to query the socket after it’s bound to determine the actual port number, but that’s a small price to pay to be confident you’ll never have a spurious test failure due to this mistake.
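
Here’s roughly what that looks like in Python: bind to port zero, then ask the socket which port it was actually given.

import socket

# Ask the OS for any free port by binding to port 0, then query the
# socket to learn which port was actually assigned.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]   # use this port for the client connection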

2. Never use “sleep()” to synchronize test threads.

A common mistake when writing tests that involve multiple threads is to attempt to use sleep() to synchronize events in different threads. For example, you may have a server thread which opens a socket and waits for a connection. You want to test the behavior of the server thread, but you have to be sure that the socket is opened and ready to accept connections before you try to connect to it. A quick-and-dirty approach might look like this:

  1. Start server thread.
  2. Sleep a bit to give the thread time to get started.
  3. Open a connection to the server port.

There are some problems with this strategy. If you set the delay too short, occasionally the test will fail because the server socket won’t be ready — when you try to open the client-side connection you’ll get a connection refused error. Conversely, you can make the test fairly reliable by setting the delay very long — multiple seconds — but then the test will take at least that long to run, every time you run it.

Instead you should use a condition variable to synchronize the two threads. If that is not practical or not possible, a retry loop is a decent alternative. In that case, make sure that your loop doesn’t retry so frantically that it starves the other thread of CPU cycles to make progress. I like to use an exponential backoff, so initially I get a few retries at very short intervals, then the intervals become progressively longer to give the other thread more time to work:

import time

delay = 20              # initial delay, in milliseconds
total = 0
while total < 5000:     # give up after five seconds
    # Break out of the loop if the socket can be opened;
    # otherwise fall through to sleep and retry.
    time.sleep(delay / 1000.0)
    total += delay
    delay += delay      # exponential backoff
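
As for the preferred condition-variable approach, here’s a minimal sketch using Python’s threading.Event, assuming the server thread is able to signal its own readiness:

import threading

ready = threading.Event()

def server_thread():
    # ... open the listening socket ...
    ready.set()              # signal that the socket is ready
    # ... accept and handle connections ...

# In the test:
t = threading.Thread(target=server_thread)
t.start()
assert ready.wait(timeout=5), "server never became ready"
# ... now it is safe to connect ...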

3. Never rely on timing data to verify behavior.

Another mistake is using the observed duration of an operation to assert that a particular behavior has been implemented. For example, to test that a socket read operation times out if no data is received within a set period of time, you might write test code like this:

  1. Set socket timeout to 200ms
  2. Mark start time
  3. Attempt to read from socket
  4. Mark end time
  5. Assert that end minus start is between 200ms and 250ms

The problem is that your test runner might get put to sleep between step 2 and 3, and again between step 3 and step 4 — there’s just no way to control what else might be happening on the computer executing the tests. That means that the delta between the times could be significantly higher than the 200ms you expect. Depending on your hardware and timers, it could even be less than 200ms, despite your code being implemented correctly!

There are at least two robust alternatives to implement this test. The first is to add a counter to the code under test, to be incremented whenever the read operation times out. In the test you could check the value before and after the attempted read, and reasonably expect the value to have increased by exactly one. The second approach is to forgo the explicit validation altogether — if the test completes, then the timeout must be operating correctly. If the test hangs, then the timeout must not be operating.
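
Here’s a minimal sketch of the counter-based approach; reader and its timeout_count attribute are hypothetical stand-ins for whatever your code under test actually exposes:

# 'reader' and 'timeout_count' are hypothetical names for the unit
# under test and its timeout counter.
before = reader.timeout_count
reader.read()                 # no data is sent, so this should time out
after = reader.timeout_count
assert after == before + 1    # the timeout path ran exactly once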

4. Never rely on implicit file timestamps.

If your software is sensitive to file timestamps, such as for determining whether file X is newer than file Y, then avoid trusting implicit timestamps in your tests. For example, you might write test code like this:

  1. Create “old” file Y.
  2. Create “new” file X.
  3. Verify that the unit under test behaves correctly given that X is newer than Y.

Superficially this looks sound: X is created after Y, so it should have a timestamp later than Y. Unless, of course, it doesn’t, which can happen for a variety of odd reasons. For example, the system clock might get adjusted underneath your feet, due to NTP or another time-synchronization system. I’ve even seen cases where the file creations happen so quickly that the two files effectively have identical timestamps!

The fix is simple: explicitly set the timestamps on the files to ensure that the relationship is as expected. Generally you should specify a difference of at least 2 full seconds, to accommodate filesystems that have very low resolution timestamps. You should also avoid relying on subsecond timestamp resolution, again to accommodate filesystems that don’t support very high-resolution timestamps.
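
In Python, for example, you can force the relationship explicitly with os.utime:

import os
import time

# Make Y a full two seconds older than X, rather than trusting creation order.
now = time.time()
os.utime("Y", (now - 2, now - 2))   # (atime, mtime)
os.utime("X", (now, now))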

5. Include diagnostic information in test assertions.

You can save yourself a lot of trouble by arranging your test assertions so that they provide detailed information about any failures, rather than simply telling you whether the assertion passed. For example, here’s an unhelpful assertion:

CPPUNIT_ASSERT(errors.empty());

If the errors variable contains error text, the assertion will fail — but you’ll have no idea what the errors were, and thus you’ll have no idea what went wrong in the test. In contrast, here’s an informative assertion:

CPPUNIT_ASSERT_EQUAL(string(""), errors);

Now if the assertion fails the test harness will show you the value of errors, so you’ll have some useful information to start your debugging.
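
The same idea applies in other frameworks. In Python’s unittest, for instance, an equality assertion reports the offending value on failure, while a bare boolean assertion does not:

# Inside a unittest.TestCase test method:
self.assertTrue(errors == "")    # failure says only "False is not true"
self.assertEqual("", errors)     # failure prints the actual contents of errors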

6. Include an explanation of the test in the comments.

Finally, don’t forget to put comments in your test code! Explain how the test works, and why you believe the test actually exercises the feature that you think it tests. It may seem a bit tedious, but remember that the rest of your team may not be as familiar with the code as you are, and they may not know what steps are needed to elicit a particular response from the code. For that matter, in a few months or years you may not remember how to test what you’re trying to test. Such comments are invaluable when updating tests after a refactoring, to understand how the test should be adjusted, as well as when debugging — to understand why a test failed to expose a defect. Here’s an example:

# Test methodology: create a Foo object, then try to set
# the Froznitz attribute to 5. This should produce an error
# because 5 is not a valid Froznitz value. See that the right
# type of exception is thrown, and that the error text is
# correct.

Summary

Everybody knows it’s important to write unit tests. Following the suggestions here will help make sure that your tests are reliable and maintainable. If you have tips of your own, add them in the comments!

Rapidly detecting Linux kernel source features for driver deployment

A while back I wrote about genconfig.sh, a technique for auto-detecting Linux kernel features, in order to improve portability of an open source, out-of-tree filesystem driver I developed as part of ElectricAccelerator. genconfig.sh has enabled me to maintain compatibility across a wide range of Linux kernels with relative ease, but recently I noticed that the script was unacceptably slow. On the virtual machines in our test lab, genconfig.sh required nearly 65 seconds to execute. For my 11-person team, a conservative estimate of time wasted waiting for genconfig.sh is nearly an entire person-month per year. With a little effort, I was able to reduce execution time to about 7 seconds, nearly 10x faster! Here’s how I did it.

A brief review of genconfig.sh

genconfig.sh is a technique for detecting the source code features of a Linux kernel. Like autoconf configure scripts, genconfig.sh uses a series of trivial C source files to probe for various kernel source features, like the presence or absence of the big kernel lock. For example, here’s the code used to determine whether the kernel has the set_nlink() helper function:

#include <linux/fs.h>
void dummy(struct inode *i)
{
    set_nlink(i, 0);
}

If a particular test file compiles successfully, the feature is present; if the compile fails, the feature is absent. The test results are used to set a series of C preprocessor #define directives, which in turn are used to conditionally compile driver code suitable for the host kernel.

Reaching the breaking point

When I first implemented genconfig.sh in 2009 we only supported a few versions of the Linux kernel. Since then our support matrix has swollen to include every variant from 2.6.9 through 3.5.0, including quirky “enterprise” distributions that habitually backport advanced features without changing the reported kernel version. But platform support tends to be a mostly one-way street: once something is in the matrix, it’s very hard to pull it out. As a consequence, the number of feature tests in genconfig.sh has grown too, from about a dozen in the original implementation to over 50 in the latest version. Here’s a real-time recording of a recent version of genconfig.sh on one of the virtual machines in our test lab:

genconfig.sh executing on a test system

Accelerator actually has two instances of genconfig.sh, one for each of the two kernel drivers we bundle, which means every time we install Accelerator we burn about 2 minutes waiting for genconfig.sh — 25% of the 8 minutes total it takes to run the install. All told I think a conservative estimate is that this costs my team nearly one full person-month of time per year, between time waiting for CI builds (which do automated installs), time waiting for manual installs (for testing and verification) and my own time spent waiting when I’m making updates to support new kernel versions.

genconfig.sh: The Next Generation

I had a hunch about the source of the performance problem: the Linux kernel build system. Remember, the pattern repeated for each feature test in the original genconfig.sh is as follows:

  1. Emit a simple C source file, called test.c.
  2. Invoke the kernel build system, using make and a trivial kernel module makefile:
    conftest-objs := test.o
    obj-m := conftest.o
    EXTRA_CFLAGS += -Werror
  3. Check the exit status from make to decide whether the test succeeded or failed.

The C sources used to probe for features are trivial, so it seemed unlikely that the compilation itself would take so long. But we don’t have to speculate — if we use Electric Make instead of GNU make to run the test, we can use the annotated build log and ElectricInsight to see exactly what’s going on:

ElectricInsight visualization of a Linux kernel module build

Overall, using the kernel build system to compile this single file takes nearly 2 seconds — not a big deal with only a few tests, but it adds up quickly. To be clear, the only part we actually care about is the box labeled /root/__conftest__/test.o, which is about 1/4 second. The remaining 1 1/2 seconds? Pure overhead. Perhaps most surprising is the amount of time burned just parsing the kernel makefiles — the huge bright cyan box on the left edge, as well as the smaller bright cyan boxes in the middle. Nearly 50% of the total time is just spent parsing!

At this point an idea struck me: there’s no particular reason genconfig.sh must invoke the kernel build system separately for each probe. Why not write out all the probe files upfront and invoke the kernel build system just once to compile them all in a single pass? In fact, with this strategy you can even use parallel make (eg, make -j 4) to eke out a little bit more speed.

Of course, you can’t use the exit status from make with this approach, since there’s only one invocation for many tests. Instead, genconfig.sh can give each test file a descriptive name, and then check for the existence of the corresponding .o file after make finishes. If the file is present, the feature is present; otherwise the feature is absent. Here’s the revised algorithm:

  1. Emit a distinct C source file for each kernel feature test. For example, the sample shown above might be created as set_nlink.c. Another might be write_begin.c.
  2. Invoke the kernel build system, using make -j 4 and a slightly less trivial kernel module makefile:
    conftest-objs := set_nlink.o write_begin.o ...
    obj-m := conftest.o
    EXTRA_CFLAGS += -Werror
  3. Check for the existence of each .o file, using something like if [ -f set_nlink.o ] ; then … ; fi to decide whether the test succeeded or failed.
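
To make that concrete, here is a rough sketch of the revised flow, written in Python for brevity (the real genconfig.sh is a shell script), with purely illustrative probe names:

import os
import subprocess

# Illustrative probe table: object file base name -> #define to emit.
# One .c file per probe (set_nlink.c, write_begin.c, ...) is assumed
# to have been written out already.
PROBES = {
    "set_nlink": "HAVE_SET_NLINK",
    "write_begin": "HAVE_WRITE_BEGIN",
}

# Invoke the kernel build system once for all probes; individual
# compile failures are expected, so ignore the overall exit status.
subprocess.call(["make", "-j", "4"])

# A probe succeeded if and only if its object file was produced.
for base, define in PROBES.items():
    if os.path.exists(base + ".o"):
        print("#define " + define)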

The net result? After an afternoon of refactoring, genconfig.sh now completes in about 7 seconds, nearly 10x faster than the original:

Updated genconfig.sh executing on a test system

The only drawback I can see is that the script no longer has that satisfying step-by-step output, instead dumping everything out at once after a brief pause. I’m perfectly happy to trade that for the improved performance!

Future Work and Availability

This new strategy has significantly improved the usability of genconfig.sh. After I finished the conversion, I wondered if the same technique could be applied to autoconf configure scripts. Unfortunately I find autoconf nearly impossible to work with, so I didn’t make much progress exploring that idea. Perhaps one of my more daring (or stubborn) readers will take the ball and run with it there. If you do, please comment below to let us know the results!

The new version of genconfig.sh is used in ElectricAccelerator 6.2, and can also be seen in the open source Loopback File System (lofs) on my Github repo.

Measuring the Electric File System

Somebody asked me the other day what portion of the Electric File System (EFS) is shared code versus platform-specific code. The EFS, if you don’t know, is a custom filesystem driver and part of ElectricAccelerator. It enables us to virtualize the filesystem, so that each build job sees the correct view of the filesystem according to its logical position in the build, even if jobs are run out of order. It also provides the file usage information that powers our conflict detection algorithm. As a filesystem driver, the EFS is obviously tightly coupled to the platforms it’s used on, and the effort of porting it is one reason we don’t support more platforms than we do now — not that a dozen variants of Windows, 16 flavors of Linux and several versions of Solaris is anything to be ashamed of!

Anyway, the question intrigued me, and I found the answer quite surprising.

Note that here I’m measuring only actual code lines, exclusive of comments and whitespace, as of the upcoming 6.1 release. In total, the Windows version of the EFS has about 2x the lines of code that either the Solaris or Linux ports have. The platform-specific portion of the code is more than 6x greater!

Why is the Windows port so much bigger? To answer that question, I started looking at the historical size of the three ports, which led to the following graph showing the total lines of code for (almost) every release we’ve made. Unfortunately our first two releases, 1.0 and 2.0, have been lost to the ether at some point over the past ten years, but I was able to collect data for every release starting from 2.1.

Here you can see that the Windows port has always been larger than the others, but it’s really just a handful of Windows-specific features that blew up the footprint so much. The first of those was support for the Windows FastIO filesystem interface, an alternative “fast path” through the kernel that in theory enables higher throughput to and from the filesystem. It took us two tries to get that feature implemented, as shown on the graph, and all told that contributed about 7,000 lines of code. The addition of FastIO to the filesystem means that the Windows driver has essentially three I/O interfaces: standard I/O, memory-mapped I/O and FastIO. In comparison, on Linux and Solaris you have only two: standard I/O and memory-mapped I/O.

The second significant difference between the platforms is that on Windows the EFS has to virtualize the registry in addition to the filesystem. In the 4.3 release we significantly enhanced that portion of the driver to allow full versioning of the registry along the same lines that the filesystem has always done. That feature added about 1,000 lines of code.

I marked a couple other points of interest on this graph as well. First, the addition of the “lightweight EFS” feature, which is when we switched from using RAM to store the contents of all files in the EFS to using temporary storage on the local filesystem for large files. That enabled Accelerator to handle significantly larger files, but added a fair amount of code. Finally, in the most recent release you can actually see a small decrease in the total lines of code on Solaris and Linux, which reflects the long-overdue removal of code that was needed to support legacy versions of those platforms (such as the 2.4.x Linux kernel).

I was surprised as well by the low rate of growth in the Solaris and Linux ports. Originally the Linux port supported only one version of the Linux kernel, but today it supports about sixteen. I guess this result reveals that the difficulty in porting to each new Linux version lies not so much in the amount of code to be added, but in figuring out exactly which few lines to add!

In fact, after the 4.4 release in early 2009, the growth has been relatively slow on all platforms, despite the addition of major new features to Accelerator as a whole over the last several releases. The reason is simply that most feature development involves changes to other components (primarily emake), rather than to the filesystem driver.

One last metric to look at is the number of unit tests for the code in the EFS. We don’t practice test-driven development for the most part, but we do place a strong emphasis on unit testing. Developers are expected to write unit tests for every change, and we strive to keep the tests isomorphic to the code to help ensure we have good coverage. Here’s how the total number of unit tests has grown over the years.

Note that this is looking only at unit tests for the EFS! We have thousands more unit tests for the other components of Accelerator, as well as thousands of integration tests.

Thankfully for my credibility, the growth in number of tests roughly matches the growth in lines of code! But I’m surprised that the ratio of tests to code is not more consistent across the platforms — that suggests a possible area for improvement. Rather than being discouraged by that though, I’m pleased with this result. After all, what’s the point of measuring something if you don’t use that data to improve?

Makefile hacks: automatically split long command lines

If you’ve worked on a large build system you’ve probably bumped into this error, or one like this:

gmake: execvp: /bin/sh: Argument list too long

This error means the length of some command-line in your makefile has grown past the system limit, which is typically in the 32 to 256 kilobyte range. It’s surprisingly easy to hit that limit. You start with a small list of object files to be linked together. Over time you add more, and the command-line gets a little longer. Add a few more and it gets longer still. Before you know it you have a monster command-line and your build starts failing.

The solution to this problem is simple: split the long command-line into several shorter command-lines. For example, ar r libraries/lib.a objects/foo.o objects/bar.o objects/baz.o objects/boo.o objects/bang.o becomes something like this:

ar r libraries/lib.a objects/foo.o objects/bar.o
ar r libraries/lib.a objects/baz.o objects/boo.o
ar r libraries/lib.a objects/bang.o

Simple in theory, but tedious to do by hand. And doing it manually is like putting a ticking time-bomb into your makefile — it’s only a matter of time before your build grows enough that you have to go through this exercise again.

I recently ran across a clever solution that exploits the $(eval) function in GNU make to split long command-lines automatically, eliminating the tedium and the time-bomb. After I show you the solution, I’ll explain it piece-by-piece.

The max_args function

The solution is a user-defined function called max_args that splits long command-lines into equal-length chunks:

define max_args
$(eval _args:=)
$(foreach obj,$3,$(eval _args+=$(obj))$(if $(word $2,$(_args)),$1$(_args)$(EOL)$(eval _args:=)))
$(if $(_args),$1$(_args))
endef
define EOL
endef

And an example of its use:

OBJS:=a b c d e f g h
all:
	@$(call max_args,echo,2,$(OBJS))

The max_args function takes three parameters: the base command-line, the number of arguments per “chunk”, and the complete list of arguments. It expands to a series of command-lines — one for each chunk of arguments.
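
With the example above, the call expands to four separate command-lines: echo a b, echo c d, echo e f and echo g h.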

The trick behind max_args is the use of $(eval) to update a variable as a side-effect of gmake’s regular variable expansion activity. If you’re not familiar with gmake variable expansion, here’s a quick rundown: when gmake finds a variable or function reference, like $(something), it replaces the entire reference with an expanded value. In the case of a variable, that’s just the value of the variable. Most variables in gmake are recursive, which means that if the variable value itself contains embedded variable references, those will be expanded as well, recursively. In the case of a function, gmake evaluates the function, and replaces the reference with the computed value.

The meat of max_args is the third line of the definition. It starts with the $(foreach) function, which evaluates its third argument, the body of the loop, once for each word in its second argument — in this case, the list of objects passed in the call to max_args.

In max_args, the loop body has two components. The first is a call to $(eval), which simply appends the current value of the loop variable to an accumulator called _args.

The second component of the loop body uses $(if) and $(word) to check the length of _args. The $(word) function returns the nth word from a list, or an empty string if there are fewer than n words in the list. The $(if) function expands its second argument (the then clause) only if its first argument (the condition) expands to a non-empty string, so together these functions check if _args has the desired number of words, and if so the then clause of the $(if) is expanded.

The then clause of this $(if) has two components. The first constructs a completed command-line by concatenating the base command-line, here given by $1, the first argument to the original max_args call; the accumulated arguments; and a newline character. Thanks to the rules of gmake expansion, this command-line is added to the overall expansion result for the max_args function. The second part of the then clause uses $(eval) to reset the accumulator.

If the chunk size does not evenly divide the number of arguments, the stragglers are emitted in a final command-line on the last line of max_args.

Limitations

max_args is handy but it has one significant limitation: command-line length limits are based on the number of bytes in the command-line, not the number of words in it. Unfortunately, gmake has no built-in way to count the number of characters in a string. gmake does provide the $(words) built-in, so that’s what max_args uses. That just means that to use it effectively you have to take a guess at the number of arguments that will fit in a single command-line, for example by dividing the length limit by the average number of characters in each argument, then subtracting some to allow some buffer for outliers.

6 reasons your development team should be using instant messaging

The ElectricAccelerator development team sits at desks less than 30 feet apart, but despite our close proximity, we don’t often speak to one another. To an outside observer this may seem to be a sign of dysfunction in the team — after all, developers have to communicate to work effectively. Some people think we’re obviously not communicating, but the truth is that we’re not obviously communicating! That’s because we use instant messaging for most of our communications, including status updates, technical collaboration and even code reviews, rather than face-to-face conversations. I believe this has made my team more connected and more productive. Here are six reasons why instant messaging trumps face-to-face conversations for software teams.

1. Logging

The key advantage of instant messaging is that all conversations are logged automatically. As a result I’ve got records of every conversation with every member of my team for the past two years. That’s proven invaluable on a few occasions, to provide additional context for decisions made weeks or months earlier. Obviously this is not a replacement for other types of project documentation, but it is a fantastic supplement.

2. Non-intrusive

The second most important advantage of instant messaging is that it’s relatively non-intrusive, at least compared to a face-to-face conversation. We all know how important it is to get into and preserve a state of flow when programming. Spoken conversations, by social convention, command your immediate attention — effectively an interrupt of the highest order. When somebody comes to my desk to ask me something in person, they are implicitly saying, “What I have to say to you is more important than anything else you might be doing right now.” Sometimes that’s true, but many times it’s not. And yet every time somebody initiates a face-to-face conversation with me, it destroys whatever flow I might have developed.

In contrast, instant messaging allows me to defer a response until I reach a good breaking point, so people can ask questions without interrupting me.

3. Non-disruptive

Our office has an open floor plan, which means that instead of individual offices or cubicles, we have a single big room. This layout worked very well when the company had only 6 people, who were all working on the same project. Now the company employs over 100 people, with two separate development teams working on completely different products, so the open layout doesn’t work quite so well. Conversations between other people can be very distracting when you’re heads down on a tricky technical problem. By using instant messaging instead of face-to-face conversations, we significantly reduce the distraction for our colleagues.

4. Simultaneous conversations

Carrying on multiple face-to-face conversations on disparate topics is practically impossible, but doing the same via instant messenger is simple. Every IM client I’ve seen displays the last several messages of each active conversation, so you have context when a new message arrives. That significantly reduces the mental burden associated with each conversation, so it becomes possible to sustain several simultaneously. I often have five conversations “active” during the work day, and sometimes even more.

5. Consistency

Unlike face-to-face conversations, IM works well regardless of the relative locations of the conversants. That means that it doesn’t matter if my colleague is in the office with me, or working from home, or working from a customer site, or halfway around the world. I can use the same tool to communicate with them, which in turn means I don’t have to change the way I work to accommodate changes in the way they are working.

6. Versatility

One final advantage of instant messaging compared to face-to-face conversation is the versatility of the medium. I can trivially share a code fragment with somebody via IM, or a link to an online resource. Try doing that in a face-to-face conversation: “Yeah, you should check out the STL reference docs, at aich tee tee pee colon slash slash double you double you double you dot …”.

Instant messaging: give it a try

If you’re not already using instant messaging in your development team, give it a try. There are multiple free IM services out there, and there are good free IM clients on every platform, including smart phones, so you’ve really got nothing to lose — but you might gain a more efficient, productive team. It worked for us.

Why is SCons so slow?

UPDATE: If you’re coming from Why SCons is not slow, you should read my response

A while back, I did a series of posts exploring the performance of SCons on builds of various sizes. The results were dismal: SCons demonstrated a classic O(n^2) growth in runtime, meaning that the length of the build grew in proportion to the square of the number of files in the build, rather than linearly as one would hope. Naturally, that investigation and its results provoked a great deal of discussion at the time and since. Typically, SCons advocates fall back on one particular argument: “Sure, SCons may be slow,” they say, “but that’s the price you pay for a correct build.” Recently, Eric S. Raymond wrote an article espousing this same fundamental argument, with the addition of some algorithmic analysis intended to prove mathematically that a correct build, regardless of the build tool, must necessarily exhibit O(n^2) behavior — a clever bit of circular logic, because it implies that any build tool that does not have such abysmal performance must not produce correct builds!

Naturally, after spending nearly a decade developing a high-performance replacement for GNU make, I couldn’t let that statement stand. This post is probably going to be on the long side, so here’s the tl;dr summary:

  • You can guarantee correct builds with make, provided you follow best practices.
  • The worst-case runtime of any build tool is, of course, O(n^2), but most, if not all, builds can be handled in O(n) time, without sacrificing correctness.
  • SCons’ performance problem is caused by design and implementation decisions in SCons, not some pathology of build structure.

What is required to ensure a correct build?

One of the fundamental tenets of the pro-SCons mythos is the idea that it is unique in its ability to guarantee correct builds. In reality, SCons is not doing anything particularly special in this regard. It’s true that by virtue of its design SCons makes it easier to get it right, but there’s nothing keeping you from enjoying the same assurances in make.

First: what is a correct build? Simply put, a correct build is one in which everything that ought to be built, is built. Note that by definition, a from-scratch build is correct, since everything is built in that case. So the question of “correct” or “incorrect” is really only relevant in regards to incremental builds.

So, what do we need in order to ensure a correct incremental build? Only three things, actually:

  1. A single, full-build dependency graph.
  2. Complete dependency information for every generated file.
  3. A reliable way to determine if a file is up-to-date relative to its inputs.

What SCons has done is made it more-or-less impossible, by design, to not have these three things. There is no concept like recursive make in the SCons world, so the only option is a single, full-build dependency graph. Likewise, SCons automatically scans input files in several programming languages to find dependency information. Finally, SCons uses MD5 checksums for the up-to-date check, which is a pretty darn reliable way to verify whether a given file needs to be rebuilt.

But the truth is, you can guarantee correct builds with make — you just have to adhere to long-standing best practices for make. First, you have to avoid using recursive make. Then, you need to add automatic dependency generation. The only thing that’s a little tricky is the up-to-date check: make is hardwired to use file timestamps, which can be spoofed, deliberately or accidentally — although to be fair, in most cases, timestamps are perfectly adequate. But even here, there’s a way out. You can use a smarter version of make that has a more sophisticated up-to-date mechanism, like ElectricMake or ClearMake. You can even shoehorn MD5 checksums into GNU make, if you like.

I can’t deny that SCons has made it easier to get correct builds. But the notion that it can’t be done with make is simply absurd.

What is the cost of a correct build?

Now we turn to the question of the cost of ensuring correctness. At its core, any build tool is just a collection of graph algorithms — first constructing the dependency graph, then traversing it to find and update out-of-date files. These algorithms have well-understood complexity, typically given as O(n + e), where n is the number of nodes in the graph, and e is the number of edges. It turns out that e is actually the dominant factor here, since it is at least equal to n, and at worst as much as n^2. That means we can simplify the complexity to O(n + n^2), or just O(n^2).

Does this absolve SCons of its performance sins? Unfortunately it does not, because O(n^2) is a worst-case bound — you should only expect O(n^2) behavior if you’ve got a build that has dependencies between every pair of files. Think about that for a second. A dependency between every. pair. of. files. Here’s what that would look like in makefile syntax:

all: foo bar foo.c bar.c foo.h bar.h
foo:     bar foo.c bar.c foo.h bar.h
bar:         foo.c bar.c foo.h bar.h
foo.c:             bar.c foo.h bar.h
bar.c:                   foo.h bar.h
foo.h:                         bar.h

It’s ridiculous, right? I don’t know about you, but I’ve certainly never seen a build that does anything even remotely like that. In particular, the builds I used in my benchmarks don’t look like that. Fortunately, those builds are small and simple enough that we can directly count the number of edges in the dependency graph. For example, the smallest build in my tests consisted of:

2,000 C sources
+ 2,004 headers
+ 2,000 objects
+ 101 libraries
+ 100 executables

6,205 total files

So we have about 6,000 nodes in the graph, but how many edges does the graph contain? Lucky for us, SCons will print the complete dependency graph if we invoke it with scons --tree=all:

+-.
  +-SConstruct
  +-d1_0
  | +-d1_0/SConstruct
  | +-d1_0/f00000_sconsbld_d1_0
  | | +-d1_0/f00000_sconsbld_d1_0.o
  | | | +-d1_0/f00000_sconsbld_d1_0.c
  | | | +-d1_0/lup001_sconsbld_d1_0/f00000_sconsbld_d1_0.h
  ...

The raw listing contains about 35,000 lines of text, but that includes duplicates and non-dependency information like filesystem structure. Filter that stuff out and you can see the graph contains only about 12,000 dependencies. That’s a far cry from the 1,800,000 or so you would expect if this truly were a “worst-case” build. It’s clear, in fact, that the number of edges is best described as O(n).

Although I don’t know how (or even if it’s possible) to prove that this is the general case, it does make a certain intuitive sense: far from being strongly-connected, most of the nodes in a build dependency graph have just one or two edges. Each C source file, for example, has just one outgoing edge, to the object file generated from that source. Each object file has just one outgoing edge too, to the library or executable the object is part of. Sure, libraries and headers probably have more edges, since they are used by multiple executables or objects, but the majority of the stuff in the graph is going to fall into the “small handful of edges” category.

Now, here’s the $64,000 question: if the algorithms in a build tool scale in proportion to the number of edges in the dependency graph, and we’ve just shown that the dependency graph in question has O(n) edges, why does SCons use O(n^2) time to execute the build?

Why is SCons so slow?

SCons’ O(n^2) performance stems from its graph traversal implementation. Essentially, SCons scans the entire dependency graph each time it is looking for a file to update. n scans of a graph with O(n) nodes and edges equals an O(n^2) graph traversal. There’s no mystery here. In fact, the SCons developers are clearly aware of this deficiency, as described on their wiki:

It’s worth noting that the Jobs module calls the Taskmaster once for each node to be processed (i.e., it’s O(n)) and the Taskmaster has an amortized performance of O(n) each time it’s called. Thus, the overall time is O(n^2).

But despite recognizing this flaw, they severely misjudged its impact, because they go on to state that it requires a “pathological” dependency graph in order to elicit this worst-case behavior from SCons. As we’ve shown here and in previous posts, even a terribly mundane dependency graph elicits O(n^2) behavior from SCons. I shudder to think what SCons would do with a truly pathological dependency graph!

Obviously the next question is: why does SCons do this? That’s not quite as easy for me to explain, as an outside observer. To the best of my understanding, they rescan the graph just in case new dependencies are added to the dependency graph while evaluating a node in the graph — remember, in SCons the commands to update a file are expressed in Python, so they can easily manipulate the dependency graph even while the build is running.

Is it really necessary to rescan the dependency graph over and over? I don’t think so. In fact, make is proof that it is not necessary. I think there are two ways that SCons could address this problem: first, it could adopt GNU make’s convention of partitioning the build into distinct phases, one that updates dependency information, and a second that actually executes the build. In GNU make, that strategy allows for the introduction of new dependency information, while imposing only a one-time O(n) cost for restarting the make process if any new dependencies are found.

Alternatively, SCons could probably be made smarter about when a full rescan is required. Most of the time, even if new dependencies are added to the graph, they are added to the node being evaluated, not to nodes that were already visited. That is, when you scan a source file for implicit dependencies, you find the dependencies for that file not for other files in the build (duh). So most of the time, a full rescan is massive overkill.

The final word…?

Hopefully this is my last post on the subject of SCons performance. It is clear to me that SCons does not scale to large projects, and that the problem stems from design and implementation decisions in SCons, rather than some pathology in the build itself. You can get comparable guarantees of correctness from make, if you’re willing to invest the time to do things the right way. The payoff is a build system that is not only correct but has vastly better performance than SCons as your project grows. Why wouldn’t you want that?

HOWTO: ship a custom kernel driver for Linux

Pop quiz, hotshot: your company has developed a Linux kernel driver as part of its product offering. How do you deliver this driver such that your product is compatible with a significant majority of the Linux variants you are likely to encounter in the field? Consider the following:

  • RedHat Enterprise Linux 4 is based on kernel version 2.6.9
  • RHEL 5 is based on kernel version 2.6.18
  • RHEL 6 is based on 2.6.32
  • openSUSE 11.0 is based on 2.6.25
  • openSUSE 11.1 is based on 2.6.27
  • Ubuntu 9.04 is based on 2.6.28
  • Ubuntu 9.10 is based on 2.6.31
  • Ubuntu 10.04 is based on 2.6.32
  • Ubuntu 10.10 is based on 2.6.35

I could go on, but hopefully you get the point — “Linux” is not a single, identifiable entity, but rather a collection of related operating systems. And thus the question: how do you ship your driver such that you can install and use it on a broad spectrum of Linux variants? This is a problem that I’ve had to solve in my work.

Fundamentally, the solution is simple: ship the driver in source form. But that answer isn’t much help unless you can make your driver source-compatible with a wide range of kernel versions, spanning several years of Linux development. The solution to that problem is simple too, in hindsight, and yet I haven’t seen it used or described elsewhere: test for specific kernel features using something like a configure script; set preprocessor macros based on the results of the tests; and use the macros in the driver source to conditionally include code as needed. But before I get into the details of this solution, let’s look briefly at a few alternative solutions and why each was rejected.

Rejected alternatives: how NOT to ship a custom driver for Linux

Based on my informal survey of the state-of-the-art in this field, it seems there are three common approaches to solving this problem:

  1. Arrange for your driver to be bundled with the Linux kernel. If you can pull this off, fantastic! You’ve just outsourced the effort of porting your driver to the people who build and distribute the kernel. Unfortunately, kernel developers are not keen on bundling drivers that are not generally useful — that is, your driver has to have some utility outside of your specific application, or you can forget getting it bundled into the official kernel. Also, if you have any interesting IP in your driver, open-sourcing it is probably not an option.
  2. Prebuild your driver for every conceivable Linux variant. If you know which Linux variants your product will support, you could build the driver for each, then choose one of the prebuilt modules at installation time based on the information in /etc/issue and uname -r. VMware uses this strategy — after installing VMware Workstation, take a look in /usr/lib/vmware/modules/binary: you’ll find about a hundred different builds of their kernel modules, for various combinations of kernel versions, distributions and SMP-status. The trouble with this strategy is that it adds significant complexity to your build and release process: you need a build environment for every one of those variants. And all those modules bloat your install bundle. Finally, no matter how many distros you prebuild for, it will never be enough: somebody will come along and insist that your code install on their favorite variant.
  3. Ship source that uses the LINUX_VERSION_CODE and KERNEL_VERSION macros. These macros, defined by the Linux kernel build system, allow you to conditionally include code based on the version of the kernel being built. In theory this is all you need, if you know which version introduced a particular feature. But there are two big problems. First, you probably don’t know exactly which version introduced each feature. You could figure it out with some detective work, but who’s got the time to do that? Second, and far more troublesome, most enterprise Linux distributions (RHEL, SUSE, etc.) backport features and fixes from later kernels to their base kernel — without changing the value of LINUX_VERSION_CODE. Of course that renders this mechanism useless.

genconfig.sh: a configure script for kernel modules

Conceptually, genconfig.sh works the same way as an autoconf configure script: it uses a series of trivial test programs to check for different kernel features or constructs. The success or failure of each test to compile determines whether the corresponding feature is present, and by extension whether or not a particular bit of code ought to be included in the driver.

For example, in some versions of the Linux kernel (such as 2.6.9), struct inode includes a member called i_blksize. If present, this field should be set to the blocksize of the filesystem that owns the inode. It’s used in the implementation of the stat(2) system call. It’s a minor detail, but if you’re implementing a filesystem driver, it’s important to get it right.

We can determine whether or not to include code for this field by trying to compile a trivial kernel module containing just this code:

#include <linux/fs.h>
void dummy(void)
{
    struct inode i;
    i.i_blksize = 0;
    return;
}

If this code compiles, then we know to include code for managing the i_blksize field. We can create a header file containing a #define corresponding to this knowledge:

#define HAVE_INODE_I_BLKSIZE

Finally, the driver code uses that definition:

#ifdef HAVE_INODE_I_BLKSIZE
  inode->i_blksize = FS_BLOCKSIZE;
#endif

We can construct an equally trivial test case for each feature that is relevant to our driver. In the end we get a header with a series of defines, something like this:

#define HAVE_INODE_I_BLKSIZE
#define HAVE_3_ARG_INT_POSIX_TEST_LOCK
#define HAVE_KMEM_CACHE_T
#define HAVE_MODE_IN_VFS_SYMLINK
#define HAVE_3_ARG_PERMISSION
#define HAVE_2_ARG_UMOUNT_BEGIN
#define HAVE_PUT_INODE
#define HAVE_CLEANUP_IN_KMEM_CACHE_CREATE
#define HAVE_WRITE_BEGIN
#define HAVE_ADDRESS_SPACE_OPS_EXT
#define HAVE_SENDFILE
#define HAVE_DENTRY_IN_FSYNC

By referencing these definitions in the driver source code, we can make it source-compatible with a wide range of Linux kernel versions. To add support for a new kernel, we just have to determine which changes affect our module, write tests to check for those features, and update only the affected parts of our driver source.

This is more nimble, and far more manageable, than shipping prebuilt binaries for an endless litany of kernel variants. And it’s much more robust than relying on LINUX_VERSION_CODE: rather than implicitly trusting that a feature is present or absent based on an unreliable version string, we know for certain whether that feature is present, because we explicitly tried to use it.

Belt and suspenders: ensuring the driver works correctly

Now we have a strategy for shipping a driver that will build and load on a broad array of Linux variants. But this approach has introduced a new problem: how can we be sure that this driver that was just auto-configured and compiled on-the-fly will actually work as expected?

The solution to this problem has two components. First, we identified about a dozen specific Linux variants that are critical to our customers. The driver is exhaustively tested on each of these “tier 1” variants in every continuous integration build — over 3,000 automated unit tests are run against the driver on each. Of course, 12 variants is only a tiny fraction of the thousands of permutations that are possible, but by definition these variants represent the most important permutations to get right. We will know immediately if something has broken the driver on one of these variants.

Next, we ship a stripped down version of that unit test suite, and execute that automatically when the driver is built. This suite has only about 25 tests, but those tests cover every major piece of functionality — a reasonable compromise between coverage and simplicity. With this install-time test suite, we’ll know if there’s a problem with the driver on a particular platform as soon as somebody tries to install it.

Demonstration code

For demonstration purposes I have placed a trivial filesystem driver on my github repo. This driver, base0fs, was generated using the FiST filesystem generator, patched to make use of the genconfig.sh concept.

Eternal vigilance and multithreaded programming

Given the following:

  • A multithreaded application written in C++, using the STL.
  • A class Mutex, which is a thin wrapper around a pthread_mutex_t.
  • A class Lock, which implements the RAII pattern for acquiring and releasing a Mutex.
  • A class FileInfo, which stores information about a file. The getSize() method retrieves the current size of the file. Because another thread may be changing the size of the file, getSize() is internally locked.
  • A class Cache, which manages a map of filenames to FileInfo objects.

Can you spot the bug in this bit of code?

class Cache {
protected:
    typedef std::map<string, FileInfo *> map_type;

    Mutex mLock;
    map_type mMap;

public:
    void insert(const string& filename, FileInfo *info)
    {
        // Add a new file to the cache.

        Lock lock(mLock);
        mMap.insert(make_pair(filename, info));
    }

    void invalidate(const string& filename)
    {
        // Eject a file from the cache; this only
        // invalidates the cache entry, it does not
        // delete the associated FileInfo.

        Lock lock(mLock);
        map_type::iterator i = mMap.find(filename);
        if (i != mMap.end()) {
            mMap.erase(i);
        }
    }

    int64_t getSize(const string& filename)
    {
        // Get the current size of a file in the cache.

        map_type::iterator i;

        {
            // Grab the cache lock for the lookup.

            Lock lock(mLock);
            i = mMap.find(filename);
            if (i == mMap.end()) {
                return 0;
            }
        } // release cache lock

        // Relay the getSize() request to the FileInfo
        // object; this must happen after releasing
        // the cache lock, because FileInfo::getSize() 
        // can block.

        return i->second->getSize();
    }
};

Even if you’re familiar with the STL and multithreaded programming, the defect can be hard to spot. In fact, this one escaped my notice until a recent mysterious core dump alerted me to its presence, despite having read this code several times for other reasons.

Here’s the bug: in Cache::getSize(), the iterator i is accessed outside the protection of the Cache lock. You may think that this is safe, since the lookup occurs inside the lock. But suppose we have two threads running simultaneously, one calling getSize(“foo”), and the other calling invalidate(“foo”):

Thread A: getSize(“foo”)                  Thread B: invalidate(“foo”)

{
    Lock lock(mLock);
    i = mMap.find(filename);
    if (i == mMap.end()) {
        return 0;
    }
} // release cache lock
                                          Lock lock(mLock);
                                          i = mMap.find(filename);
                                          if (i != mMap.end()) {
                                              mMap.erase(i);
                                          }
return i->second->getSize();

By the time Thread A uses the iterator returned by map::find(), the call to invalidate() has, well, invalidated it. I suspect that the author of this code remembered that map::erase() invalidates the iterator used in the erase() itself, but not the corollary that map::erase() also invalidates any other iterator that references the same element.

If you think about how std::map is implemented, it’s obvious that erase() should have this property. Internally, std::map usually uses some type of balanced tree. With that foundation, the simplest implementation of std::map::iterator is just a thin wrapper around a pointer to a node in the tree. map::erase() must delete that node (otherwise, when would it be deleted?), so accessing an iterator after it has been erase’d is literally a read of freed memory, an obvious memory management error.

In this case, the solution is simple: just extract the FileInfo pointer from the iterator before releasing the Cache lock, as follows:

    int64_t getSize(const string& filename)
    {
        // Get the current size of a file in the cache.

        FileInfo *info;

        {
            // Grab the cache lock for the lookup.

            Lock lock(mLock);
            map_type::iterator i = mMap.find(filename);
            if (i == mMap.end()) {
                return 0;
            }
            info = i->second;
        } // release cache lock

        // Relay the getSize() request to the FileInfo
        // object; this must happen after releasing
        // the cache lock, because FileInfo::getSize() 
        // can block.

        return info->getSize();
    }

Eternal vigilance

The interesting thing about this bug is that it would have been nearly impossible to detect with testing alone. This is why code reviews are so important in our profession, and why projects that really demand bug-free code rely so heavily on them. As they say, “Eternal vigilance is the price of multithreaded programming.” With that, I’m off to review some more code. Shouldn’t you?