6 tips for writing robust, maintainable unit tests

Unit testing is one of the cornerstones of modern software development, but there’s a surprising lack of advice about how to write good unit tests. That’s a shame, because bad unit tests are worse than no unit tests at all. Over the past decade at Electric Cloud, I’ve written thousands of tests — a full build across all platforms runs over 100,000 tests. In this post I’ll share some tips for writing robust, maintainable tests. I learned these the hard way, but hopefully you can learn from my mistakes.

A key focus here is on eliminating so-called “flaky tests”: those that work almost all the time, but fail once-in-a-great-while for reasons unrelated to the code under test. Such unreliable tests erode confidence in the test suite and even in the value of unit testing itself. In the worst case, a history of failures due to flaky tests can cause people to ignore sporadic-but-genuine test failures, allowing rarely seen but legitimate bugs into the wild.

If that’s not enough to convince you to eliminate flaky tests, consider this: suppose each of your flaky tests fails just one time out of every 10,000 runs, and that you have a thousand such tests overall. At that rate the chance of a completely clean run is 0.9999^1000, or roughly 90%, so about 10% of your CI builds will fail due to flaky tests alone.

Now that I’ve got your attention, let’s see how to write better tests. Got some tips of your own? Add them in the comments!

1. Never use hardcoded network port numbers.

The software I write involves network communication, which means that many of the tests create network sockets. In pseudo-code, a test for the server component looks something like this:

  1. Cleanup the previous server instance (if any).
  2. Instantiate the server on port 6003.
  3. Connect to the server via port 6003.
  4. Send some message to the server via the socket.
  5. Assert that the response received is correct.

At first glance this seems pretty safe: no standard service uses port number 6003, so there won’t be any contention with third parties for that port, and by front-loading the cleanup of the server we ensure that there won’t be any contention with our other tests for the port. And, of course, it (seems to) work! In fact, it probably works almost every time you run it. But rest assured: one day, seemingly at random, this test will fail.

For us the failures started after literally years and tens of thousands of executions. We never saw the same test fail twice, but the failure mode was always the same: “Only one usage of each socket address (protocol/network address/port) is normally permitted.” I wish I could say with certainty why the failures started happening — I suspect that our anti-virus software transiently grabs a dynamic port, and sometimes it just happens to grab the port that we planned to use.

Fortunately the solution is simple: instead of using a hardcoded port number, use a dynamic port assigned by the operating system. Usually that means binding to port number zero. This is slightly less convenient than a hardcoded port number, because you’ll have to query the socket after it’s bound to determine the actual port number, but that’s a small price to pay to be confident you’ll never have a spurious test failure due to this mistake.
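
For example, here is a minimal sketch of the idea in Python (any sockets API that lets you bind to port zero and then query the bound address works the same way):

import socket

# Let the OS choose a free port, then ask which port it actually assigned.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))      # port 0 means "any available port"
server.listen(1)
port = server.getsockname()[1]     # the dynamically assigned port number
# ... hand `port` to the client side of the test ...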

2. Never use “sleep()” to synchronize test threads.

A common mistake when writing tests that involve multiple threads is to attempt to use sleep() to synchronize events in different threads. For example, you may have a server thread which opens a socket and waits for a connection. You want to test the behavior of the server thread, but you have to be sure that the socket is opened and ready to accept connections before you try to connect to it. A quick-and-dirty approach might look like this:

  1. Start server thread.
  2. Sleep a bit to give the thread time to get started.
  3. Open a connection to the server port.

There are some problems with this strategy. If you set the delay too short, occasionally the test will fail because the server socket won’t be ready — when you try to open the client-side connection you’ll get a connection refused error. Conversely, you can make the test fairly reliable by setting the delay very long — multiple seconds — but then the test will take at least that long to run, every time you run it.

Instead you should use a condition variable to synchronize the two threads. If that is not practical or not possible, a retry loop is a decent alternative. In that case, make sure that your loop doesn’t retry so frantically that it starves the other thread of CPU cycles to make progress. I like to use an exponential backoff, so initially I get a few retries at very short intervals, then the intervals become progressively longer to give the other thread more time to work:

import socket
import time

delay = 20                     # initial retry interval, in milliseconds
total = 0
while total < 5000:            # give up after 5 seconds overall
    try:
        # server_port is the port the server under test is listening on
        socket.create_connection(("localhost", server_port), timeout=1).close()
        break                  # the server socket is ready
    except OSError:
        time.sleep(delay / 1000.0)
        total += delay
        delay += delay         # exponential backoff: double the interval each time
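
For the condition-variable approach, here is a minimal sketch using Python’s threading.Event; the server function below is a stand-in for whatever your test actually starts:

import threading

ready = threading.Event()

def server():
    # ... open the listening socket here ...
    ready.set()                    # signal that the socket is accepting connections
    # ... accept and handle the test connection ...

threading.Thread(target=server, daemon=True).start()
if not ready.wait(timeout=5.0):    # block until signaled, with an upper bound
    raise AssertionError("server thread never became ready")
# now it is safe to connect to the server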

3. Never rely on timing data to verify behavior.

Another mistake is using the observed duration of an operation to assert that a particular behavior has been implemented. For example, to test that a socket read operation times out if no data is received within a set period of time, you might write test code like this:

  1. Set socket timeout to 200ms
  2. Mark start time
  3. Attempt to read from socket
  4. Mark end time
  5. Assert that end minus start is between 200ms and 250ms

The problem is that your test runner might get put to sleep between steps 2 and 3, and again between steps 3 and 4 — there’s just no way to control what else might be happening on the computer executing the tests. That means that the delta between the times could be significantly higher than the 200ms you expect. Depending on your hardware and timers, it could even be less than 200ms, despite your code being implemented correctly!

There are at least two robust alternatives to implement this test. The first is to add a counter to the code under test, to be incremented whenever the read operation times out. In the test you could check the value before and after the attempted read, and reasonably expect the value to have increased by exactly one. The second approach is to forgo the explicit validation altogether — if the test completes, then the timeout must be operating correctly. If the test hangs, then the timeout must not be operating.
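
Here is a minimal sketch of the counter-based approach in Python, assuming the object under test exposes a hypothetical timeouts counter:

# `conn` is the object under test; `conn.timeouts` is a hypothetical counter
# that the implementation increments whenever a read operation times out.
before = conn.timeouts
data = conn.read()                 # nothing is sent, so this should time out
assert conn.timeouts == before + 1, "read() did not take the timeout path"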

4. Never rely on implicit file timestamps.

If your software is sensitive to file timestamps, such as for determining whether file X is newer than file Y, then avoid trusting implicit timestamps in your tests. For example, you might write test code like this:

  1. Create “old” file Y.
  2. Create “new” file X.
  3. Verify that the unit under test behaves correctly given that X is newer than Y.

Superficially this looks sound: X is created after Y, so it should have a timestamp later than Y. Unless, of course, it doesn’t, which can happen for a variety of odd reasons. For example, the system clock might get adjusted underneath your feet, due to NTP or another time-synchronization system. I’ve even seen cases where the file creations happen so quickly that the two files effectively have identical timestamps!

The fix is simple: explicitly set the timestamps on the files to ensure that the relationship is as expected. Generally you should specify a difference of at least 2 full seconds, to accommodate filesystems that have very low resolution timestamps. You should also avoid relying on subsecond timestamp resolution, again to accommodate filesystems that don’t support very high-resolution timestamps.
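
In Python, for instance, a sketch of the explicit-timestamp approach might look like this (the file names are arbitrary):

import os
import time

# Create both files, then force the timestamps so that Y is clearly older than X.
open("y.dat", "w").close()                  # "old" file Y
open("x.dat", "w").close()                  # "new" file X
now = time.time()
os.utime("y.dat", (now - 10, now - 10))     # (atime, mtime) 10 seconds in the past
os.utime("x.dat", (now, now))               # X gets the current time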

5. Include diagnostic information in test assertions.

You can save yourself a lot of trouble by arranging your test assertions so that they provide detailed information about any failures, rather than simply telling you whether the assertion passed. For example, here’s an unhelpful assertion:

CPPUNIT_ASSERT(errors.empty());

If the errors variable contains error text, the assertion will fail — but you’ll have no idea what the errors were, and thus you’ll have no idea what went wrong in the test. In contrast, here’s an informative assertion:

CPPUNIT_ASSERT_EQUAL(string(""), errors);

Now if the assertion fails the test harness will show you the value of errors, so you’ll have some useful information to start your debugging.
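
The same principle applies in other frameworks. In Python’s unittest, for example, assertEqual reports both the expected and actual values on failure, so any error text appears directly in the test output (run_the_thing_under_test is a hypothetical helper):

import unittest

class ErrorReportTest(unittest.TestCase):
    def test_no_errors(self):
        errors = run_the_thing_under_test()   # hypothetical helper
        # assertEqual shows both values when it fails, so the error text
        # lands right in the test report.
        self.assertEqual("", errors)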

6. Include an explanation of the test in the comments.

Finally, don’t forget to put comments in your test code! Explain how the test works, and why you believe the test actually exercises the feature that you think it tests. It may seem a bit tedious, but remember that the rest of your team may not be as familiar with the code as you are, and they may not know what steps are needed to elicit a particular response from the code. For that matter, in a few months or years you may not remember how to test what you’re trying to test. Such comments are invaluable when updating tests after a refactoring, to understand how the test should be adjusted, as well as when debugging — to understand why a test failed to expose a defect. Here’s an example:

# Test methodology: create a Foo object, then try to set
# the Froznitz attribute to 5. This should produce an error
# because 5 is not a valid Froznitz value. See that the right
# type of exception is thrown, and that the error text is
# correct.

Summary

Everybody knows it’s important to write unit tests. Following the suggestions here will help make sure that your tests are reliable and maintainable. If you have tips of your own, add them in the comments!

The ElectricAccelerator 6.2 “Ship It!” Award

Obviously with ElectricAccelerator 6.2 out the door, it’s time for a new “Ship It!” award. I picked the mechanic figure for this release because the main thrust of the release was to add some long-desired robustness improvements. Here’s the trading card that accompanied the figure:

Greased Lightning — it’s electrifyin’!

Loaded with metrics and analysis goodness!

As promised this iteration of the award includes some metrics comparing this release to previous releases:

  • Number of days in development. We spent 112 days working on the 6.2 release; the range of all feature releases is 80 days on the low end to 249 days at the high end.
  • JIRA issues closed. We closed out 92 issues with this release, including both defects and enhancement requests. The fewest we’ve done was 9 issues; the most was 740 issues.
  • Composition of issues. Of the 92 issues, about 55% were classified as “defects”, and the remaining 45% were “features” of varying magnitude.

Again, a big “Thank You!” goes out to the ElectricAccelerator team! I’m really excited to be working with such a talented group, and I can’t wait to show the world what we’re doing next!

The ElectricAccelerator 6.1 “Ship It!” Award

Having shipped ElectricAccelerator 6.1, I thought you might like to see the LEGO-based “Ship It!” award that I gave each member of the development team. I started this tradition with the 6.0 release last fall. Here’s the baseball card that accompanied the detective minifig I chose for this release:

The great detective is on the case!

The Accelerator 6.1 team

I picked the detective minifig for the 6.1 release in recognition of the significant improvements to Accelerator’s diagnostic capabilities (like cyclic redundancy checks to detect faulty networks, and MD5 checksums to detect faulty disks). Compared to the 6.0 award not much has changed in the design, although I did get my hands on the “official” corporate font this time. It strikes me that there’s a lot of wasted space on the back of the card though. Next time I’ll make better use of the space by incorporating statistics about the release. I actually have the design all ready to go, but you’ll have to wait until after the release to see it. Don’t fret though, the 6.2 release is expected soon!

Makefile hacks: automatically split long command lines

If you’ve worked on a large build system you’ve probably bumped into this error, or one like it:

gmake: execvp: /bin/sh: Argument list too long

This error means the length of some command-line in your makefile has grown past the system limit, which is typically in the 32 to 256 kilobyte range. It’s surprisingly easy to hit that limit. You start with a small list of object files to be linked together. Over time you add more, and the command-line gets a little longer. Add a few more and it gets longer still. Before you know it you have a monster command-line and your build starts failing.

The solution to this problem is simple: split the long command-line into several shorter command-lines. For example, ar r libraries/lib.a objects/foo.o objects/bar.o objects/baz.o objects/boo.o objects/bang.o becomes something like this:

ar r libraries/lib.a objects/foo.o objects/bar.o
ar r libraries/lib.a objects/baz.o objects/boo.o
ar r libraries/lib.a objects/bang.o

Simple in theory, but tedious to do by hand. And doing it manually is like putting a ticking time-bomb into your makefile — it’s only a matter of time before your build grows enough that you have to go through this exercise again.

I recently ran across a clever solution that exploits the $(eval) function in GNU make to split long command-lines automatically, eliminating the tedium and the time-bomb. After I show you the solution, I’ll explain it piece-by-piece.

The max_args function

The solution is a user-defined function called max_args that splits long command-lines into equal-length chunks:

define max_args
$(eval _args:=)
$(foreach obj,$3,$(eval _args+=$(obj))$(if $(word $2,$(_args)),$1$(_args)$(EOL)$(eval _args:=)))
$(if $(_args),$1$(_args))
endef
define EOL


endef

And an example of its use:

OBJS:=a b c d e f g h
all:
	@$(call max_args,echo,2,$(OBJS))

The max_args function takes three parameters: the base command-line, the number of arguments per “chunk”, and the complete list of arguments. It expands to a series of command-lines — one for each chunk of arguments. In the example above, the call expands to four command-lines: echo a b, echo c d, echo e f, and echo g h.

The trick behind max_args is the use of $(eval) to update a variable as a side-effect of gmake’s regular variable expansion activity. If you’re not familiar with gmake variable expansion, here’s a quick rundown: when gmake finds a variable or function reference, like $(something), it replaces the entire reference with an expanded value. In the case of a variable, that’s just the value of the variable. Most variables in gmake are recursive, which means that if the variable value itself contains embedded variable references, those will be expanded as well, recursively. In the case of a function, gmake evaluates the function and replaces the reference with the computed value.

The meat of max_args is on line 3. It starts with the $(foreach) function, which evaluates its third argument, the body of the loop, once for each word in its second argument — in this case, the list of objects passed in the call to max_args.

In max_args, the loop body has two components. The first is a call to $(eval), which simply appends the current value of the loop variable to an accumulator called _args.

The second component of the loop body uses $(if) and $(word) to check the length of _args. The $(word) function returns the nth word from a list, or an empty string if there are fewer than n words in the list. The $(if) function expands its second argument (the then clause) only if its first argument (the condition) expands to a non-empty string, so together these functions check if _args has the desired number of words, and if so the then clause of the $(if) is expanded.

The then clause of this $(if) has two components. The first constructs a completed command-line by concatenating the base command-line (given by $1, the first argument to the original max_args call), the accumulated arguments, and a newline character. Thanks to the rules of gmake expansion, this command-line is added to the overall expansion result for the max_args function. The second part of the then clause uses $(eval) to reset the accumulator.

If the chunk size does not evenly divide the number of arguments, the stragglers are emitted in a final command-line on the last line of max_args.

Limitations

max_args is handy, but it has one significant limitation: command-line length limits are based on the number of bytes in the command-line, not the number of words in it. Unfortunately, gmake has no built-in way to count the number of characters in a string; it does provide the $(words) built-in, so that’s what max_args uses. That just means that to use it effectively you have to guess how many arguments will fit in a single command-line, for example by dividing the length limit by the average number of characters in each argument, then subtracting a bit to leave a buffer for outliers. With a 128KB limit and arguments averaging 32 characters, that works out to about 4,096 arguments per chunk, less the buffer.

6 reasons your development team should be using instant messaging

The ElectricAccelerator development team sits at desks less than 30 feet apart, but despite our close proximity, we don’t often speak to one another. To an outside observer this may seem to be a sign of dysfunction in the team — after all, developers have to communicate to work effectively. Some people think we’re obviously not communicating, but the truth is that we’re not obviously communicating! That’s because we use instant messaging for most of our communications, including status updates, technical collaboration and even code reviews, rather than face-to-face conversations. I believe this has made my team more connected and more productive. Here are six reasons why instant messaging trumps face-to-face conversations for software teams.

1. Logging

The key advantage of instant messaging is that all conversations are logged automatically. As a result I’ve got records of every conversation with every member of my team for the past two years. That’s proven invaluable on a few occasions, to provide additional context for decisions made weeks or months earlier. Obviously this is not a replacement for other types of project documentation, but it is a fantastic supplement.

2. Non-intrusive

The second most important advantage of instant messaging is that it’s relatively non-intrusive, at least compared to a face-to-face conversation. We all know how important it is to get into and preserve a state of flow when programming. Spoken conversations, by social convention, command your immediate attention — effectively an interrupt of the highest order. When somebody comes to my desk to ask me something in person, they are implicitly saying, “What I have to say to you is more important than anything else you might be doing right now.” Sometimes that’s true, but many times it’s not. And yet every time somebody initiates a face-to-face conversation with me, it destroys whatever flow I might have developed.

In contrast, instant messaging allows me to defer a response until I reach a good breaking point, so people can ask questions without interrupting me.

3. Non-disruptive

Our office has an open floor plan, which means that instead of individual offices or cubicles, we have a single big room. This layout worked very well when the company had only 6 people, who were all working on the same project. Now the company employs over 100 people, with two separate development teams working on completely different products, so the open layout doesn’t work quite so well. Conversations between other people can be very distracting when you’re heads down on a tricky technical problem. By using instant messaging instead of face-to-face conversations, we significantly reduce the distraction for our colleagues.

4. Simultaneous conversations

Carrying on multiple face-to-face conversations on disparate topics is practically impossible, but doing the same via instant messenger is simple. Every IM client I’ve seen displays the last several messages of each active conversation, so you have context when a new message arrives. That significantly reduces the mental burden associated with each conversation, so it becomes possible to sustain several simultaneously. I often have five conversations “active” during the work day, and sometimes even more.

5. Consistency

Unlike face-to-face conversations, IM works well regardless of the relative locations of the conversants. That means that it doesn’t matter if my colleague is in the office with me, or working from home, or working from a customer site, or halfway around the world. I can use the same tool to communicate with them, which in turn means I don’t have to change the way I work to accommodate changes in the way they are working.

6. Versatility

One final advantage of instant messaging compared to face-to-face conversation is the versatility of the medium. I can trivially share a code fragment with somebody via IM, or a link to an online resource. Try doing that in a face-to-face conversation: “Yeah, you should check out the STL reference docs, at aich tee tee pee colon slash slash double you double you double you dot …”.

Instant messaging: give it a try

If you’re not already using instant messaging in your development team, give it a try. There are multiple free IM services out there, and there are good free IM clients on every platform, including smart phones, so you’ve really got nothing to lose — but you might gain a more efficient, productive team. It worked for us.

LEGO “Ship It!” Awards

Scriptics Connect 1.1 "Ship It!" Award

What am I supposed to do with this?


When we wrapped up the ElectricAccelerator 6.0 release recently, I wanted to give my teammates something to commemorate the release. Traditionally these are called “Ship It!” awards, and they often take the form of a Lucite plaque or trophy, or even a physical copy of the product (on DVD or CD, for example) locked inside an acrylic block. I’ve gotten a couple of those over the years, and honestly I think they’re kind of a waste. The last one I got went on a shelf to collect dust for a few years before being relocated to the trash heap, which is a shame because those things are expensive. Really expensive. I can’t even imagine the cost of the monster awards that Microsoft gives out.

So I don’t really like the usual embodiment of the “Ship It” award, but I do really like the underlying idea. After all, shipping a software release is a significant accomplishment, the culmination of months or even years of effort by a team of smart individuals. And unlike many other human endeavors, there’s nothing tangible when you’re finished — no bridge spanning the bay nor tower reaching to the heavens. Having something to commemorate the accomplishment seems fitting, and it’s another small way that I can show my appreciation for everybody’s contributions.

The Ideal “Ship It!” Award

To me, the ideal “Ship It!” award has the following attributes:

  • Themeable: I wanted something I could customize for each release, while maintaining consistency across releases. I plan to make this a tradition.
  • Inexpensive: I wanted something I could bankroll myself, so I could retain complete creative control.
  • Compact: I wanted something that wouldn’t take up much space, so it would be portable and easy to display.
  • Geek appeal: I wanted something that my teammates would think is cool. Chunks of Lucite just don’t cut it.

LEGO “Ship It!” Awards

LEGO race car driver minifig

The winner!


After a few days of idle brainstorming and bouncing ideas off my manager and co-conspirator, I had what seemed like a great idea: LEGO minifigs. I could get a bunch of a specific LEGO minifig and give one to each person on the team. It fit all my criteria. There have been over 4,000 different minifigs released since 1978, according to The Cult of LEGO. In the last two years alone LEGO has released five minifig packs, each with 16 completely new figures, so I can count on having a unique character for every feature release for the next several years. Minifigs are cheap, too — the majority can be bought for as little as a couple dollars each on Amazon or eBay. They’re obviously small. And of course, minifigs are dripping with geek appeal. What techie doesn’t like LEGO?

There was just one small problem. Minifigs are a little bit too small. There’s nowhere to put the information that would identify what it represented — the product name, release version and date, and so on. A couple more days of brainstorming gave me the solution: custom baseball cards. There are several companies that will print custom baseball cards. These outfits are obviously intended for children’s sports teams, but they will happily print cards with whatever graphic you want. You just have to create images of the front and back of your card and upload them to their website. And like the minifigs themselves, the cards are inexpensive, at about $1 per card.

The ElectricAccelerator 6.0 “Ship It!” Award

For the ElectricAccelerator 6.0 “Ship It!” Award, I chose the race car driver shown above (because Accelerator is all about performance, of course!). I bought the minifigs on ebay. I spent a couple hours designing the card, then ordered them from CustomSportsProducts.com. The front shows the minifig, the product name, version, and release date, and the major new features; the back lists the names of everybody on the team. Total cost for awards for the entire team was about $40 for materials — about the cost of just one traditional Lucite-based “Ship It” award.

I was a little nervous when I presented the awards to my team a couple weeks ago, but as it turns out I needn’t have been! The reception was overwhelmingly positive. Although I hadn’t explicitly planned it this way, the minifigs actually arrived unassembled, in individual pouches. Immediately upon getting theirs, each person dumped out the pieces and started assembly — it was practically instinctive! Several people commented out loud that the award was “Awesome!” or “Really cool,” and, of course, “Kinda nerdy, but cool!” With that kind of reaction, you can bet that I’m already planning for the next release.

And finally, here’s a picture of the ElectricAccelerator 6.0 “Ship It!” Award, as it is proudly displayed on my desk:

ElectricAccelerator 6.0 "Ship It!" Award

Who doesn't love LEGO?

ElectricAccelerator Job Compendium

The fundamental unit of work in ElectricAccelerator is the job. Most of the time, you can think of a job as all the commands that must be run in order to create or update a single build output, but in truth that describes only one type of job. There are actually several different job types, each with a distinct purpose in the structure of a build and the way Accelerator executes the build. You can determine the type of a job from the type attribute on the <job> tag in Accelerator annotation files.

Having some familiarity with the job types and their use will make it easier for you to understand Accelerator performance and behavior, so I wrote this guide to introduce them. First I’ll describe the jobs used by ElectricMake (emake), and then the jobs used by Electrify.

ElectricMake jobs

In order to make the descriptions more concrete, I created a simple reference build that uses each of the most common job types, so you can see exactly how the jobs relate to a real build. Here’s the reference build makefile:

prog: 
        @$(MAKE) sub/prog
        @cp sub/prog ./prog

sub/prog: sub/main.c
        @cat $< > $@

setup:
        rm -rf prog sub
        mkdir sub
        echo "int main() { return 0; }" > sub/main.c
        touch prog2

To run the build, first run emake setup, which will create the files and directory structure needed by the build, then run emake --emake-maxagents=1 --emake-annodetail=basic --emake-annofile=emake.xml prog prog2. That will produce a build with thirteen different jobs, in two different make instances. Here’s an overview of how those jobs fit together:


parse

The first job in any make invocation (and therefore the first job of any emake build) is a parse job, during which emake reads and interprets the makefiles used in that make instance. The output of a parse job is a list of all the jobs in the make instance, along with a list of targets that must be built, the commands to build those targets, and the relationships between them.


exist

Existence jobs (marked as type “exist” in annotation) are used to check for the existence of makefiles and command-line goals for which no rule was found. In our reference build, you can see an existence job in the top-level make for the makefile itself, as well as one for the file prog2, which has no rule in the makefile.


remake

Remake jobs are at the center of emake’s emulation of GNU make’s makefile remaking feature. In a remake job, emake checks every makefile that was read during the parse job for two things:

  1. Is there a rule to rebuild the makefile; and
  2. Was the makefile actually rebuilt (because it was out-of-date).

If any makefile was rebuilt, then emake restarts the make instance — all the way back to the parse job.

Because makefile remaking is a gmake-specific feature, you will only see remake jobs when emake is emulating gmake — not when it’s emulating NMAKE.

One last note about remake jobs: emake overloads the use of the <failed> tag inside a remake job to indicate not failure, but whether or not the job determined that the make instance should be restarted. If yes, then the remake job will include a <failed> tag with the code attribute set to 1; if not, then the remake job will have no <failed> tag.


rule

Rule jobs are the real workhorses of a build. Each rule job encapsulates the commands needed to update one target in the build — literally the body of a rule from a makefile — and a rule always has an associated output target (or targets), which is identified by the name attribute of the job tag in annotation. In our reference build, the rule job in the top-level make instance corresponds to this rule in the makefile:

prog: 
        @$(MAKE) sub/prog
        @cp sub/prog ./prog

If you look at the annotation for the job you’ll notice that only $(MAKE) sub/prog is actually in the rule job — because the cp sub/prog ./prog was automatically split into a continuation job when emake detected the recursive make invocation.


follow

A follow job serves two purposes. First, it is a connection point that allows emake to tie the end job of a recursive make invocation back into the ordered list of jobs in the parent make. Second, it is the means by which emake propagates the error status of a recursive make to the parent make.

Follow jobs always have an associated rule or continuation job, identified by the partof attribute on the job tag in annotation. That is the job that spawned the recursive make which the follow job is associated with. Of course, not all rule and continuation jobs have an associated follow job — follow jobs only show up when a recursive make was invoked.


continuation

A continuation job represents the “leftover” commands that follow a recursive make invocation in a rule (or continuation) job. In our reference build, emake creates a continuation job for cp sub/prog ./prog, because that command follows a recursive make invocation. Continuation jobs, like follow jobs, are always associated with a rule (or another continuation) job, identified by the partof attribute on the job tag in the annotation.

The reason for splitting the extra commands into a separate job is simple: that allows emake to easily specify the serial order of those commands relative to the jobs in the recursive make — the commands in the continuation come after the jobs in the recursive make.


end

The last job in every make instance (and therefore the last job of every emake build) is an end job. End jobs exist primarily to handle end-of-the-make cleanup, such as removing intermediate targets or temporary inline files that were created while executing the other jobs in the make.


(Not pictured) subbuild

Subbuild jobs, which were not used in the reference build, are part of emake’s subbuild feature. They are simply a mechanism to inject a recursive make invocation into the build, just as you would get if you had a rule job with a $(MAKE) command, but without the tedium of actually running that command.

Electrify jobs

Electrify uses much of the same underlying machinery that emake does, but it has its own set of job types for its particular needs.


alpha

Every electrify build starts with an alpha job, which is a trivial placeholder marking the start of the list of jobs.


external

External jobs represent commands invoked by the electrified build tool which are distributed to the cluster. These are similar to emake’s rule jobs, except they will only ever run a single command, while rule jobs may run any number of commands.


update

Update jobs represent filesystem modifications made by commands invoked by the electrified build tool but not distributed to the cluster. These modifications are detected with the aid of electrifymon.


omega

Every electrify build ends with an omega job, which, like the alpha job, is a trivial placeholder.

Conclusion

I hope you’ve found this post informative. Questions or feedback? Please feel free to comment below!

Maven 3 performance: why all the hubbub, bub?

First off let me say that I have never used Maven, and I honestly don’t know much about it. But I do know a lot about build performance and parallel builds, and I love to look at benchmarks, so naturally I was intrigued by this article on Dr. Dobb’s, which includes a comparison of performance between Maven 2 and Maven 3, the latest release, as well as a comparison between serial and parallel builds using the new parallel build feature in Maven 3.

The thing is, the improvement is not very impressive. Sonatype describes Maven 3 as “dramatically faster” than Maven 2, but according to the published benchmarks, Maven 3 shaves a paltry 5-10 seconds off the Maven 2 build time:

Project            Maven 2   Maven 3   Improvement
Maven SCM Trunk    3:20      3:15      5s (2.5%)
“Corporate”        1:04      0:54      10s (15%)

The parallel build feature does better, executing the benchmark builds 1.35x faster (a 25% reduction in build time):

Project            Serial    Parallel (4 threads)   Improvement
Maven SCM trunk    3:15      2:26                   49s (25%)
“Corporate”        0:54      0:40                   14s (25%)

That’s nothing to scoff at, I suppose, but consider that to obtain that 1.35x speedup, they used 4 threads of parallel execution — that’s the equivalent of make -j 4. With 4 threads of parallel execution you’ll see a 4x speedup, ideally. The benchmark builds here are small, so they are probably dominated by a few large jobs, but even so, with 4 threads you ought to be able to get at least a 2x speedup. For making the jump from serial to 4-way parallel builds, a 1.35x improvement is pretty disappointing.

At first I thought perhaps the author rushed when putting together the Dr. Dobb’s article, and just made a poor choice of benchmarks. But in fact the same benchmarks had been used in a Sonatype blog post months before the Dr. Dobb’s article, and apparently in a Jfokus 2011 presentation by Dennis Lundberg.

Where’s the beef?

So what am I missing here? It seems there are only two possibilities:

  1. The performance improvements between Maven 2 and Maven 3 are actually impressive in some cases, and the Maven developers just couldn’t be bothered, over a period of several months, to find more compelling benchmarks; or
  2. The performance improvements between Maven 2 and Maven 3 are just not that impressive, and the published benchmarks are, sadly, representative of the best Maven 3 has to offer at this time.

I know where my money is on that bet.

What’s new in ElectricAccelerator 5.4.0

This month, Electric Cloud announced the release of ElectricAccelerator 5.4. This version adds a lot of great new features, including support for GNU Make’s .SECONDEXPANSION feature and the use of $(eval) in rule bodies, and compatibility with Cygwin 1.7.7. In addition to those long-awaited improvements, here are the things that I’m most excited about in this release:

New cluster utilization reports

Accelerator 5.4 includes two new reports designed to give you greater insight into the load on and utilization of your cluster: the Cluster Utilization report and the Sealevel report:

The Cluster Utilization report shows, over the course of a typical day, the average number of builds running and the average combined agent demand from all running builds. The Sealevel report shows the raw agent demand data, plotted over the course of a day. The colored bands correspond to various cluster sizes, including the current cluster size and several hypothetical sizes, so you can see at a glance how large you need to make the cluster in order to satisfy all the agent requests. The percentages on the right side of the graph indicate the portion of agent requests that are left unsatisfied with a cluster of the given size. In the example above, all but 1% of agent requests would be satisfied if the cluster had 40 agents.

Reduced directory creation conflicts

Raise your hand if you’ve ever seen this pattern in a makefile:

%.o: %.c
        @mkdir -p $(dir $@)
        @$(COMPILE.c) -o $@ $<

It’s a common way to ensure the output directory exists before trying to create a file in it. Unfortunately, with a strict application of Accelerator’s conflict detection algorithm, this pattern causes numerous conflicts and poor performance when the build is run without an up-to-date history file. In Accelerator 5.4.0, we improved the algorithm so that this common case is no longer considered a conflict. If you always run with a good history file, this change will not be helpful to you. But sometimes that’s not possible — for example, if you’re building third-party code that’s just gotten a major update — then you’re going to really love this improvement. The Android source code is a perfect example: a from-scratch no-history build of the Gingerbread base used to take 144 minutes. Now it runs in just 22 minutes on the same hardware — 6.5x faster.

New Linux sandbox implementation

The last feature I want to mention here is the new sandbox implementation for Linux. The sandbox is the means by which Accelerator is able to present a different view of the filesystem, from a different point of time during the build, to each of the jobs running concurrently on a given agent host. Without the sandbox, it would be impossible on Linux to simultaneously represent a given file as existent to one job, and non-existent to another.

In previous versions of Accelerator, the Linux sandbox implementation was effective, but ultimately limited in its capabilities. Chief among those limitations was an inability to interoperate with autofs 5.x. There were several workarounds available, but each of those in turn had its own shortcomings.

Accelerator 5.4 uses a different underlying technology to implement the sandbox component: lofs, the loopback filesystem. This is a concept borrowed from Solaris, which has had a vendor-supplied version for years; Linux has nothing that matches the depth of functionality provided by Solaris, so we wrote our own. The net result of this effort is that the limitations of the previous implementation have been entirely eliminated. In particular, Accelerator 5.4 can interoperate with autofs 5.x without the need for any workarounds or awkward configuration.

Afterthoughts

It’s been a long time in coming, but I think it was well worth the wait. I’m very proud to have been part of this product release, and I’m thrilled with the work my team has put into it.

Accelerator 5.4 is available immediately for current customers. New customers should contact sales@electric-cloud.com.

Eternal vigilance and multithreaded programming

Given the following:

  • A multithreaded application written in C++, using the STL.
  • A class Mutex, which is a thin wrapper around a pthread_mutex_t.
  • A class Lock, which implements the RAII pattern for acquiring and releasing a Mutex.
  • A class FileInfo, which stores information about a file. The getSize() method retrieves the current size of the file. Because another thread may be changing the size of the file, getSize() is internally locked.
  • A class Cache, which manages a map of filenames to FileInfo objects.

Can you spot the bug in this bit of code?

class Cache {
protected:
    typedef std::map<string, FileInfo *> map_type;

    Mutex mLock;
    map_type mMap;

public:
    void insert(const string& filename, FileInfo *info)
    {
        // Add a new file to the cache.

        Lock lock(mLock);
        mMap.insert(make_pair(filename, info));
    }

    void invalidate(const string& filename)
    {
        // Eject a file from the cache; this only
        // invalidates the cache entry, it does not
        // delete the associated FileInfo.

        Lock lock(mLock);
        map_type::iterator i = mMap.find(filename);
        if (i != mMap.end()) {
            mMap.erase(i);
        }
    }

    int64_t getSize(const string& filename)
    {
        // Get the current size of a file in the cache.

        map_type::iterator i;

        {
            // Grab the cache lock for the lookup.

            Lock lock(mLock);
            i = mMap.find(filename);
            if (i == mMap.end()) {
                return 0;
            }
        } // release cache lock

        // Relay the getSize() request to the FileInfo
        // object; this must happen *after* releasing
        // the cache lock, because FileInfo::getSize() 
        // can block.

        return i->second->getSize();
    }
};

Even if you’re familiar with the STL and multithreaded programming, the defect can be hard to spot. In fact, this one escaped my notice until a recent mysterious core dump alerted me to its presence, despite having read this code several times for other reasons.

Here’s the bug: in Cache::getSize(), the iterator i is accessed outside the protection of the Cache lock. You may think that this is safe, since the lookup occurs inside the lock. But suppose we have two threads running simultaneously, one calling getSize(“foo”), and the other calling invalidate(“foo”):

// Thread A: getSize("foo")
{
    Lock lock(mLock);
    i = mMap.find(filename);
    if (i == mMap.end()) {
       return 0;
    }
} // release cache lock

        // Thread B: invalidate("foo")
        Lock lock(mLock);
        i = mMap.find(filename);
        if (i != mMap.end()) {
            mMap.erase(i);
        }

// Thread A, resuming:
return i->second->getSize();

By the time Thread A uses the iterator returned by map::find(), the call to invalidate() has, well, invalidated it. I suspect that the author of this code remembered that map::erase() invalidates the iterator used in the erase() itself, but not the corollary that map::erase() also invalidates any other iterator that references the same element.

If you think about how std::map is implemented, it’s obvious that erase() should have this property. Internally, std::map usually uses some type of balanced tree. With that foundation, the simplest implementation of std::map::iterator is just a thin wrapper around a pointer to a node in the tree. map::erase() must delete that node (otherwise, when would it be deleted?), so accessing an iterator after it has been erase’d is literally a free memory read, an obvious memory management error.

In this case, the solution is simple: just extract the FileInfo pointer from the iterator before releasing the Cache lock, as follows:

    int64_t getSize(const string& filename)
    {
        // Get the current size of a file in the cache.

        FileInfo *info;

        {
            // Grab the cache lock for the lookup.

            Lock lock(mLock);
            map_type::iterator i = mMap.find(filename);
            if (i == mMap.end()) {
                return 0;
            }
            info = i->second;
        } // release cache lock

        // Relay the getSize() request to the FileInfo
        // object; this must happen *after* releasing
        // the cache lock, because FileInfo::getSize() 
        // can block.

        return info->getSize();
    }

Eternal vigilance

The interesting thing about this bug is that it would have been nearly impossible to detect with testing alone. This is why code reviews are so important in our profession, and why projects that really demand bug-free code rely so heavily on them. As they say, “Eternal vigilance is the price of multithreaded programming.” With that, I’m off to review some more code. Shouldn’t you?