ElectricAccelerator job statuses, or, what the heck is a skipped job?

If you’ve ever looked at an annotation file generated by Electric Make, you may have noticed the status attribute on jobs. In this article I’ll explain what that attribute means so that you can better understand the performance and behavior of your builds.

Annotation files and the status attribute

An annotation file is an XML-enhanced version of the build log, optionally produced by Electric Make as it executes a build. In addition to the regular log content, annotation includes details like the duration of each job and the dependencies between jobs. The status attribute is one of several attributes on the <job> tag:

<job id="J00007fba40004240"  status="conflict" thread="7fba4a7fc700" type="rule" name="a" file="Makefile" line="6" neededby="J00007fba400042e0">
<conflict type="file" writejob="J00007fba400041f0" file="/home/ericm/src/a" rerunby="J0000000002a27890"/>
<timing invoked="0.512722" completed="3.545985" node="ericm15-2"/>
</job>

The status attribute may have one of five values:

  • normal
  • reverted
  • skipped
  • conflict
  • rerun

Let’s look at the meaning of each in detail.

normal jobs

Normal jobs are just that: completely normal. Normal jobs ran as expected and were later found to be free of conflicts, so their outputs and filesystem modifications were incorporated into the final build result. Note that normal is the default status, so it’s usually not explicitly specified in the annotation. That is, if a job does not have a status attribute, its status is normal. If you run the following makefile with emake, you’ll see the all job has normal status:

all: ; @echo done

Here’s the annotation for the normal job:

<job id="Jf3502098" thread="f46feb40" type="rule" name="all" file="Makefile" line="1">
<command line="1">
<argv>echo done</argv>
<output src="prog">done
</output>
</command>
<timing weight="0" invoked="0.333677" completed="0.343138" node="chester-1"/>
</job>

reverted and skipped jobs

Reverted and skipped jobs are two sides of the same coin. In both cases, emake has determined that running the job is unnecessary, either because of an error in a serially earlier job, or because a prerequisite of the job was itself reverted or skipped. Remember, emake’s goal is to produce output that is identical to a serial GNU make build. In that context, barring the use of --keep-going or similar features, jobs following an error would not be run, so to preserve compatibility with that baseline, emake must not run those jobs either — or at least emake must appear not to have run those jobs.

That brings us to the sole difference between reverted and skipped jobs. Reverted jobs had already been executed (or had at least started) at the point when emake discovered the error that would have caused them not to run. Any output produced by a reverted job is discarded, so it has no effect on the final output of the build. In contrast, skipped jobs had not yet been started when the error was discovered. Since the job had not yet run, there is no output to discard.

Running the following simple makefile with at least two agents should produce one reverted job, b, and one skipped job, c.

all: a b c

a: ; @sleep 2 && exit 1

b: ; @sleep 2

c: b ; @echo done

Here’s the annotation for the reverted and skipped jobs:

<job id="Jf3502290"  status="reverted" thread="f3efdb40" type="rule" name="b" file="Makefile" line="5" neededby="Jf35022f0">
<timing weight="0" invoked="0.545836" completed="2.558411" node="chester-1"/>
</job>
<job id="Jf35022c0"  status="skipped" thread="0" type="rule" name="c" file="Makefile" line="7" neededby="Jf35022f0">
<timing weight="0" invoked="0.000000" completed="0.000000" node=""/>
</job>

conflict jobs

A job has conflict status if emake detected a conflict in the job: the job ran too early, and used a different version of a file than it would have had it run in the correct serial order. Any output produced by the job is discarded, since it is probably incorrect, and a rerun job is created to replace the conflict job. The following simple makefile will produce a conflict job if run without an emake history file, and with at least two agents:

all: a b

a: ; @sleep 2 && echo hello > a

b: ; @cat a

Here’s the annotation for the conflict job:

<job id="Jf33021d0"  status="conflict" thread="f44ffb40" type="rule" name="b" file="Makefile" line="5" neededby="Jf3302200">
<conflict type="file" writejob="Jf33021a0" file="/home/ericm/blog/melski.net/job_status/tmp/a" rerunby="J09b1b3c8"/>
<timing weight="0" invoked="0.541597" completed="0.552053" node="chester-2"/>
</job>

rerun jobs

A rerun job is created to replace a conflict job, rerunning the commands from the original conflict job but with a corrected filesystem context, to ensure the job produces the correct result. There are a few key things to keep in mind when you’re looking at rerun jobs:

  • By design, rerun jobs are executed after any serially earlier jobs have been verified conflict-free and committed to disk. That’s a consequence of the way that emake detects conflicts: each job is checked, in strict serial order, and committed only if it has no conflict. If a job has a conflict it is discarded as described above, and a rerun job is created to redo the work of that job.
  • It is impossible for a rerun job to have a conflict, since it is guaranteed not to run until all preceding jobs have finished. In fact, emake does not even bother to check for conflicts on rerun jobs.
  • Rerun jobs are executed immediately upon being created, and while the rerun job is running emake will not start any other jobs. Any jobs that were already running when the rerun started are allowed to finish, but new jobs must wait until the rerun completes. Although this impairs performance in some cases, this conservative strategy helps to avoid chains of conflicts that would be even more detrimental to performance. Of course you typically won’t see conflicts and reruns if you run your build with a good history file, so in practice the performance impact of rerun jobs is immaterial.

The following simple makefile will produce a rerun job, if run without a history file and using at least two agents (yes, this is the same makefile that we used to demonstrate a conflict job!):

all: a b

a: ; @sleep 2 && echo hello > a

b: ; @cat a

And here’s the annotation fragment for the rerun job:

<job id="J09b1b3c8"  status="rerun" thread="f3cfeb40" type="rule" name="b" file="Makefile" line="5" neededby="Jf3302200">
<command line="5">
<argv>cat a</argv>
<output src="prog">hello
</output>
</command>
<timing weight="0" invoked="2.283755" completed="2.293563" node="chester-1"/>
</job>

Job status in ElectricAccelerator annotation

To the uninitiated, ElectricAccelerator job status types might seem cryptic and mysterious, but as you’ve now seen, there’s really not much to it. I hope you’ve found this article informative. As always, if you have any questions or suggestions, hit the comments below. And don’t forget to check out Ask Electric Cloud if you are looking for help with Electric Cloud tools!

6 tips for writing robust, maintainable unit tests

Unit testing is one of the cornerstones of modern software development, but there’s a surprising lack of advice about how to write good unit tests. That’s a shame, because bad unit tests are worse than no unit tests at all. Over the past decade at Electric Cloud, I’ve written thousands of tests — a full build across all platforms runs over 100,000 tests. In this post I’ll share some tips for writing robust, maintainable tests. I learned these the hard way, but hopefully you can learn from my mistakes.

A key focus here is on eliminating so-called “flaky tests”: those that work almost all the time, but fail once-in-a-great-while for reasons unrelated to the code under test. Such unreliable tests erode confidence in the test suite and even in the value of unit testing itself. In the worst case, a history of failures due to flaky tests can cause people to ignore sporadic-but-genuine test failures, allowing rarely seen but legitimate bugs into the wild.

If that’s not enough to convince you to eliminate flaky tests, consider this: suppose each of your flaky tests fails just one time out of every 10,000 runs, and that you have a thousand such tests overall. Assuming the failures are independent, the chance that a single build hits at least one flaky failure is 1 − (1 − 1/10,000)^1000, or roughly 9.5%. At that rate, about 10% of your CI builds will fail due to flaky tests.

Now that I’ve got your attention, let’s see how to write better tests. Got some tips of your own? Add them in the comments!

1. Never use hardcoded network port numbers

The software I write involves network communication, which means that many of the tests create network sockets. In pseudo-code, a test for the server component looks something like this:

  1. Cleanup the previous server instance (if any).
  2. Instantiate the server on port 6003.
  3. Connect to the server via port 6003.
  4. Send some message to the server via the socket.
  5. Assert that the response received is correct.

At first glance this seems pretty safe: no standard service uses port number 6003, so there won’t be any contention with third parties for that port, and by front-loading the cleanup of the server we ensure that there won’t be any contention with our other tests for the port. And, of course, it (seems to) work! In fact, it probably works almost every time you run it. But rest assured: one day, seemingly at random, this test will fail.

For us the failures started after literally years and tens of thousands of executions. We never saw the same test fail twice, but the failure mode was always the same: “Only one usage of each socket address (protocol/network address/port) is normally permitted.” I wish I could say with certainty why the failures started happening — I suspect that our anti-virus software transiently grabs a dynamic port, and sometimes it just happens to grab the port that we planned to use.

Fortunately the solution is simple: instead of using a hardcoded port number, use a dynamic port assigned by the operating system. Usually that means binding to port number zero. This is slightly less convenient than a hardcoded port number, because you’ll have to query the socket after it’s bound to determine the actual port number, but that’s a small price to pay to be confident you’ll never have a spurious test failure due to this mistake.
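
Here’s a minimal sketch of the idea in Python; the same pattern applies in any language with a BSD-style socket API:

import socket

# Bind to port 0 and the operating system assigns a free ephemeral port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)

# Query the socket after binding to learn which port was actually assigned.
port = server.getsockname()[1]
print("server listening on port", port)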

2. Never use “sleep()” to synchronize test threads.

A common mistake when writing tests that involve multiple threads is to attempt to use sleep() to synchronize events in different threads. For example, you may have a server thread which opens a socket and waits for a connection. You want to test the behavior of the server thread, but you have to be sure that the socket is opened and ready to accept connections before you try to connect to it. A quick-and-dirty approach might look like this:

  1. Start server thread.
  2. Sleep a bit to give the thread time to get started.
  3. Open a connection to the server port.

There are some problems with this strategy. If you set the delay too short, occasionally the test will fail because the server socket won’t be ready — when you try to open the client-side connection you’ll get a connection refused error. Conversely, you can make the test fairly reliable by setting the delay very long — multiple seconds — but then the test will take at least that long to run, every time you run it.

Instead you should use a condition variable to synchronize the two threads. If that is not practical or not possible, a retry loop is a decent alternative. In that case, make sure that your loop doesn’t retry so frantically that it starves the other thread of CPU cycles to make progress. I like to use an exponential backoff, so initially I get a few retries at very short intervals, then the intervals become progressively longer to give the other thread more time to work:

import time

delay = 20   # initial retry interval, in milliseconds
total = 0    # total time spent waiting so far, in milliseconds
while total < 5000:
    # break if socket can be opened, else sleep/retry
    time.sleep(delay / 1000.0)
    total += delay
    delay += delay   # double the interval: exponential backoff
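
For reference, here is a minimal sketch of the synchronized approach in Python. It uses threading.Event, a simpler cousin of a condition variable that serves the same purpose here; the server and port details are illustrative, not from any particular test suite:

import socket
import threading

ready = threading.Event()   # set once the server socket is listening

def server_thread(sock):
    sock.listen(1)
    ready.set()             # signal the client that the socket is ready
    conn, _ = sock.accept()
    conn.close()

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("127.0.0.1", 0))   # dynamic port, per tip 1
threading.Thread(target=server_thread, args=(sock,)).start()

ready.wait()                  # block until the server is actually listening
client = socket.create_connection(sock.getsockname())
client.close()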

3. Never rely on timing data to verify behavior.

Another mistake is using the observed duration of an operation to assert that a particular behavior has been implemented. For example, to test that a socket read operation times out if no data is received within a set period of time, you might write test code like this:

  1. Set socket timeout to 200ms
  2. Mark start time
  3. Attempt to read from socket
  4. Mark end time
  5. Assert that end minus start is between 200ms and 250ms

The problem is that your test runner might get put to sleep between step 2 and 3, and again between step 3 and step 4 — there’s just no way to control what else might be happening on the computer executing the tests. That means that the delta between the times could be significantly higher than the 200ms you expect. Depending on your hardware and timers, it could even be less than 200ms, despite your code being implemented correctly!

There are at least two robust alternatives to implement this test. The first is to add a counter to the code under test, to be incremented whenever the read operation times out. In the test you could check the value before and after the attempted read, and reasonably expect the value to have increased by exactly one. The second approach is to forgo the explicit validation altogether — if the test completes, then the timeout must be operating correctly. If the test hangs, then the timeout must not be operating.
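
As a sketch of the counter-based approach, suppose the reader object under test exposed a timeouts counter, incremented on every read timeout. The attribute name and API here are made up for illustration:

def test_read_timeout(reader):
    # reader's socket has a short timeout and no data pending,
    # so this read should time out rather than return data.
    before = reader.timeouts
    data = reader.read()
    assert data is None
    assert reader.timeouts == before + 1   # exactly one timeout occurred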

4. Never rely on implicit file timestamps.

If your software is sensitive to file timestamps, such as for determining whether file X is newer than file Y, then avoid trusting implicit timestamps in your tests. For example, you might write test code like this:

  1. Create “old” file Y.
  2. Create “new” file X.
  3. Verify that the unit under test behaves correctly given that X is newer than Y.

Superficially this looks sound: X is created after Y, so it should have a timestamp later than Y. Unless, of course, it doesn’t, which can happen for a variety of odd reasons. For example, the system clock might get adjusted underneath your feet, due to NTP or another time-synchronization system. I’ve even seen cases where the file creations happen so quickly that the two files effectively have identical timestamps!

The fix is simple: explicitly set the timestamps on the files to ensure that the relationship is as expected. Generally you should specify a difference of at least 2 full seconds, to accommodate filesystems that have very low resolution timestamps. You should also avoid relying on subsecond timestamp resolution, again to accommodate filesystems that don’t support very high-resolution timestamps.
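
In Python, for example, you might create the files and then pin their timestamps explicitly with os.utime; the filenames here are just placeholders:

import os
import time

# Create both files, then set their modification times explicitly so
# that X is newer than Y by a full 2 seconds, which is coarse enough
# for filesystems with low-resolution timestamps.
open("y", "w").close()
open("x", "w").close()
now = time.time()
os.utime("y", (now - 2, now - 2))   # (atime, mtime)
os.utime("x", (now, now))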

5. Include diagnostic information in test assertions.

You can save yourself a lot of trouble by arranging your test assertions so that they provide detailed information about any failures, rather than simply telling you whether the assertion passed. For example, here’s an unhelpful assertion:

CPPUNIT_ASSERT(errors.empty());

If the errors variable contains error text, the assertion will fail — but you’ll have no idea what the errors were, and thus you’ll have no idea what went wrong in the test. In contrast, here’s an informative assertion:

CPPUNIT_ASSERT_EQUAL(string(""), errors);

Now if the assertion fails the test harness will show you the value of errors, so you’ll have some useful information to start your debugging.

6. Include an explanation of the test in the comments.

Finally, don’t forget to put comments in your test code! Explain how the test works, and why you believe the test actually exercises the feature that you think it tests. It may seem a bit tedious, but remember that the rest of your team may not be as familiar with the code as you are, and they may not know what steps are needed to elicit a particular response from the code. For that matter, in a few months or years you may not remember how to test what you’re trying to test. Such comments are invaluable when updating tests after a refactoring, to understand how the test should be adjusted, as well as when debugging — to understand why a test failed to expose a defect. Here’s an example:

# Test methodology: create a Foo object, then try to set
# the Froznitz attribute to 5. This should produce an error
# because 5 is not a valid Froznitz value. See that the right
# type of exception is thrown, and that the error text is
# correct.

Summary

Everybody knows it’s important to write unit tests. Following the suggestions here will help make sure that your tests are reliable and maintainable. If you have tips of your own, add them in the comments!

#pragma multi and rules with multiple outputs in GNU make

Recently we released ElectricAccelerator 6.2, which introduced a new bit of makefile syntax — #pragma multi — which allows you to indicate that a single rule produces multiple outputs. Although this is a relatively minor enhancement, I’m really excited about it, because it represents a new direction for emake development: instead of waiting for the GNU make project to add syntactic features and then following some time later with our emulation, we’re adding features that GNU make doesn’t have — and hopefully they will have to follow us for a change!

Unfortunately I haven’t done a good job articulating the value of #pragma multi. Unless you’re a pretty hardcore makefile developer, you probably look at this and think, “So what?” So let’s take a look at the problem that #pragma multi solves, and why #pragma multi matters.

Rules with multiple outputs in GNU make

The problem we set out to solve is simply stated: how can you specify to GNU make that one rule produces two or more output files? The obvious — but wrong — answer is the following:

foo bar: baz
	touch foo bar

Unfortunately, this fragment is interpreted by GNU make as declaring two rules, one for foo and one for bar — it just so happens that the command for each rule creates both files. That will do more-or-less the right thing if you run a from-scratch, serial build:

$ gmake foo bar
touch foo bar
gmake: `bar' is up to date.

By the time GNU make goes to update bar, it’s already up-to-date thanks to the execution of the rule for foo. But look what happens when you run this same build in parallel:

$ gmake -j 2 foo bar
touch foo bar
touch foo bar

Oops! — the files were updated twice. No big deal in this trivial example, but it’s not hard to imagine a build where running the commands to update a file twice would produce bogus output, particularly if those updates could be happening simultaneously.

So what’s a makefile developer to do? In standard GNU make syntax, there’s only one truly correct way to create a rule with multiple outputs: pattern rules:

%.x %.y: %.in
	touch $*.x $*.y

In contrast with explicit rules, GNU make interprets this fragment as declaring a single rule that produces two output files. Sounds perfect, but there’s a significant limitation to this solution: all of the output files must share a common sequence in the filenames (called the stem in GNU make parlance). That is, if your rule produces foo.x and foo.y, then pattern rules will work for you because the outputs both have foo in their names.

If your output files do not adhere to that naming limitation, then pattern rules can’t help you. In that case, you’re pretty much out of luck: there is no way to correctly indicate to GNU make that a single rule produces multiple output files. There are a variety of hacks you can try to coerce GNU make to behave properly, but each has its own limitations. The most common is to nominate one of the targets as the “primary”, and declare that the others depend on that target:

bar: foo
foo: baz
	touch foo bar

Watch what happens when you run this build serially from scratch:

$ gmake foo bar
touch foo bar
gmake: Nothing to be done for `bar'.

Not bad, other than the odd “nothing to be done” message. At least the files weren’t generated twice. How about running it in parallel, from scratch?

$ gmake -j 2 foo bar
touch foo bar
gmake: Nothing to be done for `bar'.

Awesome! We still have the odd “nothing to be done” message, but just as in the serial build, the command was only invoked one time. Problem solved? Nope. What happens in an incremental build? If you’re lucky, GNU make happens to do the right thing and regenerate the files. But in one incremental build scenario, GNU make utterly fails to do the right thing. Check out what happens if the secondary output is deleted, but the primary is not:

$ rm -f bar && gmake foo bar
gmake: `foo' is up to date.
gmake: Nothing to be done for `bar'.

That’s right: GNU make failed to regenerate bar. If you’re very familiar with the build system, you might realize what had happened and think to either delete foo as well, or touch baz so that foo appears out-of-date (which would cause the next run to regenerate both outputs). But more likely at this point you just throw your hands up and do a full clean rebuild.

Note that all of the alternatives in vanilla GNU make have similar deficiencies. This kind of nonsense is why incremental builds have a bad reputation. This is why we created #pragma multi.

Rules with multiple outputs in Electric Make

By default Electric Make emulates GNU make, so it inherits all of GNU make’s limitations regarding rules with multiple outputs — with one critical exception. Even when running a build in parallel, Electric Make ensures that the output matches that produced by a serial GNU make build, which means that even the original, naive attempt will “work” for full builds regardless of whether the build is serial (single agent) or parallel (multiple agents).

Given that foundation, why did we bother with #pragma multi? There are a couple reasons:

  1. Correct incremental builds: with #pragma multi you can correctly articulate the relationships between inputs and outputs and thus ensure that all the outputs get rebuilt in incremental builds, rather than using kludges and hoping for the best.
  2. Out-of-the-box performance: although Electric Make guarantees correct output of the build, if you don’t have an up-to-date history file for the build you may waste time and compute resources running commands that don’t need to be run (work that will eventually be discarded when Electric Make detects the error). In the examples shown here the cost is negligible, but in real builds it could be significant.

Using #pragma multi is easy: just add the directive before the rule that will generate multiple outputs:

#pragma multi
foo bar: baz
	touch foo bar

Watch what happens when this makefile is executed with Electric Make:

$ emake foo bar
touch foo bar

Note that there is no odd “is up to date” or “nothing to be done” message for bar — because Electric Make understands that both outputs are created by a single rule. Let’s verify that the build works as desired in the tricky incremental case that foiled GNU make — deleting bar without deleting foo:

$ rm -f bar && emake foo bar
touch foo bar

As expected, both outputs are regenerated: even though foo existed, bar did not, so the commands were executed.

Summary: rules with multiple outputs

Let’s do a quick review of the strategies for creating rules with multiple outputs. For simplicity we can group them into three categories:

  • #pragma multi
  • The naive approach, which does not actually create a single rule with multiple outputs at all.
  • Any of the various hacks for approximating rules with multiple outputs.

Here’s how each strategy fares across a variety of build modes:

[Color-coded comparison table from the original post: rows for #pragma multi, the naive approach, and the hacks; columns for Electric Make and GNU make, each split into full (serial), full (parallel), and incremental builds. Green marked entries that work cleanly, yellow marked entries with caveats, and the GNU make cells for #pragma multi were N/A. The results are summarized in the paragraphs below.]

The table paints a grim picture for GNU make: there is no way to implement rules with multiple outputs using standard GNU make which reliably gives both correct results and good performance across all types of builds. The naive approach generates the output files correctly in serial builds, but may fail in parallel builds. The various hacks work for full builds, but may fail in incremental builds. Even in cases where the output files are generated correctly, the build is marred by spurious “is up to date” or “nothing to be done for” messages — which is why most of the entries in the GNU make side are yellow rather than green.

In contrast, #pragma multi allows you to correctly generate multiple outputs from a single rule, for both full and incremental builds, in serial and in parallel. The naive approach also “works” with Electric Make, in that it will produce correct output files, but like GNU make the build is cluttered with spurious warnings. And, unless you have a good history file, the naive approach can trigger conflicts which may negatively impact build performance. Finally, despite its sophisticated conflict detection and correction smarts, even Electric Make cannot ensure correct incremental builds when you’ve implemented one of the multiple output hacks.

So there you have it. This is why we created #pragma multi: without it, there’s just no way to get the job done quickly and reliably. You should give ElectricAccelerator a try.
