
What’s new in ElectricAccelerator 9.0?

Just a couple months ago, in October 2016, we released ElectricAccelerator 9.0. This version includes some really exciting new functionality and unlocks more performance than ever before. For the first time since 2008 we added support for a new build tool: ninja, an ultra-fast make-like build tool and the workhorse at the center of the build for both Chromium and Android (yes, that Android). And we’ve continued to expand the JobCache feature — a generalization of the parse avoidance feature introduced in Accelerator 7.0. With Accelerator 9.0 you can cache more types of work, including GCC/G++ compiles, clang compiles, Microsoft cl compiles, javac and javadoc, and Google’s new Jack compiler for Java code. Even better, you can share cached results with other developers to amplify the gains across an entire team. Read on for details.

Ninja emulation

Accelerator 9.0 introduces support for ninja-based builds. Ninja is a very interesting build tool: conceptually similar to make, but radically simplified (at least so far!). Gone are things like built-in functions, pattern rules, vpath, conditional directives, and all the other things that make it hard to parse and evaluate makefiles quickly. This enables the ninja parser to evaluate “ninja files” unbelievably quickly, but at the cost of making ninja files verbose and ill-suited for creation by hand. Instead, ninja files are typically generated from some other process, such as CMake. The benefit to the end user then is extremely fast incremental builds: for example, in Android 6.0, using the original make-based build system, a no-touch build could take as much as a minute to run even though there’s no work to be done. In Android 7.0, using the new ninja-based build system, the same build can be completed in about 5 seconds!
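Since ninja files are machine-generated, getting one is usually just a matter of asking your meta-build tool for it. For example, a CMake-based project can produce and run a ninja build like this (a sketch assuming CMake 2.8.9 or later, which ships the Ninja generator):

$ cmake -G Ninja /path/to/source     # generates build.ninja
$ ninja                              # runs the build it describes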

ElectricAccelerator’s emulation of ninja is, I think, remarkably anticlimactic: to execute a ninja build, simply invoke emake --emake-emulation=ninja. That’s it. Here’s a very simple “Hello, world!” ninja file:

rule echo
  description = Building $out
  command = echo "Hello, world!"

build foo: echo

And the result of running this with emake --emake-emulation=ninja:

$ emake --emake-emulation=ninja
Starting build: local-32601
Building foo
Hello, world!
Finished build: local-32601 Duration: 0:00 (m:s)
$

As I said, it’s utterly uninteresting, which, paradoxically, makes it very interesting: the integration is seamless and it “just works”. Even better, by running your ninja build with ElectricAccelerator you automatically and instantly take advantage of all the advanced acceleration and correctness features you’ve come to love about Accelerator: conflict detection, history, schedule optimization, annotation, even jobcache. It all just works.

JobCache Enhancements

In Accelerator 7.0 we introduced parse avoidance, a mechanism for caching the result of makefile parsing in one build in order to accelerate subsequent builds. Once we had shown that this type of caching could dramatically improve build performance, we refactored the code behind parse avoidance to create a general-purpose caching framework dubbed JobCache, and in subsequent releases we’ve steadily expanded the types of work to which jobcache can be applied:

  • Accelerator 7.1: jobcache for Javadoc generation
  • Accelerator 8.0: jobcache for C/C++ compiles using clang/gcc/g++ (comparable to, but better than, ccache)
  • Accelerator 8.1: jobcache for C/C++ compiles using Microsoft cl

In Accelerator 9.0 we’ve expanded the reach of jobcache in two ways. First, we added support for caching javac and Jack compiles. Next, we added shared jobcache, which enables a team of developers to leverage jobcache collectively and reliably, eliminating redundant work across the entire team.

With shared jobcache, the team designates a “blessed” or “golden” build process to populate the cache — typically the nightly or continuous integration builds. This build simply uses jobcache as normal, using --emake-assetdir to specify a location on a shared filesystem to host the cache. Then, each developer explicitly requests to use the shared cache by adding --emake-shared-assetdir to the command-line when they invoke emake, specifying the same location. Once enabled, emake uses both the shared cache and the private cache during the build. For each job that uses jobcache:

  1. Check the shared jobcache for a matching entry.
    1. If a match is found in the shared jobcache, use it. Done!
    2. If a match is not found in the shared jobcache, continue.
  2. Check the private jobcache for a matching entry.
    1. If a match is found in the private jobcache, use it. Done!
    2. If a match is not found in the private jobcache, continue.
  3. Run the job as normal.
  4. Save the result to the private cache.

Note that the shared cache is never written to by the developers’ builds: updates are only saved in the private cache. In this way we can ensure that developers’ builds do not litter the shared cache with one-off or user-specific cache entries. Typically we expect that developers will see very good cache hit rates against the shared cache, perhaps 95% or better, since each developer modifies only a small fraction of the total source code at once. Thus shared jobcache multiplies the savings from jobcache by the size of the team.
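As a concrete sketch, the two halves of the setup might look like the commands below. The assetdir options are as described above; the --emake-jobcache option shown here to enable caching for gcc compiles is my illustration, so check the emake documentation for the exact syntax your version supports.

# Nightly/CI build: populate the jobcache on a shared filesystem.
emake --emake-jobcache=gcc --emake-assetdir=/shared/emake-assets all

# Developer build: read the shared cache; any updates go to the private cache.
emake --emake-jobcache=gcc --emake-shared-assetdir=/shared/emake-assets all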

Dynamic file patching

The final feature of interest in Accelerator 9.0 is dynamic file patching. This is a mechanism by which emake can patch files on the fly as they are referenced during the build, based on the name, size and MD5 checksum of the original. This feature enables users to tweak build scripts or makefiles in order to improve performance or compatibility with Accelerator — critical in environments where there is limited ability to modify the original files directly.

Looking forward to 9.1

Accelerator 9.0 contains some really tremendous new features: the first new build tool emulation in almost a decade; shared jobcache; on-the-fly patching for those challenging environments where no other option will do. But as always, my eye is already on the next horizon: Accelerator 9.1. We have some big plans relating to performance and ease-of-use. It will require a lot of hard work but I think we have the right team to do it. Stay tuned.

Accelerator 9.0 is available immediately for existing customers — contact support@electric-cloud.com to get the bits. New users can download ElectricAccelerator Huddle to take it for a test drive, or contact sales@electric-cloud.com for an evaluation of the enterprise edition.


What’s new in GNU make 4.2?

In May 2016 the GNU make team released GNU make 4.2. I’m pleased to see another release, though I find myself underwhelmed by both the timeline and the content of this release. When 4.1 came out just one year after 4.0 I hoped it was a sign that the GNU make project was switching to a more frequent and regular release cycle, as many software projects have done in the last several years. Although it can be a difficult adjustment, this release cadence can have significant benefits like improving user engagement and reducing risk. But with the 4.2 release arriving nineteen long months after 4.1, it seems that GNU make has failed to make the transition.

Of course infrequent releases are not necessarily a problem, as long as the releases contain compelling new functionality. Unfortunately the new features in GNU make 4.2 can charitably be described as “uninspiring” — though I’m sure each enhancement will be handy for the corner case it was designed to address. Of course GNU make is a mature project by any definition, and frankly it does what it does pretty well and has for a very long time — maybe it’s just “done”. But consider this: the past few years have seen something of an explosion in the build tool space, with several new build tools cropping up. Each of the following alternative build tools has had multiple releases in the last year, and each has innovative features that could be adopted by GNU make:

  • gradle, the default build tool for Android apps. Monthly releases. Reports and notifications.
  • bazel, the open-source version of Google’s internal build system. Ten releases already in 2016. Checksum-based up-to-date checks and minimization of test suite executions.
  • ninja, a make-like build tool. Two releases in the last twelve months. Resource pools and unbelievably fast parsing / low overhead.

So, what does GNU make 4.2 have to offer? Read on to see, and let me know in the comments if you disagree with my analysis.

.SHELLSTATUS variable

GNU make has had the $(shell) function for many years. This provides a mechanism by which you can get the result (stdout) of an arbitrary command into a make variable, where you can do whatever you like with it. One curious thing about $(shell) is that it doesn’t care at all whether the command you execute succeeds or not, so if you try to read a non-existent file, for example, with $(shell cat missing.txt), GNU make will simply return an empty string as the result of the shell invocation. In some cases you may want to actually check the exit status of that command, and in GNU make 4.2 you now can, by checking the value of .SHELLSTATUS, a new built-in variable that is automatically set to the exit code of the most recent $(shell) (or != assignment). Here’s a contrived example:

FOO := $(shell exit 1)
ifneq ($(.SHELLSTATUS),0)
$(error shell command failed!)
endif
all: ; @echo done

As you can see, it’s now possible to make your makefile react in whatever manner you deem appropriate when a shell invocation fails. Be advised, however: if you find yourself doing this, it may be an indication that your makefile is poorly written — almost every use of $(shell) is better handled by creating an actual rule to do whatever you were going to do with $(shell).
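For example, instead of slurping a file at parse time, you can usually make the file a prerequisite and let make schedule and error-check the work itself. A minimal sketch, with manifest.txt and output.txt as hypothetical names:

# Fragile: runs at parse time on every invocation, and a failure is
# silent unless you remember to check .SHELLSTATUS afterwards.
DATA := $(shell cat manifest.txt)

# More robust: make the file a prerequisite, so make itself reports a
# missing input at exactly the point where it is needed.
output.txt: manifest.txt
	cp $< $@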

Read files with $(file)

The $(file) function was added to GNU make in the 4.0 release, in order to enable the creation of files directly from make — quite handy for those cases in which the content you want to write is so long it exceeds command-line length limits on your system. In 4.2 the $(file) function was extended so that you can use it to read files in addition to writing files. For example, SRCS := $(file <sourcelist.txt) would capture the content of the file sourcelist.txt in the variable SRCS, less the final newline in the file, if any (that last bit is for consistency with the $(shell) function).
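Here’s a minimal sketch, assuming a one-line file named VERSION in the current directory:

# Read the release version into a variable without spawning a shell.
VERSION := $(file <VERSION)
all: ; @echo version is $(VERSION)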

Improved error reporting

GNU make 4.2 includes a small but very useful improvement in error reporting: previously when make encountered an error while executing a recipe, it would report only the name of the target being built, such as make: *** [all] Error 1. Starting with 4.2, this error message includes the makefile and line number of the specific command that produced the error: make: *** [Makefile:6: all] Error 1. This should make it much easier to debug large, complex builds — especially anything that uses double-colon rules to compose functionality from many fragments in distinct makefiles.

Bug fixes

In addition to the modest enhancements described above, the 4.2 release includes about three dozen other bug fixes. A glance at the resolution dates on those reveals that sometimes months passed with no updates. This makes me wonder why they didn’t cut a release at those points, even if it were just for bug fixes. My guess is that the project is trapped, in a sense: because the interval between releases is so long there’s a sense that each release has to be “perfect”, and because there’s an attempt to ensure each release is “perfect”, the interval between releases must be very long. Contrast this with a more agile approach, which can tolerate imperfect releases because the next release is just around the corner anyway. Combined with an ever expanding automated regression test suite it’s possible to gradually increase the bar for release quality, such that in fact the likelihood of a bad release goes down when compared with a project that has a long release cycle and is dependent on mostly manual testing.

GNU make isn’t going to go away any time soon, but I think the writing is on the wall: if it doesn’t start innovating again, developers will inevitably migrate to other build tools that do.


What’s new in GNU make 4.1?

October 2014 saw the release of GNU make 4.1. Although this release doesn’t have any really remarkable new features, it is notable because it comes just one year after the 4.0 release — the shortest time between releases in more than a decade. Hopefully, this is the start of a new era of more frequent, smaller releases for this venerable project, one of the oldest still-active projects in the GNU suite. Read on for notes about the new features in GNU make 4.1.

MAKE_TERMOUT and MAKE_TERMERR

Starting with 4.1, GNU make defines two additional variables: MAKE_TERMOUT and MAKE_TERMERR. These are set to non-empty values if make believes stdout/stderr is attached to a terminal (rather than a file). This enables users to solve a problem introduced by the output synchronization feature that was added in GNU make 4.0: when output synchronization is enabled, all child processes in fact write to a temporary file, even though in effect they are writing to the console. In other words, the implementation details of output synchronization may interfere with behaviors in child processes, such as output colorization, which require a terminal for correct operation. If MAKE_TERMOUT or MAKE_TERMERR is set, then the user may explicitly direct such commands to maintain colorized output despite the fact that they appear to be writing to a file.
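For example, here is one way you might take advantage of it. This is a sketch that assumes gcc 4.9 or later, where -fdiagnostics-color=always forces colorized diagnostics even when stderr is not a terminal:

# If make itself is attached to a terminal, keep gcc's colors on even
# though output synchronization redirects each job to a temporary file.
ifneq ($(MAKE_TERMERR),)
CFLAGS += -fdiagnostics-color=always
endif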

Enhanced $(file) function

The $(file) function was added in GNU make 4.0 to enable writing to files from a makefile without having to invoke a second process to do so. For example, where previously you had to do something like $(shell echo hello > myfile), now you can instead use $(file > myfile,hello). In theory this is more efficient, since it avoids creating another process, and it enables the user to easily write large blocks of text which would exceed command-line length limitations on some platforms.
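The classic application is a response file for the linker. Here’s a sketch, assuming a toolchain that accepts @file response-file arguments; the object list is a stand-in for one too long for the command line:

OBJS := a.o b.o c.o

app: $(OBJS)
	$(file > $@.rsp,$^)
	$(CC) -o $@ @$@.rsp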

In GNU make 4.1, the $(file) function has been enhanced such that the text to be written may be omitted from the function call. This allows $(file) to work as a sort of “poor man’s” replacement for touch, although having reviewed the bug report that resulted in this change, I think this is more an “enhancement of convenience” than a deliberate attempt to evolve the program. Of course I have to give a shout out to my friend Tim Murphy, who filed the bug report that led to this enhancement — nice work, Tim!

Relaxed constraints for mixing explicit and implicit rules

The final feature change in GNU make 4.1 is that make will emit a regular error rather than a fatal error (which terminates the build) when both explicit and pattern targets are specified as outputs of a rule, like this:

foo bar%: baz

This is an interesting change mostly for the high level of drama surrounding it. That bit of syntax is clearly illegal — in fact, if the pattern target is listed first rather than the explicit target, GNU make has long identified this as invalid syntax, terminating the parse with *** mixed implicit and normal rules. Stop. Unfortunately, due to a defect in older versions of GNU make this construct is not prohibited when the explicit rule is named first.

In 3.82, the GNU make maintainers fixed the defect: whether or not the explicit target is named first, GNU make would identify the invalid syntax and terminate parsing. Everything was fine for about a year, and then? People flipped out. As it turns out, this construct is used by a prominent open source project: the Linux kernel. The offending syntax had been eliminated from the main development branch shortly after the 3.82 release, but third-party developers suddenly found themselves unable to build legacy versions of the kernel with the latest release of GNU make. A bug report was filed and generated 21 responses, when the average GNU make bug report has only 3. Ultimately, the maintainers relented by reducing the severity to a non-fatal error for the 4.1 release — but with a stern message that this will likely become a fatal error again in a future release.

Bug fixes and thoughts

In addition to the bigger items identified above, the 4.1 release includes about two dozen other bug fixes. Overall, this release feels like a minor one — as often happens when release frequency increases, the individual releases become less interesting. From an agile/continuous delivery standpoint, that’s exactly what you want. But I’ve found that it is also difficult for a team that’s accustomed to less frequent releases with larger payloads to transition to smaller, more frequent releases while still incorporating large changes that take longer than one release to implement. Of course, one point does not make a line — that is, we can’t tell from this release alone whether the intention is to switch to a more frequent release cadence, or whether this release is an exception. If they are trying to increase the frequency, I think it will be very interesting to see how the GNU make development team adapts to the new cadence. Regardless, I’d like to congratulate the team for this release and I look forward to seeing what comes next.


HOWTO: Intro to GNU make variables

One thing that many GNU make users struggle with is predicting the value of a variable. And it’s no wonder, with the way make syntax freely mingles text intended for two very distinct phases of execution, and with two “flavors” of variables with very different semantics — except, that is, when the one seems to behave like the other. In this article I’ll run you through the fundamentals of GNU make variables so you can impress your friends (well, your nerdy friends, anyway) with your ability to predict the value of a GNU make variable at social gatherings.

Basics

Let’s start with the basics: a GNU make variable is simply a shorthand reference for another string of text. Using variables enables you to create more flexible makefiles that are also easier to read and modify. To create a variable, just assign a value to a name:

CFLAGS=-g -O2

Later, when GNU make sees a reference to the variable, it will replace the reference with the value of the variable — this is called expanding the variable. A variable reference is just the variable name wrapped in parentheses or curly braces and prefixed with a dollar sign. For example, this simple makefile will print “Hello, world!” by first assigning that text to a variable, then dereferencing the variable and using echo to print the variable’s value:

MSG = Hello, world!
all:
	@echo $(MSG)

Creating variables

NAME = value is just one of many ways to create a variable. In fact there are at least eight ways to create a variable using GNU make syntax, plus there are built-in variables, command-line declarations, and of course environment variables. Here’s a rundown of the ways to create a GNU make variable:

  • MYVAR = abc creates the variable MYVAR if it does not exist, or changes its value if it does. Either way, after this statement is processed, the value of MYVAR will be abc.
  • MYVAR ?= def will create the variable MYVAR with the value def only if MYVAR does not already exist.
  • MYVAR += ghi will create the variable MYVAR with the value ghi if MYVAR does not already exist, or it will append ghi to MYVAR if it does already exist.
  • MYVAR := jkl creates MYVAR if it does not exist, or changes its value if it does. This variation is just like the first, except that it creates a so-called simple variable, instead of a recursive variable — more on that in a minute.

In addition to the various assignment operators, you can create and modify variables using the define directive — handy if you want to create a variable with a multi-line value. Besides that, the define directive is equivalent to the normal VAR=VALUE assignment.

define MYVAR
abc
def
endef

If you’re using GNU make 3.82 or later, you can add assignment operators to the define directive to modify the intent. For example, to append a multi-line value to an existing variable:

define MYVAR +=
abc
def
endef

But there are still more ways to create variables in GNU make:

  • Environment variables are automatically created as GNU make variables when GNU make is invoked.
  • Command-line definitions enable you to create variables at the time you invoke GNU make, like this: gmake MYVAR=123.
  • Built-in variables are automatically created when GNU make starts. For example, GNU make defines a variable named CC which contains the name of the default C compiler (cc) and another named CXX which contains the name of the default C++ compiler (g++).

Variable flavors

Now that you know how to create a GNU make variable and how to dereference one, consider what happens when you reference a variable while creating a second variable. Let’s use a few simple exercises to set the stage. For each, the answer is given on the line following the makefile; try to predict it before you read ahead.

  1. Q1: What will this makefile print?
    ABC = Hello!
    MYVAR = $(ABC)
    all:
    	@echo $(MYVAR)

    A1: Hello!

  2. Q2: What will this makefile print?
    ABC = Hello!
    MYVAR = $(ABC)
    all:
    	@echo $(MYVAR)
    ABC = Goodbye!

    A2: Goodbye!

  3. Q3: What will this makefile print?
    ABC = Hello!
    MYVAR := $(ABC)
    all:
    	@echo $(MYVAR)
    ABC = Goodbye!

    A3: Hello!

Don’t feel bad if you were surprised by some of the answers! This is one of the trickiest aspects of GNU make variables. To really understand the results, you have to wrap your brain around two core GNU make concepts. The first is that there are actually two different flavors of variables in GNU make: recursive, and simple. The difference between the two is in how GNU make handles variable references on the right-hand side of the variable assignment — for brevity I’ll call these “subordinate variables”:

  • With simple variables, subordinate variables are expanded immediately when the assignment is processed. References to subordinate variables are replaced with the value of the subordinate variable at the moment of the assignment. Simple variables are created when you use := in the assignment.
  • With recursive variables, expansion of subordinate variables is deferred until the variable named on the left-hand side of the assignment is itself referenced. That leads to some funny behaviors, because the value of the subordinate variables at the time of the assignment is irrelevant — in fact, the subordinate variables may not even exist at that point! What matters is the value of the subordinate variables when the LHS variable is expanded. Recursive variables are the default flavor, and they’re created when you use a plain = in the assignment.

The second concept is that GNU make processes a makefile in two separate phases, and each phase processes only part of the text of the makefile. The first phase is parsing, during which GNU make interprets all of the text of the makefile that is outside of rule bodies. During parsing, rule bodies are not interpreted — only extracted for processing during the second phase: execution, when GNU make actually starts running the commands to update targets. For purposes of this discussion, that means that the text in rule bodies is not expanded until after all the other text in the makefile has been processed, including variable assignments that physically appear after the rule bodies. In the following makefile, everything except the rule body (the @echo line) is processed during parsing; the rule body itself is processed later, during execution. Again, to put a fine point on it: all of the surrounding text is processed before the rule body:

ABC = Hello!
MYVAR = $(ABC)
all:
	@echo $(MYVAR)
ABC = Goodbye!

Now the examples above should make sense. In Question 2, we created MYVAR as a recursive variable, which means the value of ABC at the time MYVAR is created doesn’t matter. By the time GNU make needs to expand MYVAR, the value of ABC has changed, so that’s what we see in the output.

In Question 3, we created MYVAR as a simple variable, so the value of ABC was captured immediately. Even though the value of ABC changes later, that change doesn’t affect the value of MYVAR.

Target-specific variables

Most variables in GNU make are global: that is, they are shared across all targets in the makefile and expanded the same way for all targets, subject to the rules outlined above. But GNU make also supports target-specific variables: variables given distinct values that are only used when expanding the recipe for a specific target (or its prerequisites).

Syntactically, target-specific variables look like a mashup of normal variable definitions, using =, :=, etc.; and prerequisite declarations. For example, foo: ABC = 123 creates a target-specific definition of ABC for the target foo. Even if ABC has already been defined as a global variable with a different value, this target-specific definition will take precedence when expanding the recipe for foo. Consider this simple makefile:

ABC = Hello!
all: foo bar
foo:
	@echo $(ABC)
bar: ABC = Goodbye!
bar:
	@echo $(ABC)

At first glance you might expect this makefile to print “Goodbye!” twice — after all, ABC is redefined with the value “Goodbye!” before the commands for foo are expanded. But because the redefinition is target-specific, it only applies to bar. Thus, this makefile will print one “Hello!” and one “Goodbye!”.

As noted, target-specific variables are inherited from a target to its prereqs — for example, the following makefile will print “Surprise!”, because bar inherits the target-specific value for ABC from foo:

ABC = Normal.
foo: ABC = Surprise!
foo: bar
bar:
	@echo $(ABC)

You can do some neat tricks with this, but I urge you not to rely on the behavior, because it doesn’t always work the way you might think. In particular, if a target is listed as a prereq for multiple other targets, each of which has a different target-specific value for some variable, the actual value used for the prereq may vary depending on which files were out-of-date and the execution order of the targets. As a quick example, compare the output of the previous makefile when invoked with gmake foo and when invoked with gmake bar. In the latter case, the target-specific value from foo is never applied, because foo itself was not processed. With GNU make 3.82 or later, you can prevent this inheritance by using the private modifier, as in foo: private ABC = Surprise!.
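Here’s a sketch of private in action. This makefile prints “bar: Normal.” followed by “foo: Surprise!”, because bar no longer inherits foo’s target-specific value:

ABC = Normal.
foo: private ABC = Surprise!
foo: bar
	@echo foo: $(ABC)
bar:
	@echo bar: $(ABC)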

Finally, note that target-specific variables may be applied to patterns. For example, a line reading %.o: ABC=123 creates a target-specific variable for all targets matching the pattern %.o.
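For instance, in this sketch the pattern-specific assignment adds a debugging flag for any target matching %.o, on top of the global CFLAGS. Running gmake foo.o prints cc -O2 -g -c foo.c:

CFLAGS = -O2
%.o: CFLAGS += -g
foo.o:
	@echo cc $(CFLAGS) -c foo.c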

Conclusion

If you’ve made it this far, you now know just about everything there is to know about GNU make variables. Congratulations! I hope this information will serve you well.

Questions or comments? Use the form below or hit me up on Twitter @emelski.


The ElectricAccelerator 7.2 “Ship It!” Award

Naturally with the release of ElectricAccelerator 7.2 a few weeks ago it’s time for another Accelerator “Ship It!” award. In keeping with our tradition, I gave each team member a LEGO figure that symbolized the release to me in some way, along with a custom trading card giving the vital details: version, release date, and key features. Like a baseball card, the back is filled with a team roster and release statistics.

There are some great improvements in Accelerator 7.2 but there’s no particular unifying theme, so it was quite a challenge to choose a suitable minifig. One thing that stood out is that only about three weeks elapsed between the time management asked engineering to create a 7.2 release and the time that development was complete. At the time we were actually in the midst of development on another release entirely, with a different set of new features. The 7.2 release was very much a, “Hey, couldn’t you also cut a release right now while you’re at it?” And we did. Maybe it’s not as impressive as those guys that can cut a release every minute of every day, but for a team that usually does releases on a six-month cadence, a 3-week turnaround sounds like continuous delivery to me.

One thing enabled us to turn around the release that quickly: our code is (nearly) always shippable. That’s what led me to the minifig for this release: the sea captain, who’s always ready to “ship out” on short notice. Here’s the trading card that accompanied the figure:

Accelerator 7.2 "Ship It!" Card Front - click for larger version

Accelerator 7.2 “Ship It!” Card Front – click for full-size version

Accelerator 7.2 "Ship It!" Card Back - click for larger version

Accelerator 7.2 “Ship It!” Card Back – click for full-size version

Like the 7.1 card, the back of the 7.2 card incorporates stats for the current release, contextualized by stats for several previous releases:

  • Number of days in development. This is just the number of days since the previous feature release — it is assumed that whatever features are in the new release, we started working on them more-or-less after the last release went out.
  • JIRA issues closed.
  • Total KLOC. This metric gives the total size of the Accelerator code base in thousands of lines of code, as measured with the excellent Count Lines of Code utility by Al Danial. This measurement excludes comments and whitespace.
  • Change in KLOC. This is simply the arithmetic difference between the total KLOC for each release and its predecessor.

As always, my sincere gratitude goes to everybody on the Accelerator team, without whom this release would not have been. Thank you!


What’s new in ElectricAccelerator 7.2?

Wow, time flies! Another six months has come and gone, which means it’s time for another ElectricAccelerator feature release. Right on cue, ElectricAccelerator 7.2 dropped a couple weeks back on April 17, 2014. There’s no unifying theme to this release — actually we’re in the middle of a much more ambitious project that I can’t say much about quite yet, but over the last several months we’ve made a number of improvements to Accelerator core functionality, and we’re eager to get those out to users. Thus we have the 7.2 release, with the following marquee features: dramatic Linux performance improvements for certain use cases, a key enhancement to our parse avoidance feature to improve accuracy, and expanded Linux platform support. Read on for the details.

Linux performance improvements

Accelerator 7.2 incorporates two performance improvements for Linux-based builds. The first is a redesign of the integration between the Electric File System (EFS) and the Linux kernel, which reduces lock contention in the EFS. Consequently, any build job that makes concurrent accesses to the filesystem should see some performance improvement. In one example, a build that executed two tar processes simultaneously in one job saw runtime drop from 11 minutes with Accelerator 7.1 to just 6 minutes with Accelerator 7.2, nearly 2x faster!

The second improvement is full support for the Linux d_type extension to the readdir() system call. On most Unix and Unix-like systems, the readdir() system call only gives the application programmer a couple pieces of information: the names of the files in a directory, and the inode number for those files. On Linux, filesystems may also include file type information in the results, which enables programs to operate more efficiently in some cases, as they can avoid the overhead of an additional stat() call to get the file type. On a local filesystem that optimization is interesting but not necessarily game-changing; but with a distributed network filesystem like the EFS, that optimization can result in enormous improvements. In our benchmarks we saw jobs using find to scan large directory structures execute nearly 9x faster with Accelerator 7.2 versus Accelerator 7.1.

Parse avoidance update

For large builds with many or complicated makefiles, Accelerator’s parse avoidance feature is a game-changer, dramatically reducing the time necessary to read and interpret makefiles at the start of a build. On the Android KitKat open-source build, parse avoidance reduces a 4 minute parse job to about 5 seconds — nearly 50x faster! Since its introduction in Accelerator 7.0, parse avoidance has delivered jaw-dropping improvements like that in a wide variety of builds.

But use of this feature has been problematic in one specific use case: makefiles that use wildcards in prerequisite lists, with either $(wildcard) or $(shell). In certain circumstances this makefile anti-pattern could cause emake to produce “false positives” from the parse avoidance cache, such that emake would incorrectly use a previously cached parse result when it should have instead reparsed the makefile. In Accelerator 7.2 we’ve extended the #pragma cache syntax so that you can inform emake of the wildcard patterns to consider when determining cache suitability. This will enable even more users to enjoy the benefits of parse avoidance, without sacrificing reliability or performance. Usage instructions can be found in the Electric Make User’s Guide.

New platform support

Finally, with Accelerator 7.2 we’ve further expanded our already sweeping platform support to include RedHat Enterprise Linux 6.5, Ubuntu 13.04 and Windows Server 2012. This may seem like a modest increment, but I’m particularly excited about this update not for the what but for the who: you see, this is the first time that somebody other than myself made all the updates needed to support a new version of Linux, start to finish. With another set of hands to do that work we should be able to add support for new Linux platforms much more quickly in the future, which is welcome news indeed (thanks, Tim)!

What’s next?

Years ago, I thought that we would eventually get to the point that Accelerator was “done” and we’d have nothing left to do. How young and foolish I was! In reality, it seems that the TODO list only gets longer and longer. We’re still working hard on the “buddy cluster” concept, as well as Bitbake and ninja integration. And of course, we’re always working to improve performance — more on that in a future post.

ElectricAccelerator 7.2 is available immediately for existing customers. Contact support@electric-cloud.com to get the bits. New users can contact sales@electric-cloud.com for an evaluation.


The Twelve Days of Christmas, GNU make style

Well, it’s Christmas Day in the States today, and while we’re all recovering from the gift-opening festivities, I thought this would be the perfect time for a bit of fun with GNU make. And what better subject matter than the classic Christmas carol “The Twelve Days of Christmas”? Its repetitive structure is perfect for demonstrating how to use several of GNU make’s built-in functions for iteration, selection and sorting. This simple makefile prints the complete lyrics to the song:

 1  L01=Twelve drummers drumming,
 2  L02=Eleven pipers piping,
 3  L03=Ten lords-a-leaping,
 4  L04=Nine ladies dancing,
 5  L05=Eight maids-a-milking,
 6  L06=Seven swans-a-swimming,
 7  L07=Six geese-a-laying,
 8  L08=Five golden rings,
 9  L09=Four calling birds,
10  L10=Three french hens,
11  L11=Two turtle doves, and
12  L12=A partridge in a pear tree!
13  LINES=12 11 10 09 08 07 06 05 04 03 02 01
14  DAYS=twelfth eleventh tenth ninth \
15  eighth seventh sixth fifth \
16  fourth third second first
17
18  $(foreach n,$(LINES),\
19  $(if $(X),$(info ),$(eval X=X))\
20  $(info On the $(word $n,$(DAYS)) day of Christmas,)\
21  $(info my true love gave to me)\
22  $(foreach line,$(wordlist $n,12,$(sort $(LINES))),\
23  $(info $(L$(line)))))
24
25  all: ; @:

By count, most of the lines here just declare variables, one for each item mentioned in the song. Note how the items are ordered: the last item added is given the lowest index. That means that to construct each verse we simply enumerate every item in the list, in order, starting with the new item in each verse.

Line 18 is where the real meat of the makefile begins. Here we use GNU make’s foreach function to iterate through the verses. $(foreach) takes three arguments: a name for the iteration variable, a space-separated list of words to assign to the iteration variable in turn, and a body of text to expand repeatedly, once for each word in the list. Here, the list of words is given by LINES, which lists the starting line for each verse, in order — that is, the first verse starts from line 12, the second from line 11, etc. The text to expand on each iteration is all the text on lines 19-23 of the makefile — note the use of backslashes to continue each line to the next.

Line 19 uses several functions to print a blank line before starting the next verse, if we’ve printed a verse already: the $(if) function, which expands its second argument if its first argument is non-empty, and its third argument if its first argument is empty; the $(info) function to print a blank line; and the $(eval) function to set the flag variable. The first time this line is expanded, X does not exist, so it expands to an empty string and the $(if) picks the “else” branch. After that, X has a value, so the $(if) picks the “then” branch.

Lines 20 and 21 again use $(info) to print output — this time the prelude for the verse, like “On the first day of Christmas, my true love gave to me”. The ordinal for each day is pulled from DAYS using the $(word) function, which extracts a specified word, given by its first argument, from the space-separated list given as its second argument. Here we’re using n, the iteration variable from our initial $(foreach) as the selector for $(word).

Line 22 uses $(foreach) again, this time to iterate through the lines in the current verse. We use line as the iteration variable. The list of words is given again by LINES except now we’re using $(sort) to reverse the order, and $(wordlist) to select a subset of the lines. $(wordlist) takes three arguments: the index of the first word in the list to select, the index of the last word to select, and a space-separated list of words to select from. The indices are one-based, not zero-based, and $(wordlist) returns all the words in the given range. The body of this $(foreach) is just line 23, which uses $(info) once more to print the current line of the current verse.

Line 25 has the last bit of funny business in this makefile. We have to include a make rule in the makefile, or GNU make will complain *** No targets. Stop. after printing the lyrics. If we simply declare a rule with no commands, like all:, GNU make will complain Nothing to be done for `all’. Therefore, we define a rule with a single “no-op” command that uses the bash built-in “:” to do nothing, combined with GNU make’s @ prefix to suppress printing the command itself.

And that’s it! Now you’ve got some experience with several of the built-in functions in GNU make — not bad for a Christmas day lark:

  • $(eval) for dynamic interpretation of text as makefile content
  • $(foreach), for iteration
  • $(if), for conditional expansion
  • $(info), for printing output
  • $(sort), for sorting a list
  • $(word), for selecting a single word from a list
  • $(wordlist), for selecting a range of words from a list

Now — where’s that figgy pudding? Merry Christmas!


UPDATE: SCons is Still Really Slow

A while back I posted a series of articles exploring the scalability of SCons, a popular Python-based build tool. In a nutshell, my experiments showed that SCons exhibits roughly quadratic growth in build runtimes as the number of targets increases.

Recently Dirk Baechle attempted to rebut my findings in an entry on the SCons wiki: Why SCons is not slow. I thought Dirk made some credible suggestions that could explain my results, and he did some smart things in his effort to invalidate my results. Unfortunately, his methods were flawed and his conclusions are invalid. My original results still stand: SCons really is slow. In the sections that follow I’ll share my own updated benchmarks and show where Dirk’s analysis went wrong.

Test setup

As before, I used genscons.pl to generate sample builds ranging from 2,000 to 50,000 targets. However, my test system was much beefier this time:

          2013                                              2010
OS        Linux Mint 14 (kernel version 3.5.0-17-generic)   RedHat Desktop 3 (kernel version 2.4.21-58.ELsmp)
CPU       Quad 1.7GHz Intel Core i7, hyperthreaded          Dual 2.4GHz Intel Xeon, hyperthreaded
RAM       16 GB                                             2 GB
HD        SSD                                               (unknown)
SCons     2.3.0                                             1.2.0.r3842
Python    2.7.3 (system default)                            2.6.2

Before running the tests, I rebooted the system to ensure there were no rogue processes consuming memory or CPU. I also forced the CPU cores into “performance” mode to ensure that they ran at their full 1.7GHz speed, rather than at the lower 933MHz they switch to when idle.

Revisiting the original benchmark

I think Dirk had two credible theories to explain the results I obtained in my original tests. First, Dirk wondered if those results may have been the result of virtual memory swapping — my original test system had relatively little RAM, and SCons itself uses a lot of memory. It’s plausible that physical memory was exhausted, forcing the OS to swap memory to disk. As Dirk said, “this would explain the increase of build times” — you bet it would! I don’t remember seeing any indication of memory swapping when I ran these tests originally, but to be honest it was nearly 4 years ago and perhaps my memory is not reliable. To eliminate this possibility, I ran the tests on a system with 16 GB RAM this time. During the tests I ran vmstat 5, which collects memory and swap usage information at five second intervals, and captured the result in a log.

Next, he suggested that I skewed the results by directing SCons to inherit the ambient environment, rather than using SCons’ default “sanitized” environment. That is, he felt I should have used env = Environment() rather than env = Environment(ENV = os.environ). To ensure that this was not a factor, I modified the tests so that they did not inherit the environment. At the same time, I substituted echo for the compiler and other commands, in order to make the tests faster. Besides, I’m not interested in benchmarking the compiler — just SCons! Here’s what my Environment declaration looks like now:

env = Environment(CC = 'echo', AR = 'echo', RANLIB = 'echo')

With these changes in place I reran my benchmarks. As expected, there was no change in the outcome. There is no doubt: SCons does not scale linearly. Instead the growth is polynomial, following an n^1.85 curve. And thanks to the vmstat output we can be certain that there was absolutely no swapping affecting the benchmarks. Here’s a graph of the results, including an n^1.85 curve for comparison — notice that you can barely see that curve because it matches the observed data so well!

SCons full build runtime - click for larger view

For comparison, I used the SCons build log to make a shell script that executes the same series of echo commands. At 50,000 targets, the shell script ran in 1.097s. You read that right: 1.097s. Granted, the shell script doesn’t do stuff like up-to-date checks, etc., but still — of the 3,759s average SCons runtime, 3,758s — 99.97% — is SCons overhead.

I also created a non-recursive Makefile that “builds” the same targets with the same echo commands. This is a more realistic comparison to SCons — after all, nobody would dream of actually controlling a build with a straight-line shell script, but lots of people would use GNU make to do it. With 50,000 targets, GNU make ran for 82.469s — more than 45 times faster than SCons.

What is linear scaling?

If the performance problems are so obvious, why did Dirk fail to see them? Here’s a graph made from his test results:

SCons full build runtime, via D. Baechle - click for full size

Dirk says that this demonstrates “SCons’ linear scaling”. I find this statement baffling, because his data clearly shows that SCons does not scale linearly. It’s simple, really: linear scaling just means that the build time increases by the same amount for each new target you add, regardless of how many targets you already have. Put another way, it means that the difference in build time between 1,000 targets and 2,000 targets is exactly the same as the difference between 10,000 and 11,000 targets, or between 30,000 and 31,000 targets. Or, put yet another way, it means that when you plot the build time versus the number of targets, you should get a straight line with no change in slope at any point. Now you tell me: does that describe Dirk’s graph?
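In symbols: linear scaling means the build time fits T(n) = an + b for constants a and b, so the incremental cost of 1,000 additional targets, T(n + 1000) - T(n) = 1000a, is the same regardless of how large n already is.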

Here’s another version of that graph, this time augmented with a couple additional lines that show what the plot would look like if SCons were truly scaling linearly. The first projection is based on the original graph from 2,500 to 4,500 targets — that is, if we assume that SCons scales linearly and that the increase in build time between 2,500 and 4,500 targets is representative of the cost to add 2,000 more targets, then this line shows us how we should expect the build time to increase. Similarly, the second projection is based on the original graph between 4,500 and 8,500 targets. You can easily see that the actual data does not match either projection. Furthermore you can see that the slope of these projections is increasing:

SCons full build runtime with linear projections, via D. Baechle - click for full size

This shows the importance of testing at large scale when you’re trying to characterize the scalability of a system from empirical data. It can be difficult to differentiate polynomial from logarithmic or linear at low scales, especially once you incorporate the constant factors — polynomial algorithms can sometimes even give better absolute performance for small inputs than linear algorithms! It’s not until you plot enough data points at large enough values, as I’ve done, that it becomes easy to see and identify the curve.

What does profiling tell us?

Next, Dirk reran some of his tests under a profiler, on the very reasonable assumption that if there was a performance problem to be found, it would manifest in the profiling data — surely at least one function would demonstrate a larger-than-expected growth in runtime. Dirk only shared profiling data for two runs, both incremental builds, at 8,500 and 16,500 targets. That’s unfortunate for a couple reasons. First, the performance problem is less apparent on incremental builds than on full builds. Second, with only two datapoints it is literally not possible to determine whether growth is linear or polynomial. The results of Dirk’s profiling were negative: he found no “significant difference or increase” in any function.

Fortunately it’s easy to run this experiment myself. Dirk used cProfile, which is built-in to Python. To profile a Python script you can inject cProfile from the command-line, like this: python -m cProfile scons. Just before Python exits, cProfile dumps timing data for every function invoked during the run. I ran several full builds with the profiler enabled, from 2,000 to 20,000 targets. Then I sorted the profiling data by function internal time (time spent in the function exclusively, not in its descendents). In every run, the same two functions appeared at the top of the list: posix.waitpid and posix.fork. To be honest this was a surprise to me — previously I believed the problem was in SCons’ Taskmaster implementation. But I can’t really argue with the data. It makes sense that SCons would spend most of its time running and waiting for child processes to execute, and even that the amount of time spent in these functions would increase as the number of child processes increases. But look at the growth in runtimes in these two functions:

SCons full build function time, top two functions - click for full size

Like the overall build time, these curves are obviously non-linear. Armed with this knowledge, I went back to Dirk’s profiling data. To my surprise, posix.waitpid and posix.fork don’t even appear in Dirk’s data. On closer inspection, his data seems to include only a subset of all functions — about 600 functions, whereas my profiling data contains more than 1,500. I cannot explain this — perhaps Dirk filtered the results to exclude functions that are part of the Python library, assuming that the problem must be in SCons’ own code rather than in the library on which it is built.

This demonstrates a second fundamental principle of performance analysis: make sure that you consider all the data. Programmers’ intuition about performance problems is notoriously bad — even mine! — which is why it’s important to measure before acting. But measuring won’t help if you’re missing critical data or if you discard part of the data before doing any analysis.

Conclusions

On the surface, performance analysis seems like it should be simple: start a timer, run some code, stop the timer. Done correctly, performance analysis can illuminate the dark corners of your application’s performance. Done incorrectly — and there are many ways to do it incorrectly — it can lead you on a wild goose chase and cause you to squander resources fixing the wrong problems.

Dirk Baechle had good intentions when he set out to analyze SCons performance, but he made some mistakes in his process that led him to an erroneous conclusion. First, he didn’t run enough large-scale tests to really see the performance problem. Second, he filtered his experimental data in a way that obscured the existence of the problem. But perhaps his worst mistake was to start with a conclusion — that there is no performance problem — and then look for data to support it, rather than starting with the data and letting it impartially guide him to an evidence-based conclusion.

To me the evidence seems indisputable: SCons exhibits roughly quadratic growth in runtimes as the number of build targets increases, rendering it unusable for large-scale software development (tens of thousands of build outputs). There is no evidence that this is a result of virtual memory swapping. Profiling suggests a possible pair of culprits in posix.waitpid and posix.fork. I leave it to Dirk and the SCons team to investigate further; in the meantime, you can find my test harness and test results in my GitHub repo. If you can see a flaw in my methodology, sound off in the comments!


Halloween 2013 haunted graveyard

I know it’s a bit late for a retrospective on my annual Halloween tradition — the haunted graveyard. But there were a couple additions this year that I thought were worth mentioning, and I have some really excellent photos thanks to the efforts of a good friend so I figured “Better late than never!” For those who haven’t seen my previous articles about the graveyard, let me offer a brief backstory:

After my wife and I inherited her mother’s house several years ago, it became our responsibility to host the annual family Halloween party. Originally the party was intended for my mother-in-law’s adult children and their spouses, but once they started having kids of their own, the party shifted gears. It’s been primarily targeted at the kids as long as I’ve been involved. One of the marquee attractions is the haunted graveyard, where our backyard is transformed from a normal suburban plot into a spooky graveyard replete with zombies, ghosts and other monsters. As the man of the house, and because I have a bit of a macabre streak, it falls to me to design and … execute … the decorations. Every year the graveyard gets a little bit more elaborate as I snatch up more characters at the post-Halloween sales and devise better layouts and designs for the graveyard.

As in previous years, I used a short two-foot tall wooden fence to establish a perimeter for the graveyard. This serves two purposes. First, it provides a pathway for visitors to follow, so they aren’t just meandering through the graveyard. That allows me to better control the experience and ensure that people don’t approach characters from the wrong side. Second, it gives a little protection to the characters, to discourage children from physically abusing the decorations. You can see the fencing here (as usual, click on any of these pictures for a larger version):

2013 Graveyard Fence

Along with the fence you can see one of the architectural improvements in the graveyard: a dividing wall down the middle of the area. This serves to block the view of the characters in the second half of the graveyard while you are walking down the first half. I fashioned this out of several pieces of 1 inch PVC pipe and PVC pipe connectors at a total cost of about $20 at the local hardware store. PVC’s a great choice for this because it’s cheap, lightweight and easy to work with — all you need is a hacksaw. It’s also easy to set up and tear down. This diagram shows the dimensions and parts I used to build the frame:

2013 Graveyard - Partition

The curtain is made from two black plastic tablecloths, the type you can find at a party supply store around Halloween time. The ends are folded over and stapled together to make sleeves into which the top and bottom PVC pipes can slide. I think the tablecloths cost about $8 each. To be honest the whole thing looks pretty cheap in the daylight, but once it gets dark it doesn’t matter. Next year I’ll probably blacken the PVC with some spraypaint. If I can find some cheap cloth, I’ll redo the curtains as well. If you do build something like this yourself, make sure that the feet are pretty wide, and put something heavy on each to hold it down. Otherwise one good gust of wind will knock your wall right over!

As for new characters this year, I have this creepy-as-hell “deady bear”. For such a small guy, he is surprisingly disturbing. When activated, his head turns side-to-side, one eyeball lights up, he stabs himself with a knife, and he makes seriously unsettling noises. Like most of the characters he’s sound activated:

2013 Graveyard - Deady Bear

I’ve also got this “zombie barrel”, who lights up, makes spooky sounds and rises up out of his barrel when activated. To be honest, although he’s big and was expensive, he’s not one of my favorite characters. He’s difficult to set up, and he moves too slowly to be really scary:

2013 Graveyard - Zombie Barrel

Unfortunately I don’t have a picture of the third new character — this black jumping spider. This guy is great because he moves quickly, and he’s hard to see since he’s black and starts at ground level. It’s very startling — perfect for somebody coming around a corner.

Besides the partition and the new characters, there was one other upgrade that I’m really pleased with: the flying phantasm. I’ve actually had this character for a couple of years, but usually he just hangs “lifelessly” over the path:

2013 Graveyard - Flying Phantasm

This year I ran a 1/4 inch rope from my second-story roof to the fence at the edge of our property, then attached the ghost to a cheap pulley that I hung on the line:

2013 Graveyard - Phantasm Pulley

Once the line was taut, the ghost could “fly” down the line across the pathway, low enough that his tattered robe would brush against the head and shoulders of anybody walking by. Triggering the ghost was about as low-tech as it gets: somebody would watch from a second floor window behind the ghost and release him at suitable moments; a bit of string tied to his back allowed my cohort to pull him back up in preparation for the next run. After dark, the flying phantasm was a huge hit, even scaring some of the adults who went through the graveyard! Here’s a video of the daytime test run; you can see where I’ve got a bit of extra rope tied around the line to serve as a stop so that he doesn’t go too far:

That’s about it for new features this year. Overall I think this goes down as another success, but there are some things I’d like to improve for next year:

  • Get more foot activation pads! I cannot stress enough how critical these are. The sound activation on most of these characters just does not work very well, especially when people are creeping silently through (because they are too scared to make much noise!). One trick though: make sure the pads are covered up, otherwise they are a huge tip off that something is coming!
  • Improve the lighting! I have simultaneously too much and not enough light. The colored floods I’ve been using are too bright, which makes things less scary. But they also don’t provide enough light in the right places. For example, it was hard to see the “deady bear”. Next year I’ll use dimmer bulbs, and maybe get some of those little “hockey puck” style LED lights to provide target “up lighting” for specific characters. I’d love to hang one dim, naked bulb over the whole scene as well, that could just swing back and forth slowly — I think that would be really creepy looking.
  • Don’t forget the soundtrack! I have one of those “scary halloween sounds” albums that I’ve played in the past to good effect. I completely forgot to set that up this year.

Hope you enjoyed this post mortem! To wrap up, here are some pictures of the graveyard after dark — credit goes to my good friend Tim Murphy, who took most of the pictures used in this post, and who was a tremendous help in setting up the graveyard. See you next Halloween!

2013 Graveyard - Wide Angle

2013 Graveyard - Open Grave

2013 Graveyard - Orange Glow

2013 Graveyard - Spider

2013 Graveyard - Green Glow

2013 Graveyard - Zombie

2013 Graveyard - Zombie Barrel Up


The ElectricAccelerator 7.1 “Ship It!” Award

Well, it took a lot longer than I’d like, but at last I can reveal the Accelerator 7.1 “Ship It!” award. This is the fifth time I’ve commemorated our releases in this fashion, which I think is pretty cool itself.

Since this release again focused on performance, I picked a daring old-timey airplane pilot — the sort of guy you might have found behind the controls of a Sopwith Camel, with a maximum speed of about 115mph. Here’s the trading card that accompanied the figure:

Accelerator 7.1 "Ship It!" Card Front - click for larger version

Accelerator 7.1 “Ship It!” Card Front – click for larger version

Accelerator 7.1 "Ship It!" Card Back - click for larger version

Accelerator 7.1 “Ship It!” Card Back – click for larger version

I included release metrics again, but where the 7.0 card showed just 10 data points, the 7.1 card packs in a whopping 48 by including data for the 12 most recent releases across four categories:

  • Number of days in development.
  • JIRA issues closed.
  • Total KLOC. This metric gives the total size of the Accelerator code base in thousands of lines of code, as measured with the excellent Count Lines of Code utility by Al Danial. This measurement excludes comments and whitespace.
  • Change in KLOC. This is simply the arithmetic difference between the total KLOC for each release and its predecessor.

Again, my sincere gratitude goes to everybody on the Accelerator team. Well done and thank you!
