The Twelve Days of Christmas, GNU make style

Well, it’s Christmas Day in the States today, and while we’re all recovering from the gift-opening festivities, I thought this would be the perfect time for a bit of fun with GNU make. And what better subject matter than the classic Christmas carol “The Twelve Days of Christmas”? Its repetitive structure is perfect for demonstrating how to use several of GNU make’s built-in functions for iteration, selection and sorting. This simple makefile prints the complete lyrics to the song:

 1  L01=Twelve drummers drumming,
 2  L02=Eleven pipers piping,
 3  L03=Ten lords-a-leaping,
 4  L04=Nine ladies dancing,
 5  L05=Eight maids-a-milking,
 6  L06=Seven swans-a-swimming,
 7  L07=Six geese-a-laying,
 8  L08=Five golden rings,
 9  L09=Four calling birds,
10  L10=Three french hens,
11  L11=Two turtle doves, and
12  L12=A partridge in a pear tree!
13
14  LINES=12 11 10 09 08 07 06 05 04 03 02 01
15  DAYS=twelfth eleventh tenth ninth \
16  eighth seventh sixth fifth \
17  fourth third second first
18  $(foreach n,$(LINES),\
19  $(if $(X),$(info ),$(eval X=X))\
20  $(info On the $(word $n,$(DAYS)) day of Christmas,)\
21  $(info my true love gave to me)\
22  $(foreach line,$(wordlist $n,12,$(sort $(LINES))),\
23  $(info $(L$(line)))))
24
25  all: ; @:

By count, most of the lines here just declare variables, one for each item mentioned in the song. Note how the items are ordered: the last item added is given the lowest index. That means that to construct each verse we simply enumerate every item in the list, in order, starting with the new item in each verse.

Line 18 is where the real meat of the makefile begins. Here we use GNU make’s foreach function to iterate through the verses. $(foreach) takes three arguments: a name for the iteration variable, a space-separated list of words to assign to the iteration variable in turn, and a body of text to expand repeatedly, once for each word in the list. Here, the list of words is given by LINES, which lists the starting line for each verse, in order — that is, the first verse starts from line 12, the second from line 11, etc. The text to expand on each iteration is all the text on lines 19-23 of the makefile — note the use of backslashes to continue each line to the next.
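
If $(foreach) is new to you, here is a minimal standalone sketch (separate from the makefile above, with made-up variable names) showing how the three arguments fit together:

# Expand the body once per word in NAMES, printing one line per word.
NAMES := alice bob carol
$(foreach name,$(NAMES),$(info Hello $(name)))
# Prints "Hello alice", "Hello bob", "Hello carol", one per line.

all: ; @: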

Line 19 uses several functions to print a blank line before starting the next verse, if we’ve printed a verse already: the $(if) function, which expands its second argument if its first argument is non-empty, and its third argument if its first argument is empty; the $(info) function to print a blank line; and the $(eval) function to set the flag variable. The first time this line is expanded, X does not exist, so it expands to an empty string and the $(if) picks the “else” branch. After that, X has a value, so the $(if) picks the “then” branch.
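
The same first-iteration trick looks like this in isolation (a standalone sketch in which a hypothetical variable SEEN plays the role of X):

WORDS := one two three

# SEEN is empty on the first pass, so $(if) takes its "else" branch and
# $(eval) defines it; on every later pass the "then" branch prints "---".
$(foreach w,$(WORDS),\
$(if $(SEEN),$(info ---),$(eval SEEN := yes))\
$(info $(w)))

all: ; @: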

Lines 20 and 21 again use $(info) to print output — this time the prelude for the verse, like “On the first day of Christmas, my true love gave to me”. The ordinal for each day is pulled from DAYS using the $(word) function, which extracts a specified word, given by its first argument, from the space-separated list given as its second argument. Here we’re using n, the iteration variable from our initial $(foreach), as the selector for $(word).
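
In isolation, $(word) behaves like this (another standalone sketch):

ORDINALS := first second third fourth

# $(word n,list) returns the n'th word of list; indices are one-based.
# This prints "third".
$(info $(word 3,$(ORDINALS)))

all: ; @: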

Line 22 uses $(foreach) again, this time to iterate through the lines in the current verse. We use line as the iteration variable. The list of words is given again by LINES, except now we’re using $(sort) to reverse the order, and $(wordlist) to select a subset of the lines. ($(sort) orders words lexically, which is why the line numbers are zero-padded: “09” sorts before “10”, but “9” would not.) $(wordlist) takes three arguments: the index of the first word in the list to select, the index of the last word to select, and a space-separated list of words to select from. The indices are one-based, not zero-based, and $(wordlist) returns all the words in the given range. The body of this $(foreach) is just line 23, which uses $(info) once more to print the current line of the current verse.
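
And here is a standalone sketch of $(sort) and $(wordlist) together, using the same zero-padded numbering as the makefile:

NUMS := 12 11 10 09 08 07 06 05 04 03 02 01

# Prints "01 02 03 04 05 06 07 08 09 10 11 12"
$(info $(sort $(NUMS)))

# Prints "10 11 12": words 10 through 12 (inclusive) of the sorted list
$(info $(wordlist 10,12,$(sort $(NUMS))))

all: ; @: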

Line 25 has the last bit of funny business in this makefile. We have to include a make rule in the makefile, or GNU make will complain “*** No targets. Stop.” after printing the lyrics. If we simply declare a rule with no commands, like all:, GNU make will complain “Nothing to be done for `all'.” Therefore, we define a rule with a single “no-op” command that uses the shell built-in “:” to do nothing, combined with GNU make’s @ prefix to suppress printing the command itself.

And that’s it! Now you’ve got some experience with several of the built-in functions in GNU make — not bad for a Christmas day lark:

  • $(eval), for dynamic interpretation of text as makefile content
  • $(foreach), for iteration
  • $(if), for conditional expansion
  • $(info), for printing output
  • $(sort), for sorting a list
  • $(word), for selecting a single word from a list
  • $(wordlist), for selecting a range of words from a list

Now — where’s that figgy pudding? Merry Christmas!

UPDATE: SCons is Still Really Slow

A while back I posted a series of articles exploring the scalability of SCons, a popular Python-based build tool. In a nutshell, my experiments showed that SCons exhibits roughly quadratic growth in build runtimes as the number of targets increases.

Recently, Dirk Baechle attempted to rebut my findings in an entry on the SCons wiki: Why SCons is not slow. I thought Dirk made some credible suggestions that could explain my results, and he did some smart things in his effort to invalidate them. Unfortunately, his methods were flawed and his conclusions are invalid. My original results still stand: SCons really is slow. In the sections that follow I’ll share my own updated benchmarks and show where Dirk’s analysis went wrong.

Test setup

As before, I used genscons.pl to generate sample builds ranging from 2,000 to 50,000 targets. However, my test system was much beefier this time:

          2013                                              2010
OS        Linux Mint 14 (kernel version 3.5.0-17-generic)   RedHat Desktop 3 (kernel version 2.4.21-58.ELsmp)
CPU       Quad 1.7GHz Intel Core i7, hyperthreaded          Dual 2.4GHz Intel Xeon, hyperthreaded
RAM       16 GB                                             2 GB
HD        SSD                                               (unknown)
SCons     2.3.0                                             1.2.0.r3842
Python    2.7.3 (system default)                            2.6.2

Before running the tests, I rebooted the system to ensure there were no rogue processes consuming memory or CPU. I also forced the CPU cores into “performance” mode to ensure that they ran at their full 1.7GHz speed, rather than at the lower 933MHz they switch to when idle.

Revisiting the original benchmark

I think Dirk had two credible theories to explain the results I obtained in my original tests. First, Dirk wondered if those results might have been caused by virtual memory swapping — my original test system had relatively little RAM, and SCons itself uses a lot of memory. It’s plausible that physical memory was exhausted, forcing the OS to swap memory to disk. As Dirk said, “this would explain the increase of build times” — you bet it would! I don’t remember seeing any indication of memory swapping when I ran these tests originally, but to be honest it was nearly 4 years ago and perhaps my memory is not reliable. To eliminate this possibility, I ran the tests on a system with 16 GB RAM this time. During the tests I ran vmstat 5, which collects memory and swap usage information at five-second intervals, and captured the result in a log.

Next, he suggested that I skewed the results by directing SCons to inherit the ambient environment, rather than using SCons’ default “sanitized” environment. That is, he felt I should have used env = Environment() rather than env = Environment(ENV = os.environ). To ensure that this was not a factor, I modified the tests so that they did not inherit the environment. At the same time, I substituted echo for the compiler and other commands, in order to make the tests faster. Besides, I’m not interested in benchmarking the compiler — just SCons! Here’s what my Environment declaration looks like now:

env = Environment(CC = 'echo', AR = 'echo', RANLIB = 'echo')
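
For context, a stripped-down SConstruct along these lines might look like the sketch below. The source and library names are hypothetical (this is not the actual generated test build), but the idea is the same: a sanitized environment with echo standing in for the real tools, so that only SCons overhead gets measured.

# Hypothetical minimal SConstruct in the same spirit as the generated tests.
env = Environment(CC = 'echo', AR = 'echo', RANLIB = 'echo')
objects = [env.Object(src) for src in ['a.c', 'b.c', 'c.c']]
env.Library('demo', objects)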

With these changes in place I reran my benchmarks. As expected, there was no change in the outcome. There is no doubt: SCons does not scale linearly. Instead the growth is polynomial, following an n^1.85 curve. And thanks to the vmstat output we can be certain that there was absolutely no swapping affecting the benchmarks. Here’s a graph of the results, including an n^1.85 curve for comparison — notice that you can barely see that curve because it matches the observed data so well!
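
If you want to check the exponent against your own measurements, one simple approach (not necessarily the method used for the graph below) is a least-squares fit of a line to the log-transformed data; here is a sketch with placeholder numbers:

# Estimate k in t ~ c * n^k by fitting a line to log(t) versus log(n).
# The values below are placeholders, not the actual benchmark results.
import numpy as np

targets = np.array([2000.0, 10000.0, 25000.0, 50000.0])
seconds = np.array([10.0, 150.0, 800.0, 3000.0])

k, log_c = np.polyfit(np.log(targets), np.log(seconds), 1)
print("estimated exponent: %.2f" % k)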

SCons full build runtime - click for larger view

For comparison, I used the SCons build log to make a shell script that executes the same series of echo commands. At 50,000 targets, the shell script ran in 1.097s. You read that right: 1.097s. Granted, the shell script doesn’t do stuff like up-to-date checks, etc., but still — of the 3,759s average SCons runtime, 3,758s — 99.97% — is SCons overhead.

I also created a non-recursive Makefile that “builds” the same targets with the same echo commands. This is a more realistic comparison to SCons — after all, nobody would dream of actually controlling a build with a straight-line shell script, but lots of people would use GNU make to do it. With 50,000 targets, GNU make ran for 82.469s — more than 45 times faster than SCons.

What is linear scaling?

If the performance problems are so obvious, why did Dirk fail to see them? Here’s a graph made from his test results:

SCons full build runtime, via D. Baechle - click for full size

Dirk says that this demonstrates “SCons’ linear scaling”. I find this statement baffling, because his data clearly shows that SCons does not scale linearly. It’s simple, really: linear scaling just means that the build time increases by the same amount for each new target you add, regardless of how many targets you already have. Put another way, it means that the difference in build time between 1,000 targets and 2,000 targets is exactly the same as the difference between 10,000 and 11,000 targets, or between 30,000 and 31,000 targets. Or, put yet another way, it means that when you plot the build time versus the number of targets, you should get a straight line with no change in slope at any point. Now you tell me: does that describe Dirk’s graph?

Here’s another version of that graph, this time augmented with a couple of additional lines that show what the plot would look like if SCons were truly scaling linearly. The first projection is based on the original graph from 2,500 to 4,500 targets — that is, if we assume that SCons scales linearly and that the increase in build time between 2,500 and 4,500 targets is representative of the cost to add 2,000 more targets, then this line shows us how we should expect the build time to increase. Similarly, the second projection is based on the original graph between 4,500 and 8,500 targets. You can easily see that the actual data does not match either projection. Furthermore, you can see that the slope of these projections is increasing:

SCons full build runtime with linear projections, via D. Baechle - click for full size

This shows the importance of testing at large scale when you’re trying to characterize the scalability of a system from empirical data. It can be difficult to differentiate polynomial from logarithmic or linear at low scales, especially once you incorporate the constant factors — polynomial algorithms can sometimes even give better absolute performance for small inputs than linear algorithms! It’s not until you plot enough data points at large enough values, as I’ve done, that it becomes easy to see and identify the curve.

What does profiling tell us?

Next, Dirk reran some of his tests under a profiler, on the very reasonable assumption that if there was a performance problem to be found, it would manifest in the profiling data — surely at least one function would demonstrate a larger-than-expected growth in runtime. Dirk only shared profiling data for two runs, both incremental builds, at 8,500 and 16,500 targets. That’s unfortunate for a couple of reasons. First, the performance problem is less apparent on incremental builds than on full builds. Second, with only two data points it is literally not possible to determine whether growth is linear or polynomial. The results of Dirk’s profiling were negative: he found no “significant difference or increase” in any function.

Fortunately it’s easy to run this experiment myself. Dirk used cProfile, which is built into Python. To profile a Python script you can inject cProfile from the command line, like this: python -m cProfile scons. Just before Python exits, cProfile dumps timing data for every function invoked during the run. I ran several full builds with the profiler enabled, from 2,000 to 20,000 targets. Then I sorted the profiling data by function internal time (time spent in the function exclusively, not in its descendants). In every run, the same two functions appeared at the top of the list: posix.waitpid and posix.fork. To be honest this was a surprise to me — previously I believed the problem was in SCons’ Taskmaster implementation. But I can’t really argue with the data. It makes sense that SCons would spend most of its time running and waiting for child processes to execute, and even that the amount of time spent in these functions would increase as the number of child processes increases. But look at the growth in runtimes in these two functions:
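
For reference, cProfile can also write its data to a file with the -o flag (python -m cProfile -o scons.prof scons), and Python’s pstats module can then sort that file by internal time. A minimal sketch, assuming the profile was saved as scons.prof:

# Load the saved profile and show the functions with the largest internal
# ("tottime") cost, i.e. time spent in the function itself, not its callees.
import pstats

stats = pstats.Stats('scons.prof')
stats.sort_stats('tottime').print_stats(20)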

SCons full build function time, top two functions - click for full size

Like the overall build time, these curves are obviously non-linear. Armed with this knowledge, I went back to Dirk’s profiling data. To my surprise, posix.waitpid and posix.fork don’t even appear in Dirk’s data. On closer inspection, his data seems to include only a subset of all functions — about 600 functions, whereas my profiling data contains more than 1,500. I cannot explain this — perhaps Dirk filtered the results to exclude functions that are part of the Python library, assuming that the problem must be in SCons’ own code rather than in the library on which it is built.

This demonstrates a second fundamental principle of performance analysis: make sure that you consider all the data. Programmers’ intuition about performance problems is notoriously bad — even mine! — which is why it’s important to measure before acting. But measuring won’t help if you’re missing critical data or if you discard part of the data before doing any analysis.

Conclusions

On the surface, performance analysis seems like it should be simple: start a timer, run some code, stop the timer. Done correctly, performance analysis can illuminate the dark corners of your application’s performance. Done incorrectly — and there are many ways to do it incorrectly — it can lead you on a wild goose chase and cause you to squander resources fixing the wrong problems.

Dirk Baechle had good intentions when he set out to analyze SCons performance, but he made some mistakes in his process that led him to an erroneous conclusion. First, he didn’t run enough large-scale tests to really see the performance problem. Second, he filtered his experimental data in a way that obscured the existence of the problem. But perhaps his worst mistake was to start with a conclusion — that there is no performance problem — and then look for data to support it, rather than starting with the data and letting it impartially guide him to an evidence-based conclusion.

To me the evidence seems indisputable: SCons exhibits roughly quadratic growth in runtimes as the number of build targets increases, rendering it unusable for large-scale software development (tens of thousands of build outputs). There is no evidence that this is a result of virtual memory swapping. Profiling suggests a possible pair of culprits in posix.waitpid and posix.fork. I leave it to Dirk and the SCons team to investigate further; in the meantime, you can find my test harness and test results in my GitHub repo. If you can see a flaw in my methodology, sound off in the comments!

Halloween 2013 haunted graveyard

I know it’s a bit late for a retrospective on my annual Halloween tradition — the haunted graveyard. But there were a couple of additions this year that I thought were worth mentioning, and I have some really excellent photos thanks to the efforts of a good friend, so I figured “Better late than never!” For those who haven’t seen my previous articles about the graveyard, let me offer a brief backstory:

After my wife and I inherited her mother’s house several years ago, it became our responsibility to host the annual family Halloween party. Originally the party was intended for my mother-in-law’s adult children and their spouses, but once they started having kids of their own, the party shifted gears. It’s been primarily targeted at the kids as long as I’ve been involved. One of the marquee attractions is the haunted graveyard, where our backyard is transformed from a normal suburban plot into a spooky graveyard replete with zombies, ghosts and other monsters. As the man of the house, and because I have a bit of a macabre streak, it falls to me to design and … execute … the decorations. Every year the graveyard gets a little bit more elaborate as I snatch up more characters at the post-Halloween sales and devise better layouts and designs for the graveyard.

As in previous years, I used a short two-foot tall wooden fence to establish a perimeter for the graveyard. This serves two purposes. First, it provides a pathway for visitors to follow, so they aren’t just meandering through the graveyard. That allows me to better control the experience and ensure that people don’t approach characters from the wrong side. Second, it gives a little protection to the characters, to discourage children from physically abusing the decorations. You can see the fencing here (as usual, click on any of these pictures for a larger version):

2013 Graveyard Fence

Along with the fence you can see one of the architectural improvements in the graveyard: a dividing wall down the middle of the area. This serves to block the view of the characters in the second half of the graveyard while you are walking through the first half. I fashioned this out of several pieces of 1-inch PVC pipe and PVC pipe connectors at a total cost of about $20 at the local hardware store. PVC’s a great choice for this because it’s cheap, lightweight and easy to work with — all you need is a hacksaw. It’s also easy to set up and tear down. This diagram shows the dimensions and parts I used to build the frame:

2013 Graveyard - Partition

The curtain is made from two black plastic tablecloths, the type you can find at a party supply store around Halloween time. The ends are folded over and stapled together to make sleeves into which the top and bottom PVC pipes can slide. I think the tablecloths cost about $8 each. To be honest the whole thing looks pretty cheap in the daylight, but once it gets dark it doesn’t matter. Next year I’ll probably blacken the PVC with some spraypaint. If I can find some cheap cloth, I’ll redo the curtains as well. If you do build something like this yourself, make sure that the feet are pretty wide, and put something heavy on each to hold it down. Otherwise one good gust of wind will knock your wall right over!

As for new characters this year, I have this creepy-as-hell “deady bear”. For such a small guy, he is surprisingly disturbing. When activated, his head turns side-to-side, one eyeball lights up, he stabs himself with a knife, and he makes seriously unsettling noises. Like most of the characters he’s sound activated:

2013 Graveyard - Deady Bear

I’ve also got this “zombie barrel”, who lights up, makes spooky sounds and rises up out of his barrel when activated. To be honest, although he’s big and was expensive, he’s not one of my favorite characters. He’s difficult to set up, and he moves too slowly to be really scary:

2013 Graveyard - Zombie Barrel

Unfortunately I don’t have a picture of the third new character — this black jumping spider. This guy is great because he moves quickly, and he’s hard to see since he’s black and starts at ground level. It’s very startling — perfect for somebody coming around a corner.

Besides the partition and the new characters, there was one other upgrade that I’m really pleased with: the flying phantasm. I’ve actually had this character for a couple of years, but usually he just hangs “lifelessly” over the path:

2013 Graveyard - Flying Phantasm

This year I ran a 1/4 inch rope from my second-story roof to the fence at the edge of our property, then attached the ghost to a cheap pulley that I hung on the line:

2013 Graveyard - Phantasm Pulley

Once the line was taut, the ghost could “fly” down the line across the pathway, low enough that his tattered robe would brush against the head and shoulders of anybody walking by. Triggering the ghost was about as low-tech as it gets: somebody would watch from a second floor window behind the ghost and release him at suitable moments; a bit of string tied to his back allowed my cohort to pull him back up in preparation for the next run. After dark, the flying phantasm was a huge hit, even scaring some of the adults who went through the graveyard! Here’s a video of the daytime test run; you can see where I’ve got a bit of extra rope tied around the line to serve as a stop so that he doesn’t go too far:

That’s about it for new features this year. Overall I think this goes down as another success, but there are some things I’d like to improve for next year:

  • Get more foot activation pads! I cannot stress enough how critical these are. The sound activation on most of these characters just does not work very well, especially when people are creeping silently through (because they are too scared to make much noise!). One trick though: make sure the pads are covered up, otherwise they are a huge tip off that something is coming!
  • Improve the lighting! I have simultaneously too much and not enough light. The colored floods I’ve been using are too bright, which makes things less scary. But they also don’t provide enough light in the right places. For example, it was hard to see the “deady bear”. Next year I’ll use dimmer bulbs, and maybe get some of those little “hockey puck” style LED lights to provide target “up lighting” for specific characters. I’d love to hang one dim, naked bulb over the whole scene as well, that could just swing back and forth slowly — I think that would be really creepy looking.
  • Don’t forget the soundtrack! I have one of those “scary halloween sounds” albums that I’ve played in the past to good effect. I completely forgot to set that up this year.

Hope you enjoyed this post-mortem! To wrap up, here are some pictures of the graveyard after dark — credit goes to my good friend Tim Murphy, who took most of the pictures used in this post, and who was a tremendous help in setting up the graveyard. See you next Halloween!

2013 Graveyard - Wide Angle

2013 Graveyard - Open Grave

2013 Graveyard - Orange Glow

2013 Graveyard - Spider

2013 Graveyard - Green Glow

2013 Graveyard - Zombie

2013 Graveyard - Zombie Barrel Up

The ElectricAccelerator 7.1 “Ship It!” Award

Well, it took a lot longer than I’d like, but at last I can reveal the Accelerator 7.1 “Ship It!” award. This is the fifth time I’ve commemorated our releases in this fashion, which I think is pretty cool itself.

Since this release again focused on performance, I picked a daring old-timey airplane pilot — the sort of guy you might have found behind the controls of a Sopwith Camel, with a maximum speed of about 115mph. Here’s the trading card that accompanied the figure:

Accelerator 7.1 "Ship It!" Card Front - click for larger version

Accelerator 7.1 “Ship It!” Card Front – click for larger version

Accelerator 7.1 "Ship It!" Card Back - click for larger version

Accelerator 7.1 “Ship It!” Card Back – click for larger version

I included release metrics again, but where the 7.0 card showed just 10 data points, the 7.1 card packs in a whopping 48 by including data for the 12 most recent releases across four categories:

  • Number of days in development.
  • JIRA issues closed.
  • Total KLOC. This metric gives the total size of the Accelerator code base in thousands of lines of code, as measured with the excellent Count Lines of Code utility by Al Danial. This measurement excludes comments and whitespace.
  • Change in KLOC. This is simply the arithmetic difference between the total KLOC for each release and its predecessor.

Again, my sincere gratitude goes to everybody on the Accelerator team. Well done and thank you!
