
What’s new in ElectricAccelerator 5.4.0

This month, Electric Cloud announced the release of ElectricAccelerator 5.4. This version adds a lot of great new features, including support for GNU Make’s .SECONDEXPANSION feature, support for $(eval) in rule bodies, and compatibility with Cygwin 1.7.7. In addition to those long-awaited improvements, here are the things that I’m most excited about in this release:

New cluster utilization reports

Accelerator 5.4 includes two new reports designed to give you greater insight into the load on and utilization of your cluster: the Cluster Utilization report and the Sealevel report.

The Cluster Utilization report shows, over the course of a typical day, the average number of builds running and the average combined agent demand from all running builds.

The Sealevel report shows the raw agent demand data, plotted over the course of a day. The colored bands correspond to various cluster sizes, including the current cluster size and several hypothetical sizes, so you can see at a glance how large you need to make the cluster in order to satisfy all the agent requests. The percentages on the right side of the graph indicate the portion of agent requests that are left unsatisfied with a cluster of the given size. In this example, all but 1% of agent requests would be satisfied if the cluster had 40 agents.

Reduced directory creation conflicts

Raise your hand if you’ve ever seen this pattern in a makefile:

%.o: %.c
        @mkdir -p $(dir $@)
        @$(COMPILE.c) -o $@ $<

It’s a common way to ensure the output directory exists before trying to create a file in it. Unfortunately, with a strict application of Accelerator’s conflict detection algorithm, this pattern causes numerous conflicts and poor performance when the build is run without an up-to-date history file. In Accelerator 5.4.0, we improved the algorithm so that this common case is no longer considered a conflict. If you always run with a good history file, this change won’t matter to you. But sometimes that’s not possible: for example, when you’re building third-party code that’s just gotten a major update. In cases like that, you’re going to love this improvement. The Android source code is a perfect example: a from-scratch, no-history build of the Gingerbread base used to take 144 minutes. Now it runs in just 22 minutes on the same hardware, about 6.5x faster.

New Linux sandbox implementation

The last feature I want to mention here is the new sandbox implementation for Linux. The sandbox is the means by which Accelerator presents a different view of the filesystem, at a different point in time during the build, to each of the jobs running concurrently on a given agent host. Without the sandbox, it would be impossible on Linux to simultaneously present a given file as existent to one job and non-existent to another.

In previous versions of Accelerator, the Linux sandbox implementation was effective, but ultimately limited in its capabilities. Chief among those limitations was an inability to interoperate with autofs 5.x. There were several workarounds available, but each of those in turn had its own shortcomings.

Accelerator 5.4 uses a different underlying technology to implement the sandbox component: lofs, the loopback filesystem. This is a concept borrowed from Solaris, which has had a vendor-supplied version for years; Linux has nothing that matches the depth of functionality provided by Solaris, so we wrote our own. The net result of this effort is that the limitations of the previous implementation have been entirely eliminated. In particular, Accelerator 5.4 can interoperate with autofs 5.x without the need for any workarounds or awkward configuration.

Afterthoughts

It’s been a long time coming, but I think it was well worth the wait. I’m very proud to have been part of this release, and I’m thrilled with the work my team has put into it.

Accelerator 5.4 is available immediately for current customers. New customers should contact sales@electric-cloud.com.


HOWTO: ship a custom kernel driver for Linux

Pop quiz, hotshot: your company has developed a Linux kernel driver as part of its product offering. How do you deliver this driver such that your product is compatible with a significant majority of the Linux variants you are likely to encounter in the field? Consider the following:

  • Red Hat Enterprise Linux 4 is based on kernel 2.6.9
  • RHEL 5 is based on kernel 2.6.18
  • RHEL 6 is based on kernel 2.6.32
  • openSUSE 11.0 is based on kernel 2.6.25
  • openSUSE 11.1 is based on kernel 2.6.27
  • Ubuntu 9.04 is based on kernel 2.6.28
  • Ubuntu 9.10 is based on kernel 2.6.31
  • Ubuntu 10.04 is based on kernel 2.6.32
  • Ubuntu 10.10 is based on kernel 2.6.35

I could go on, but hopefully you get the point — “Linux” is not a single, identifiable entity, but rather a collection of related operating systems. And thus the question: how do you ship your driver such that you can install and use it on a broad spectrum of Linux variants? This is a problem that I’ve had to solve in my work.

Fundamentally, the solution is simple: ship the driver in source form. But that answer isn’t much help unless you can make your driver source-compatible with a wide range of kernel versions, spanning several years of Linux development. The solution to that problem is simple too, in hindsight, and yet I haven’t seen it used or described elsewhere: test for specific kernel features using something like a configure script; set preprocessor macros based on the results of the tests; and use the macros in the driver source to conditionally include code as needed. But before I get into the details of this solution, let’s look briefly at a few alternative solutions and why each was rejected.

Rejected alternatives: how NOT to ship a custom driver for Linux

Based on my informal survey of the state-of-the-art in this field, it seems there are three common approaches to solving this problem:

  1. Arrange for your driver to be bundled with the Linux kernel. If you can pull this off, fantastic! You’ve just outsourced the effort of porting your driver to the people who build and distribute the kernel. Unfortunately, kernel developers are not keen on bundling drivers that are not generally useful — that is, your driver has to have some utility outside of your specific application, or you can forget getting it bundled into the official kernel. Also, if you have any interesting IP in your driver, open-sourcing it is probably not an option.
  2. Prebuild your driver for every conceivable Linux variant. If you know which Linux variants your product will support, you could build the driver for each, then choose one of the prebuilt modules at installation time based on the information in /etc/issue and uname -r. VMware uses this strategy: after installing VMware Workstation, take a look in /usr/lib/vmware/modules/binary and you’ll find about a hundred different builds of their kernel modules, for various combinations of kernel version, distribution and SMP status. The trouble with this strategy is that it adds significant complexity to your build and release process: you need a build environment for every one of those variants. All those modules also bloat your install bundle. And no matter how many distros you prebuild for, it will never be enough: somebody will always come along and insist that your code install on their favorite variant.
  3. Ship source that uses the LINUX_VERSION_CODE and KERNEL_VERSION macros. These macros, defined by the Linux kernel build system, allow you to conditionally include code based on the version of the kernel being built. In theory this is all you need, if you know which version introduced a particular feature. But there are two big problems. First, you probably don’t know exactly which version introduced each feature. You could figure it out with some detective work, but who’s got the time to do that? Second, and far more troublesome, most enterprise Linux distributions (RHEL, SUSE, etc.) backport features and fixes from later kernels to their base kernel — without changing the value of LINUX_VERSION_CODE. Of course that renders this mechanism useless.
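To make the third approach concrete, here is what the version-macro pattern looks like in practice. This is a self-contained userspace sketch: the KERNEL_VERSION definition is reproduced from <linux/version.h>, and the hard-coded LINUX_VERSION_CODE stands in for the value the kernel build system would normally provide (here pinned to 2.6.18, the RHEL 5 base kernel). The chosen_api helper is mine, purely for illustration.

```c
/* KERNEL_VERSION and LINUX_VERSION_CODE are reproduced here from
 * <linux/version.h> so this sketch compiles in userspace; in a real
 * module both come from the kernel headers.  We pretend we are
 * building against the RHEL 5 base kernel, 2.6.18. */
#define KERNEL_VERSION(a, b, c) (((a) << 16) + ((b) << 8) + (c))
#define LINUX_VERSION_CODE KERNEL_VERSION(2, 6, 18)

/* Returns the API variant the preprocessor selects for this kernel. */
const char *chosen_api(void)
{
#if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 16)
    return "post-2.6.16 API";
#else
    return "legacy API";
#endif
}
```

Note that on a real RHEL 5 system this check always takes the first branch, even for features that were backported from much later kernels, which is exactly the failure mode described above.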

genconfig.sh: a configure script for kernel modules

Conceptually, genconfig.sh works the same way as an autoconf configure script: it uses a series of trivial test programs to check for different kernel features or constructs. Whether or not each test program compiles determines whether the corresponding feature is present, and by extension whether a particular bit of code ought to be included in the driver.

For example, in some versions of the Linux kernel (2.6.9 among them), struct inode includes a member called i_blksize. If present, this field should be set to the block size of the filesystem that owns the inode. It’s used in the implementation of the stat(2) system call. It’s a minor detail, but if you’re implementing a filesystem driver, it’s important to get it right.

We can determine whether or not to include code for this field by trying to compile a trivial kernel module containing just this code:

#include <linux/fs.h>
void dummy(void)
{
    struct inode i;
    i.i_blksize = 0;
    return;
}

If this code compiles, then we know to include code for managing the i_blksize field. We can create a header file containing a #define corresponding to this knowledge:

#define HAVE_INODE_I_BLKSIZE

Finally, the driver code uses that definition:

#ifdef HAVE_INODE_I_BLKSIZE
  inode->i_blksize = FS_BLOCKSIZE;
#endif
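For completeness, here is a sketch of how a genconfig.sh-style script can tie the probe and the #define together. The check_feature helper and the direct cc invocation are simplifications of mine: a real genconfig.sh would compile each probe through the kernel’s kbuild machinery (kernel headers such as <linux/fs.h> won’t compile as plain userspace C), but the shape of the script is the same.

```shell
#!/bin/sh
# Sketch of a genconfig.sh-style feature probe.  The helper name is
# hypothetical; a real script would build each probe via the kernel's
# kbuild system rather than invoking the compiler directly.

CONFIG_H=config.h
: > "$CONFIG_H"                 # start with an empty generated header

# check_feature MACRO SOURCE
# Try to compile SOURCE; if it compiles, emit "#define MACRO".
check_feature() {
    if cc -c -o /dev/null "$2" 2>/dev/null; then
        echo "#define $1" >> "$CONFIG_H"
    fi
}

# Probe for the i_blksize member of struct inode.
cat > test_i_blksize.c <<'EOF'
#include <linux/fs.h>
void dummy(void)
{
    struct inode i;
    i.i_blksize = 0;
}
EOF

check_feature HAVE_INODE_I_BLKSIZE test_i_blksize.c
```

The driver’s makefile then simply includes the generated config.h ahead of the driver sources.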

We can construct an equally trivial test case for each feature that is relevant to our driver. In the end we get a header with a series of defines, something like this:

#define HAVE_INODE_I_BLKSIZE
#define HAVE_3_ARG_INT_POSIX_TEST_LOCK
#define HAVE_KMEM_CACHE_T
#define HAVE_MODE_IN_VFS_SYMLINK
#define HAVE_3_ARG_PERMISSION
#define HAVE_2_ARG_UMOUNT_BEGIN
#define HAVE_PUT_INODE
#define HAVE_CLEANUP_IN_KMEM_CACHE_CREATE
#define HAVE_WRITE_BEGIN
#define HAVE_ADDRESS_SPACE_OPS_EXT
#define HAVE_SENDFILE
#define HAVE_DENTRY_IN_FSYNC

By referencing these definitions in the driver source code, we can make it source-compatible with a wide range of Linux kernel versions. To add support for a new kernel, we just have to determine which changes affect our module, write tests to check for those features, and update only the affected parts of our driver source.

This is more nimble, and far more manageable, than shipping prebuilt binaries for an endless litany of kernel variants. And it’s much more robust than relying on LINUX_VERSION_CODE: rather than implicitly trusting that a feature is present or absent based on an unreliable version string, we know for certain whether that feature is present, because we explicitly tried to use it.

Belt and suspenders: ensuring the driver works correctly

Now we have a strategy for shipping a driver that will build and load on a broad array of Linux variants. But this approach has introduced a new problem: how can we be sure that this driver that was just auto-configured and compiled on-the-fly will actually work as expected?

The solution to this problem has two components. First, we identified about a dozen specific Linux variants that are critical to our customers. The driver is exhaustively tested on each of these “tier 1” variants in every continuous integration build — over 3,000 automated unit tests are run against the driver on each. Of course, 12 variants is only a tiny fraction of the thousands of permutations that are possible, but by definition these variants represent the most important permutations to get right. We will know immediately if something has broken the driver on one of these variants.

Next, we ship a stripped-down version of that unit test suite and execute it automatically when the driver is built. This suite has only about 25 tests, but those tests cover every major piece of functionality — a reasonable compromise between coverage and simplicity. With this install-time test suite, we’ll know if there’s a problem with the driver on a particular platform as soon as somebody tries to install it.

Demonstration code

For demonstration purposes I have placed a trivial filesystem driver on my GitHub repo. This driver, base0fs, was generated using the FiST filesystem generator, patched to make use of the genconfig.sh concept.
