HOWTO: ship a custom kernel driver for Linux

Pop quiz, hotshot: your company has developed a Linux kernel driver as part of its product offering. How do you deliver this driver such that your product is compatible with a significant majority of the Linux variants you are likely to encounter in the field? Consider the following:

  • Red Hat Enterprise Linux 4 is based on kernel version 2.6.9
  • RHEL 5 is based on kernel version 2.6.18
  • RHEL 6 is based on 2.6.32
  • openSUSE 11.0 is based on 2.6.25
  • openSUSE 11.1 is based on 2.6.27
  • Ubuntu 9.04 is based on 2.6.28
  • Ubuntu 9.10 is based on 2.6.31
  • Ubuntu 10.04 is based on 2.6.32
  • Ubuntu 10.10 is based on 2.6.35

I could go on, but hopefully you get the point — “Linux” is not a single, identifiable entity, but rather a collection of related operating systems. And thus the question: how do you ship your driver such that you can install and use it on a broad spectrum of Linux variants? This is a problem that I’ve had to solve in my work.

Fundamentally, the solution is simple: ship the driver in source form. But that answer isn’t much help unless you can make your driver source-compatible with a wide range of kernel versions, spanning several years of Linux development. The solution to that problem is simple too, in hindsight, and yet I haven’t seen it used or described elsewhere: test for specific kernel features using something like a configure script; set preprocessor macros based on the results of the tests; and use the macros in the driver source to conditionally include code as needed. But before I get into the details of this solution, let’s look briefly at a few alternative solutions and why each was rejected.

Rejected alternatives: how NOT to ship a custom driver for Linux

Based on my informal survey of the state-of-the-art in this field, it seems there are three common approaches to solving this problem:

  1. Arrange for your driver to be bundled with the Linux kernel. If you can pull this off, fantastic! You’ve just outsourced the effort of porting your driver to the people who build and distribute the kernel. Unfortunately, kernel developers are not keen on bundling drivers that are not generally useful — that is, your driver has to have some utility outside of your specific application, or you can forget getting it bundled into the official kernel. Also, if you have any interesting IP in your driver, open-sourcing it is probably not an option.
  2. Prebuild your driver for every conceivable Linux variant. If you know which Linux variants your product will support, you could build the driver for each, then choose one of the prebuilt modules at installation time based on the information in /etc/issue and uname -r. VMware uses this strategy — after installing VMware Workstation, take a look in /usr/lib/vmware/modules/binary: you’ll find about a hundred different builds of their kernel modules, for various combinations of kernel version, distribution and SMP status. The trouble with this strategy is that it adds significant complexity to your build and release process: you need a build environment for every one of those variants, and all those modules bloat your install bundle. Finally, no matter how many distros you prebuild for, it will never be enough: somebody will come along and insist that your code install on their favorite variant.
  3. Ship source that uses the LINUX_VERSION_CODE and KERNEL_VERSION macros. These macros, defined by the Linux kernel build system, allow you to conditionally include code based on the version of the kernel being built. In theory this is all you need, if you know which version introduced a particular feature. But there are two big problems. First, you probably don’t know exactly which version introduced each feature. You could figure it out with some detective work, but who’s got the time to do that? Second, and far more troublesome, most enterprise Linux distributions (RHEL, SUSE, etc.) backport features and fixes from later kernels to their base kernel — without changing the value of LINUX_VERSION_CODE. Of course that renders this mechanism useless.
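To make alternative 2 concrete, an installer in that style picks a module at install time keyed on the running kernel. A minimal sketch — the directory layout and module naming here are hypothetical, not VMware's actual scheme, and the script creates a fake module directory so it is self-contained:

```shell
# Hypothetical install-time selection among prebuilt modules, keyed on
# the running kernel version. Layout and names are illustrative only.
kver=$(uname -r)
module_dir=$(mktemp -d)                 # stand-in for e.g. /opt/product/modules
touch "$module_dir/mydriver-$kver.ko"   # pretend a build for this kernel shipped
candidate="$module_dir/mydriver-$kver.ko"
if [ -f "$candidate" ]; then
    selected="$candidate"
    echo "installing prebuilt module: $selected"
else
    echo "no prebuilt module for kernel $kver; cannot install" >&2
fi
rm -rf "$module_dir"
```

Every kernel flavor you did not anticipate lands in the else branch — which is exactly the weakness described above.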

genconfig.sh: a configure script for kernel modules

Conceptually, genconfig.sh works the same way as an autoconf configure script: it compiles a series of trivial test programs, each probing for a specific kernel feature or construct. Whether each test compiles determines whether the corresponding feature is present, and by extension whether or not a particular bit of code ought to be included in the driver.

For example, in some versions of the Linux kernel (2.6.9 among them), struct inode includes a member called i_blksize. If present, this field should be set to the block size of the filesystem that owns the inode; it’s used in the implementation of the stat(2) system call. A minor detail, but if you’re implementing a filesystem driver, it’s important to get it right.

We can determine whether or not to include code for this field by trying to compile a trivial kernel module containing just this code:

#include <linux/fs.h>
void dummy(void)
{
    struct inode i;
    i.i_blksize = 0;
    return;
}

If this code compiles, then we know to include code for managing the i_blksize field. We can create a header file containing a #define corresponding to this knowledge:

#define HAVE_INODE_I_BLKSIZE

Finally, the driver code uses that definition:

#ifdef HAVE_INODE_I_BLKSIZE
  inode->i_blksize = FS_BLOCKSIZE;
#endif

We can construct an equally trivial test case for each feature that is relevant to our driver. In the end we get a header with a series of defines, something like this:

#define HAVE_INODE_I_BLKSIZE
#define HAVE_3_ARG_INT_POSIX_TEST_LOCK
#define HAVE_KMEM_CACHE_T
#define HAVE_MODE_IN_VFS_SYMLINK
#define HAVE_3_ARG_PERMISSION
#define HAVE_2_ARG_UMOUNT_BEGIN
#define HAVE_PUT_INODE
#define HAVE_CLEANUP_IN_KMEM_CACHE_CREATE
#define HAVE_WRITE_BEGIN
#define HAVE_ADDRESS_SPACE_OPS_EXT
#define HAVE_SENDFILE
#define HAVE_DENTRY_IN_FSYNC

By referencing these definitions in the driver source code, we can make it source-compatible with a wide range of Linux kernel versions. To add support for a new kernel, we just have to determine which changes affect our module, write tests to check for those features, and update only the affected parts of our driver source.
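Scaling that idea up to a whole suite is just a loop: one probe file per feature, named for the macro it guards. Again a hedged sketch — the file layout is an assumption, not genconfig.sh’s actual structure, and plain cc stands in for a Kbuild compile — with one deliberately failing probe to show that an absent feature simply produces no #define:

```shell
workdir=$(mktemp -d)
mkdir -p "$workdir/tests"

# One probe per feature. The first compiles on any POSIX system; the
# second references a field that does not exist, so it must fail.
cat > "$workdir/tests/HAVE_STAT_ST_BLKSIZE.c" <<'EOF'
#include <sys/stat.h>
void dummy(void) { struct stat s; s.st_blksize = 0; }
EOF
cat > "$workdir/tests/HAVE_IMAGINARY_FIELD.c" <<'EOF'
#include <sys/stat.h>
void dummy(void) { struct stat s; s.imaginary_field = 0; }
EOF

# Each probe that compiles contributes one line to the generated header.
: > "$workdir/config.h"
for probe in "$workdir"/tests/*.c; do
    macro=$(basename "$probe" .c)
    if cc -c -o /dev/null "$probe" 2>/dev/null; then
        echo "#define $macro" >> "$workdir/config.h"
    fi
done

suite_config=$(cat "$workdir/config.h")
echo "$suite_config"
rm -rf "$workdir"
```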

This is more nimble, and far more manageable, than shipping prebuilt binaries for an endless litany of kernel variants. And it’s much more robust than relying on LINUX_VERSION_CODE: rather than implicitly trusting that a feature is present or absent based on an unreliable version string, we know for certain whether that feature is present, because we explicitly tried to use it.

Belt and suspenders: ensuring the driver works correctly

Now we have a strategy for shipping a driver that will build and load on a broad array of Linux variants. But this approach has introduced a new problem: how can we be sure that a driver auto-configured and compiled on the fly will actually work as expected?

The solution to this problem has two components. First, we identified about a dozen specific Linux variants that are critical to our customers. The driver is exhaustively tested on each of these “tier 1” variants in every continuous integration build — over 3,000 automated unit tests are run against the driver on each. Of course, 12 variants is only a tiny fraction of the thousands of permutations that are possible, but by definition these variants represent the most important permutations to get right. We will know immediately if something has broken the driver on one of these variants.

Next, we ship a stripped down version of that unit test suite, and execute that automatically when the driver is built. This suite has only about 25 tests, but those tests cover every major piece of functionality — a reasonable compromise between coverage and simplicity. With this install-time test suite, we’ll know if there’s a problem with the driver on a particular platform as soon as somebody tries to install it.

Demonstration code

For demonstration purposes I have placed a trivial filesystem driver in my GitHub repo. This driver, base0fs, was generated using the FiST filesystem generator, patched to make use of the genconfig.sh concept.

14 thoughts on “HOWTO: ship a custom kernel driver for Linux”

    • I agree, it would have been nicer to use autoconf for this purpose, but I just couldn’t figure out how to do it — autoconf’s reputation for impenetrability is well known I think. Thank you for the link, I will definitely check out that project to see if I can leverage their work.

    • OK, just took a look at your link — that did nothing to counter my argument that autoconf is laughably complex for what is honestly a simple problem. Thanks, but no thanks. My simple script is about 600 lines of code, with each new feature test using about 15-20 lines of code. Their as-linux.m4 is nearly 1,000 lines of code, with each new feature test using about 15-20 lines of code. Plus, by using autoconf I’ve suddenly cut the number of people who can comprehend the script from “pretty much everybody” to “that one guy that knows autoconf”. So far I’m not seeing a significant advantage here.

      • Okay, well, to each his own, I certainly see the understandability problem (although, I’m that one guy that knows autoconf :).

        Also, could you be more explicit in declaring under which license you are releasing genconfig.sh?

      • genconfig.sh is covered by the GPLv2; I updated the comments in the file to make that clear. The rest of the base0fs code is likewise covered under GPLv2, although the situation is a little more complex there, as described in the COPYING file.

      • The as-linux.m4 could probably be rewritten in ~300 lines (just start by stripping the 2.4 and pre-Kbuild stuff…).

  1. Re point 1:

    “kernel developers are not keen on bundling drivers that are not generally useful — that is, your driver has to have some utility outside of your specific application, or you can forget getting it bundled into the official kernel.”

    Ouch! This is in direct contradiction to http://www.kroah.com/log/linux/ols_2006_keynote.html and http://www.kroah.com/log/linux/linux_driver_project_status-2009-06.html . Are you sure that it is the case that drivers will be rejected just because they are only of use to your application?

    • That’s a fair question, and to be honest I haven’t actually tried to get my (proprietary, closed-source) driver into the mainline kernel. Based on the links you provided it seems there’s reason to be cautiously optimistic about that prospect. Nevertheless, the concern about IP still stands, and I can list multiple companies that follow the same policy (e.g., there is no MVFS driver in the mainline kernel, despite that being in use around the globe). Also, as I mentioned in another response, bundling the driver into the kernel doesn’t really solve my problem, since my application (and driver) revs more frequently than does the kernel — I would still have to solve this somehow, because the version in the base kernel would inevitably be out-of-date.

      Thanks for the info.

      • I should of course note that if you wished your driver to stay proprietary and closed-source then it wouldn’t be accepted into the mainline kernel but that I feel that is a different issue…

        I should have mentioned that my comment didn’t cover the IP issue at all – thanks for pointing it out.

  2. Why do you say that getting a driver upstream is hard? We have all kinds of drivers for very obscure hardware in the kernel. Merging into the mainline is by far the best way to distribute your driver to your users.

    • As you and others have pointed out, I may have overstated my case on that particular point. At the same time, my application revs more frequently than does the Linux kernel, so even if I could get my driver into the mainline kernel, I would still have to solve this problem, since the version of the driver would inevitably be out-of-date with respect to my newest release.

  3. The problem is that there are 100000 different distributions and no standardized way that is sane or makes sense.

    I’ve been a big proponent of Linux but the way how “decisions” are made is just idiotic (see HAL or other misdesigns, and don’t get me started on Xorg either.)

    There is, as far as I am concerned, only ONE true operating system and this is called EVERYTHING STRAIGHT FROM THE SOURCES.

    Screw the incompatible package shit – I don’t care about .rpm .deb or whatever other incompatibility layer they want to use. Rather than have your thousand package managers there should just be one and one manager alone.

    But hey, talking to distribution makers is worse than talking to a little kid … a little kid may occasionally LISTEN AND LEARN.

    • @mark: I think you’re making my argument for me — the only way to maintain your sanity in the face of all these different distros is something like the strategy I outlined here. Thanks for commenting!
