The LTP test driver model

When we were designing the new LTP test library we ended up with something that could be called a test driver model. There are plenty of reasons why tests should have a well defined structure and why a declarative style for test requirements and dependencies is better than conditions buried deep in a chain of function calls. Not only does that simplify the test code and help the author focus on the actual test, this arrangement also makes it possible to export the test metadata into a machine parsable format, which I've written about in my previous posts.

So what does the LTP driver model look like? First of all, an LTP test consists of three functions. There is a setup(), which is called once at the start of the test, a test() function, which may be called repeatedly, and a cleanup(), which may be called asynchronously. Asynchronously means that the test library will call the cleanup() before the test exits, which may be triggered prematurely by a fatal failure in the middle of the test run.

Apart from these functions there are plenty of bit flags, strings, and arrays that describe what the test requires and what should be prepared before these functions are called. There are basic bit flags such as needs_root, needs_tmpdir, and so on. Then there are more advanced flags that can, for example, prepare a block device prior to the test and clean it up automatically, or run the test() function for all filesystems supported by the kernel.

Basic LTP test

#include "tst_test.h"

static void run(void)
{
        tst_res(TPASS, "Test passed");
}

static struct tst_test test = {
        .test_all = run,
};

This is the most basic LTP test that succeeds. There are a few default test parameters, such as -i, which allows the test to be executed repeatedly, so the test function has to be able to cope with that. The actual test also runs in a forked process, while the test library process watches for timeouts and cleans up once the test is done. However, all of that is invisible to the test itself.

Tests that run on all supported filesystems

#include "tst_test.h"

#define MNTPOINT "mntpoint"

static void run(void)
{
        /* Actual test is done here and the device is mounted at MNTPOINT */
}

static void setup(void)
{
        tst_res(TINFO, "Running test on %s device %s filesystem",
                tst_device.dev, tst_device.fs_type);
}

static struct tst_test test = {
        .setup = setup,
        .test_all = run,
        .mount_device = 1,
        .all_filesystems = 1,
        .mntpoint = MNTPOINT,
};

This is a bit more complex example of the test library. We ask for the test to be executed for each supported filesystem. Supported means that the kernel is able to mount it and that mkfs is installed so that the device can be formatted. The test library also supports FUSE, which needs special handling since we need kernel support for FUSE as well as a user-space binary that implements the filesystem.

Once the test is started, the test library compiles a list of supported filesystems and creates a unique test temporary directory, because .needs_tmpdir is implied by the .mount_device flag. For each filesystem the device is formatted and mounted at .mntpoint, which is relative to the test temporary directory. After that a new process, which will run the test, is forked. The main library process then starts a timeout timer and waits. The child runs the setup() function, if set, which is usually used to populate the newly formatted device with files and so on. Then finally the test() function is executed, possibly repeatedly. Once the test is done the child runs the cleanup(), if set. But since the device is mounted by the test library and cleaned up automatically, as is the temporary directory, there is usually nothing to do. And even if the child that runs the test crashes, the device is unmounted regardless.

You may ask where we got the device for the test in the first place. Most of the time it is a loopback device, created by the test library automatically at the start of the test and released once the test finishes.

This sums up a very shallow look at the test library. There is much more implemented in there, for example a flag for restoring the system wall clock after a test, support for parsing the kernel .config, and many more, but I guess I should stop now because the blog post is already longer than I wanted it to be.