people.kernel.org


from kees

How to modernize C arrays for greater memory safety: a case-study in refactoring the Linux kernel and a look to the future

Kees Cook

C is not just a fancy assembler any more

Large projects written in C, especially those written close to the hardware layer like Linux, have long treated the language as a high-level assembler. Using C allowed for abstracting away much of the difficulty of writing directly in machine code while still providing easy low-level access to memory, registers, and CPU features. However, C has matured over the last half century, and many language features that improve robustness go unused in older codebases. This is especially true for arrays, where the historical lack of bounds checking has been a consistent source of security flaws.

Converting such codebases to use “modern” language features, like those in C99 (still from the prior millennium), can be a major challenge, but it is an entirely tractable problem. This post is a deep dive into an effort underway in the Linux kernel to make array index overflows (and more generally, buffer overflows) a thing of the past, where they belong. Our success hinges on replacing anachronistic array definitions with well-defined C99 flexible arrays. This approach can be used by developers to refactor C code, making it possible to leverage 21st century mitigations (like -fsanitize=bounds and FORTIFY_SOURCE), since such things can finally be cleanly applied to the modernized codebase.

The fraught history of arrays in C

For the compiler to successfully apply array index bounds checking, array sizes must be defined unambiguously, which is not always easy in C. Depending on the array definition, bounds checking falls roughly into three categories: fixed-sized arrays, dynamically-sized arrays, and pointer offsets. Each category of array definitions must be made unambiguous before the next, as they mostly build on top of each other. For example, if the compiler cannot protect a fixed-sized array, it certainly cannot protect a dynamically-sized array, and array indexing is just a specialized case of calculating a memory pointer offset.

Properly defined dynamically-sized arrays were introduced in C99 (int foo[]), and called “flexible arrays”. Before that, many C projects used the GNU extension of zero-length arrays (int foo[0]), which is not recognized by the C standard. This was done because, before the GNU extension, C projects would use single-element arrays (int foo[1]) which had several frustrating characteristics. (Using sizeof() on such a structure would include a single element as well, which would require additional handling to get allocation sizes to be accurate. This is not a problem for zero-element or true flexible arrays.)
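To make the difference concrete, here is a hedged stand-alone sketch (hypothetical struct names, userspace for brevity) showing why the single-element idiom needed fragile “- 1” arithmetic while the zero-length and flexible forms do not:

#include <stdio.h>
#include <stdlib.h>

struct one  { int count; int data[1]; }; /* single-element idiom */
struct zero { int count; int data[0]; }; /* GNU zero-length extension */
struct flex { int count; int data[]; };  /* C99 flexible array member */

int main(void)
{
        /* Only the single-element form counts its lone element here. */
        printf("one: %zu zero: %zu flex: %zu\n",
               sizeof(struct one), sizeof(struct zero), sizeof(struct flex));

        /* Allocating room for 8 trailing elements: only "one" needs "- 1". */
        struct one *o = malloc(sizeof(*o) + sizeof(o->data[0]) * (8 - 1));
        struct flex *f = malloc(sizeof(*f) + sizeof(f->data[0]) * 8);

        free(o);
        free(f);
        return 0;
}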

However, due to yet more historical situations (e.g. struct sockaddr, which has a fixed-size trailing array that is not supposed to actually be treated as fixed-size), GCC and Clang actually treat all trailing arrays as flexible arrays. This behavior makes things even more problematic, since it becomes impossible to limit a flexible array heuristic to only 1-element or 0-element (i.e. zero-length) arrays. For example, a compiler can't tell the intent behind the variable member here:

struct obj {
        ...
        unsigned char bytes;
        int variable[4];
};

Is it actually a 4 element array, or is it sized by the bytes member? As such, compilers have had to assume that trailing arrays must be intended to be dynamically sized (even though most are intended to be fixed-size).

To clear the way for sensible protection of fixed-size arrays, and to have a common framework for handling dynamically-sized arrays, Linux must have all the “fake” flexible array members replaced with actual C99 flexible array members so that the programmer's intent can actually be represented in an unambiguous way. With this done, -Warray-bounds (and similar things like __builtin_object_size()) will catch compile-time problems, and -fsanitize=bounds (and similar things like __builtin_dynamic_object_size()) can catch run-time problems.

Once fixed-sized arrays are protected, dynamically sized arrays can be protected as well, though this requires introducing a way to annotate structures that contain flexible arrays. Nearly all such structs also contain the count of allocated elements present in the flexible array:

struct obj {
        ...
        unsigned short count;
        struct foo items[]; /* Has "count" many "struct foo"s */
} *ptr;

Such structs therefore fully describe their contents at runtime (and are called “flexible array structures” from here on). In other words, their size can be determined at run-time as:

sizeof(*ptr) + sizeof(*ptr->items) * ptr->count

Teaching the compiler which struct member is associated with the count of a given flexible array member will allow -fsanitize=bounds and __builtin_dynamic_object_size() to reason about flexible array structure usage as well, covering all arrays in Linux with “known bounds”.

(Not covered here is the closely related work to tighten the FORTIFY_SOURCE implementation for the memcpy()-family of functions which also depends on making flexible array sizes unambiguous.)

Replacing “fake” flexible arrays

Compile-time diagnostics about the size of arrays use either internal value range checking or things similar to the FORTIFY_SOURCE macros (which use __builtin_object_size() for their implementations). This works well for arrays not at the end of the structure, but gets disabled for trailing arrays since the compiler must treat trailing arrays as flexible arrays (see struct sockaddr above). And for everything treated as a flexible array (i.e. dynamically sized), the compiler cannot know the array length at compile time, since it is only known at runtime. To make such array declarations unambiguous (and therefore able to gain sane runtime bounds checking), compilers must gain an option to disable all “fake” flexible array heuristics, and treat only true flexible arrays as flexible arrays.
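As a hedged illustration of the heuristic's cost (hypothetical struct; behavior depends on compiler version and flags): asked for the size of a mid-struct array the compiler can answer, but for a trailing array it historically has to say “unknown”, which disables the corresponding FORTIFY_SOURCE checks:

#include <stddef.h>

struct hdr {
        int len;
        unsigned char fixed[16];    /* mid-struct: size is knowable */
        unsigned char trailing[16]; /* trailing: heuristically "flexible" */
};

/* Expected to return 16. */
size_t fixed_size(struct hdr *h)
{
        return __builtin_object_size(h->fixed, 1);
}

/* Without -fstrict-flex-arrays, expected to return (size_t)-1 ("unknown"). */
size_t trailing_size(struct hdr *h)
{
        return __builtin_object_size(h->trailing, 1);
}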

The new -fstrict-flex-arrays option is now available in recent GCC and Clang builds, but any project using it will need to replace all fake flexible arrays with true flexible arrays first (to separate them from any fixed-size trailing arrays). This comes with several challenges.

Replace 0-length arrays

Most replacement of 0-length arrays with flexible arrays requires no special handling. Simply removing the “0” in the array declaration is sufficient. For example,

struct obj {
        ...
        int flex[0];
};

becomes:

struct obj {
        ...
        int flex[];
};

However, there are a few things of note that can go wrong with these conversions:

Changes to sizeof()

While sizeof(instance->flex) for a 0-length array returns 0, it becomes a compile-time failure once the member is a true flexible array. This usually manifests within complex macros that examine the details of a given struct, and it usually points at a hidden bug that switching to a flexible array helps expose.
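For example (hypothetical struct), the helper below compiles and returns 0 while flex is declared as a 0-length array; change the declaration to a true flexible array (int flex[];) and the sizeof() becomes a compile-time error, flushing out any code that silently relied on the old behavior:

struct obj {
        int count;
        int flex[0];    /* converting this to "int flex[]" breaks the sizeof() below */
};

unsigned long flex_member_size(struct obj *instance)
{
        /* 0 for a 0-length array; a compile error for a true flexible array. */
        return sizeof(instance->flex);
}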

Pass by value

Converting to a true flexible array will expose any strange cases of trying to pass a flexible array struct by value. These are almost always a bug, so it's another case where a problem is exposed by cleaning up fake flexible arrays. For example:

net/core/flow_dissector.c: In function 'is_pppoe_ses_hdr_valid':
net/core/flow_dissector.c:898:13: note: the ABI of passing struct with a flexible array member has changed in GCC 4.4

898 | static bool is_pppoe_ses_hdr_valid(struct pppoe_hdr hdr)
    |                                   ^~~~~~~~~~~~~~~~~~~~~~

Flexible arrays in unions

C99 6.7.2.1 “Structure and union specifiers” #16 declares true flexible arrays may not be in unions nor otherwise empty structures: “As a special case, the last element of a structure with more than one named member may have an incomplete array type; this is called a flexible array member.”

However, this situation is allowed by the GNU “trailing array” extension, where such arrays are treated as flexible arrays. More importantly, flexible arrays (via the GNU extension) are used in unions in many places throughout Linux code. The C99 treatment of true flexible arrays appears to be only a definitional limitation (and likely just an oversight) since the restriction can be worked around with creative use of anonymous structs. For example, this will build:

struct obj {
        ...
        union {
                struct foo name1[0];
                struct bar name2[0];
        };
};

but this will not:

struct obj {
        ...
        union {
                struct foo name1[];
                struct bar name2[];
        };
};
<source>:5:22: error: flexible array member in union
  5 | struct foo name1[];
    |            ^~~~~

But in both cases, the compiler treats name1 and name2 as flexible arrays. What will happily compile, though, is wrapping each true flexible array in a struct that has at least one other member that is not a flexible array, even an empty anonymous struct (i.e. one taking up no size):

struct obj {
        ...
        union {
                struct {
                        struct { } __unused_member1;
                        struct foo name1[];
                };
                struct {
                        struct { } __unused_member2;
                        struct bar name2[];
                };
        };
};

Thankfully, this was wrapped in Linux with the DECLARE_FLEX_ARRAY() macro:

struct obj {
        ...
        union {
                DECLARE_FLEX_ARRAY(struct foo, name1);
                DECLARE_FLEX_ARRAY(struct bar, name2);
        };
};

which makes this much more readable. I hope to see future C standards eliminate this restriction.

Overlapping composite structure members

This is another case of a real bug being exposed by true flexible array usage, as it is possible to create an implicit union of a flexible array and something else by including a flexible array structure in another struct. For example:

struct inner {
        ...
        int flex[0];
};

struct outer {
        ...
        struct inner header;
        int overlap;
        ...
} *instance;

Here, instance->overlap and instance->header.flex[0] share the same memory location. Whether or not this is intentional cannot be understood by the compiler. If it is a bug, then using a true flexible array will trigger a warning. If it's not a bug, rearranging the structures to use an actual union is needed (see above).
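If the aliasing really is intended, one possible rearrangement is sketched below (assuming the union can be the last member of struct outer); it reuses the DECLARE_FLEX_ARRAY() pattern described earlier so the overlap is explicit rather than implicit:

struct inner {
        ...
        /* no trailing flex[] here any more */
};

struct outer {
        ...
        struct inner header;
        union {
                int overlap;
                DECLARE_FLEX_ARRAY(int, flex); /* the data that used to overlap */
        };
} *instance;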

struct definition parsed by something other than a C compiler

If the converted struct is part of a source file that is parsed by something that is not a C compiler, it may not be prepared to handle empty square braces on arrays. For example, SWIG broke when the Linux Userspace API headers got converted. This is a known issue in SWIG, and can be worked around in various ways.

Replace 1-element arrays

Most 1-element array conversions are similar to 0-length array conversions, but with the effect that the surrounding structure's sizeof() changes. This leads to a few additional significant issues:

Size calculations

If a struct is used entirely internally to Linux, it is generally sufficient to make changes to both the struct and all size calculations, which will result in identical binary output. For example:

struct object {
        ...
        int flex[1];
} *p;

p = kmalloc(sizeof(*p) + sizeof(p->flex[0]) * (count - 1),
            GFP_KERNEL);

With the conversion, the count - 1 above becomes just count:

struct object {
        ...
        int flex[];
} *p;

p = kmalloc(sizeof(*p) + sizeof(p->flex[0]) * count,
            GFP_KERNEL);

If all size calculations are correctly adjusted, there should be no differences in the resulting allocation size, etc. If a discrepancy is found, it is going to be either a bug introduced by the conversion, or the discovery of an existing bug in the original size calculations.

Note that depending on the sizes of the structure, its flexible array elements, and count, there is also a risk of arithmetic overflow. Linux uses the struct_size() macro to perform these calculations so that the result saturates to at most SIZE_MAX, which will cause an allocation failure rather than a wrapped-around (and therefore under-sized) allocation. So the best way to perform this allocation is:

p = kmalloc(struct_size(p, flex, count), GFP_KERNEL);
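For reference, here is a rough sketch of what struct_size() computes; the real macro lives in include/linux/overflow.h and is built from the kernel's overflow-checking helpers, so this stand-in (using compiler builtins) is only an illustration:

static inline unsigned long struct_size_sketch(unsigned long base_size,
                                               unsigned long elem_size,
                                               unsigned long count)
{
        unsigned long bytes;

        /* Saturate to SIZE_MAX instead of wrapping around on overflow. */
        if (__builtin_mul_overflow(elem_size, count, &bytes) ||
            __builtin_add_overflow(bytes, base_size, &bytes))
                return ~0UL;

        return bytes;
}

With that reading, struct_size(p, flex, count) corresponds roughly to struct_size_sketch(sizeof(*p), sizeof(*p->flex), count).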

Padding and interface sizes

When a structure definition is also used by a codebase we don't control (e.g. firmware, userspace, virtualization), changing its layout or sizeof() may break such code. Specifically, it may break that code's ability to communicate correctly with the kernel across the shared interface. Such structures cannot suddenly lose the single element of their trailing array. In these cases, a new member needs to be used for kernel code, explicitly keeping the original member for backward compatibility. For example:

struct object {
        ...
        int flex[1];
};

becomes:

struct object {
        ...
        union {
                int flex[1];
                DECLARE_FLEX_ARRAY(int, data);
        };
};

Now the kernel will only use the newly named data member (and gain any potential bounds checking protections from the compiler), and external code that shares this structure definition can continue to use the flex member, all without changing the size of the structure.

This has the downside of needing to change the member name throughout Linux. However, if the other side of the interface doesn't actually use the original member, we can avoid this: convert the member to a flexible array and add explicit padding instead. This way, no collateral member-name changes are needed in Linux:

struct object {
        ...
        union {
                int __padding;
                DECLARE_FLEX_ARRAY(int, flex);
        };
};

Replace multi-element arrays

In the cases of trailing arrays with larger element counts, the usage needs to be even more carefully studied. Most problems end up looking very similar to 1-element interface conversions above. For example, if there is some hardware interface that returns at least 4 bytes for an otherwise dynamically sized array, the conversion would start from here:

struct object {
        ...
        unsigned char data[4];
};

which becomes:

struct object {
        ...
        union {
                unsigned char __padding[4];
                DECLARE_FLEX_ARRAY(unsigned char, data);
        };
};

Enable -Warray-bounds

With all fixed-size array bounds able to be determined at build time, -Warray-bounds can actually perform the checking, keeping provably bad code out of Linux. (This option is already part of -Wall; Linux isn't quite able to enable the option for itself yet, but it is strongly recommended for other C projects.) As a reminder, the optimization level will impact this option. The kernel is built with -O2, which is likely the right choice for most C projects.
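As a small illustration (hypothetical struct, compiled with -O2 -Warray-bounds), a provably out-of-bounds constant index into a fixed-size array is now diagnosed at build time:

struct sensor {
        int vals[8];
};

int read_past_end(struct sensor *s)
{
        return s->vals[8];      /* off-by-one: flagged by -Warray-bounds at -O2 */
}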

Enable -Wzero-length-array

If all zero length arrays have been removed from the code, future uses can be kept out of the code by using -Wzero-length-array. This option is currently only available in Clang, and will warn when finding the definition of such structure members, rather than warning when they are accessed in code. Because of this, it is unlikely to ever be enabled in Linux since some array sizes are constructed from build configurations, and may drop to 0 when they are unused (i.e. they were never used as flexible arrays). As such, it is sufficient to use -fstrict-flex-arrays (see below) and -Warray-bounds.

Enable -fstrict-flex-arrays

Once all the fake flexible arrays have been converted to true flexible arrays, the remaining fixed-size trailing arrays can start being treated as actually fixed-size by enabling -fstrict-flex-arrays. Future attempts to add fake flexible arrays to the code will then elicit warnings as part of the existing diagnostics from -Warray-bounds, since all fake flexible arrays are now treated as fixed-size arrays. (Note that this option catches the subset of 0-length arrays flagged by -Wzero-length-array when they are actually used in the code, so -Wzero-length-array may be redundant.)
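For example (a sketch, hypothetical struct): once the option is enabled, a lingering fake flexible array is treated as exactly one element, so indexing past it is picked up by the same diagnostics as any other fixed-size array:

struct legacy {
        int count;
        int data[1];    /* fake flexible array left unconverted */
};

int get_fifth(struct legacy *p)
{
        /*
         * Without -fstrict-flex-arrays, data[] is assumed to be dynamically
         * sized and this access is not diagnosed.  With the option enabled,
         * a constant index past element 0 trips -Warray-bounds, and a
         * runtime index trips -fsanitize=bounds.
         */
        return p->data[4];
}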

Coming soon: annotate bounds of flexible arrays

With flexible arrays now a first-class citizen in Linux and the compilers, it becomes possible to extend their available diagnostics. What the compiler is missing is knowledge of how the length of a given flexible array is tracked. For well-described flexible array structs, this means associating the member holding the element count with the flexible array member. This idea is not new, though prior implementation proposals have wanted to make changes to the C language syntax. A simpler approach is the addition of struct member attributes, and is under discussion and early development by both the GCC and Clang developer communities.

Add __attribute__((__counted_by__(member)))

In order to annotate flexible arrays, a new attribute could be used to describe the relationship between struct members. For example:

struct object {
        ...
        signed char items;
        ...
        int flex[];
} *p;

becomes:

struct object {
        ...
        signed char items;
        ...
        int flex[] __attribute__((__counted_by__(items)));
} *p;

This would allow -fsanitize=bounds to check for out-of-bounds accesses. For example, given the above annotation, each of the marked accesses into p->flex should trap:

sum += p->flex[-1];  // trap all negative indexes
sum += p->flex[128]; // trap when index larger than bounds type
sum += p->flex[0];   // trap when p->items <= 0
sum += p->flex[5];   // trap when p->items <= 5
sum += p->flex[idx]; // trap when p->items <= idx || idx < 0

The type associated with the bounds check (signed char in the example above) should perhaps be required to be an unsigned type, but Linux has so many counters implemented as int that changing them all to unsigned becomes an additional refactoring burden, especially since some are sneakily used with negative values in other parts of the code. It is better to leave them as-is (though perhaps emit a warning) and just add a negativity check at access time; switching a counter to unsigned then potentially becomes a small performance improvement.

Similar to -fsanitize=bounds above, __builtin_dynamic_object_size() will perform the expected calculations with the items member as the basis for the resulting size (and where values less than 0 are considered to be 0 to avoid pathological calculations):

p->items = 5;
assert(__builtin_dynamic_object_size(p, 1) ==
        sizeof(*p) + 5 * sizeof(*p->flex));
assert(__builtin_dynamic_object_size(p->flex, 1) ==
        5 * sizeof(*p->flex));
assert(__builtin_dynamic_object_size(&p->flex[0], 1) ==
        sizeof(*p->flex));
assert(__builtin_dynamic_object_size(&p->flex[2], 0) ==
        3 * sizeof(*p->flex));

p->items = -10;
assert(__builtin_dynamic_object_size(p, 0) == sizeof(*p));
assert(__builtin_dynamic_object_size(p, 1) == sizeof(*p));
assert(__builtin_dynamic_object_size(p->flex, 1) == 0);
assert(__builtin_dynamic_object_size(&p->flex[2], 1) == 0);

Additional attributes may be needed if structures explicitly use byte counts rather than element counts.

Scope considerations

Composite structures need to be able to define __counted_by__ across struct boundaries:

struct object {
        ...
        char items;
        ...
        struct inner {
                ...
                int flex[] __attribute__((__counted_by__(.items)));
        } inner;
} *ptr;

This may mean passing &ptr->inner to a function will lose the bounds knowledge, but it may be possible to automatically include a bounds argument as an invisible function argument, as any function able to understand the layout of struct inner must by definition have visibility into the definition of struct object. For example, with this:

struct object instance;
...
func(&instance.inner);
...
void func(struct inner *ptr) {
        ...
        ptr->flex[foo]; /* "items" is not in scope */
        ...
}

The prototype could either be rejected due to lack of available scope, or could be automatically converted into passing the outer object pointer with an injected scope:

void func(struct object *__ptr) {
        struct inner *ptr = &__ptr->inner;
        ...
        ptr->flex[foo]; /* __ptr->items is in scope */
        ...
}

Annotate kernel flexible array structs

With the compiler attribute available, all of Linux's flexible arrays can be updated to include the annotation, and CONFIG_FORTIFY_SOURCE can be expanded to use __builtin_dynamic_object_size().
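As a rough sketch of the idea (this is not the actual fortify-string.h implementation, and report_overflow() is a hypothetical reporting hook), a fortified copy helper can consult __builtin_dynamic_object_size() so that, once flexible arrays carry a counted-by annotation, the run-time bound follows the annotated element count:

extern void report_overflow(void);      /* hypothetical */

static inline void fortified_copy(void *dst, const void *src,
                                  unsigned long len)
{
        /*
         * Mode 1 returns the size of the closest enclosing subobject,
         * or (size_t)-1 when the size cannot be determined.
         */
        unsigned long bound = __builtin_dynamic_object_size(dst, 1);

        if (bound != ~0UL && len > bound)
                report_overflow();

        __builtin_memcpy(dst, src, len);
}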

Replace DECLARE_FLEX_ARRAY with DECLARE_BOUNDED_ARRAY

Most uses of DECLARE_FLEX_ARRAY() can be replaced with DECLARE_BOUNDED_ARRAY(), explicitly naming the expected flex array bounds member. For example, if we had:

struct obj {
        ...
        int items;
        ...
        union {
                DECLARE_FLEX_ARRAY(struct foo, name1);
                DECLARE_FLEX_ARRAY(struct bar, name2);
        };
};

it would become:

struct obj {
        ...
        int items;
        ...
        union {
                DECLARE_BOUNDED_ARRAY(struct foo, name1, items);
                DECLARE_BOUNDED_ARRAY(struct bar, name2, items);
        };
};

Add manual annotations

Any flexible array structures not already using DECLARE_BOUNDED_ARRAY() can be annotated manually with the new attribute. For example, assuming the proposed __attribute__((__counted_by__(member))) is wrapped in a macro named __counted_by():

struct obj {
        ...
        int items;
        ...
        int flex[];
};

becomes:

struct obj {
        ...
        int items;
        ...
        int flex[] __counted_by(items);
};
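One plausible way to define such a __counted_by() wrapper is sketched below (the eventual kernel macro may differ, but it has to degrade to nothing on compilers that lack the attribute):

#ifdef __has_attribute
#if __has_attribute(__counted_by__)
#define __counted_by(member)    __attribute__((__counted_by__(member)))
#endif
#endif

#ifndef __counted_by
#define __counted_by(member)
#endif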

Future work: expand attribute beyond arrays

It will also be possible to use the new attribute on pointers and function arguments, not just flexible arrays. All the same details are available, though there are obvious differences for enclosing structure sizes, as the pointers (usually) aim outside the struct itself. Regardless, being able to check offsets and inform __builtin_dynamic_object_size() would allow runtime checking in several more places. For example, given this:

struct object {
        ...
        unsigned char items;
        ...
        int *data __attribute__((__counted_by__(items)));
        ...
} *p;

It should be possible to detect sizing information:

p->items = 5;
assert(__builtin_dynamic_object_size(p->data, 1) ==
        5 * sizeof(*p->data));
assert(__builtin_dynamic_object_size(*p->data, 1) ==
        sizeof(*p->data));
assert(__builtin_dynamic_object_size(*p->data, 0) ==
        5 * sizeof(*p->data));

And it should be possible to trap on the following bad accesses:

int *ptr = p->data;
sum += ptr[-1];  // trap all negative indexes
sum += ptr[500]; // trap when index larger than bounds type
sum += ptr[0];   // trap when p->items <= 0
sum += ptr[5];   // trap when p->items <= 5
ptr += 5;        // don't trap yet: allow ptr++ in a for loop
sum += *ptr;     // trap when p->items <= 5

A safer code base

A C codebase that has refactored all of its arrays into proper flexible arrays can now finally be built with:

        -Warray-bounds
        -fstrict-flex-arrays
        -fsanitize=bounds
        -fsanitize-undefined-trap-on-error
        -D_FORTIFY_SOURCE=3

With this, the burdens of C array index bounds checking will have been shifted to the toolchain, and array index overflow flaw exploitation can be a thing of the past, reducing severity to a simple denial of service (assuming the traps aren't handled gracefully). For the next trick, new code can be written in a language that is memory safe to start with (e.g. Rust).

Acknowledgements

Thanks to many people who gave me feedback on this post: Nick Desaulniers, Gustavo A. R. Silva, Bill Wendling, Qing Zhao, Kara Olive, Chris Palmer, Steven Rostedt, Allen Webb, Julien Voisin, Guenter Roeck, Evan Benn, Seth Jenkins, Alexander Potapenko, Ricardo Ribalda, and Kevin Chowski.

Discussion

Please join this thread with your thoughts, comments, and corrections. :)

 

from mcgrof


After 3 years since the announcement of the first release of kdevops I'd like to announce the release of v6.2-rc1 of kdevops!

kdevops is designed to help with automation of Linux kernel development workflows. At first, it was not clear how, and if, kdevops could easily be used outside of filesystem testing. In fact, my last post about it 3 years ago explained how one could only use kdevops for other things in an odd way: one had to fork it for each different workflow. That's old nonsense now. kdevops has grown to adopt kconfig, so different workflows are now possible in one single tree. Embracing other things, such as using jinja2 for file templating with ansible, and having to figure out a way to add PCI-E passthrough support through kconfig, made me realize that growth of the project is no longer a concern, it is actually a feature now. It is clear now that new technologies and very complex workflows can easily be added to kdevops.

But that is easy to say without proof, and fortunately I have it. There are two new technologies now well supported in kdevops that curious folks can start mucking around with, and which might otherwise take a bit of time to ramp up on. The technologies are Zoned storage and CXL. Supporting new technologies also means ensuring you get whatever tooling you might need to test or work with them.

So for instance, getting a full Linux kernel development workflow going for CXL with the meson unit tests, even with PCI-E passthrough enabled, on the latest linux-next kernel is now reduced to just a few basic commands, in a Linux distribution / cloud provider agnostic manner:

make dynconfig
make
make bringup
make linux
make cxl
make cxl-test-meson

Just ask a typical CXL Linux kernel developer how long it took them to get a CXL Linux kernel development & test environment up and running that they were happy with. And ask if it was reproducible. This is all now reduced to just 6 commands.

As for the details, it has been 8 months since the last release, and over that time the project has received 680 commits. I'd like to thank the developers who contributed:

Adam Manzanares
Amir Goldstein
Chandan Babu R
Jeff Layton
Joel Granados
Josef Bacik
Luis Chamberlain
Pankaj Raghav

I'd also like to thank my employer for trusting in this work, and allowing me to share a big iron server to help the community with Linux kernel stable work and general kernel technology enablement.

As for the exact details of the changes merged, there are so many! So I've tried to provide a nice terse summary of the highlights in the git tag for v6.2-rc1. 8 months was certainly a long time to wait for a new release, so my hope is that we'll now bake releases in tandem with the Linux kernel, in cadence with the same Linux kernel versioning and release timeline.

Based on feedback at LSFMM this year, the project is now under the github linux-kdevops organization. This enables other developers to push into the tree. This lets us scale, especially as different workflows are supported.

If you see value in enabling rapid ramp-up of Linux kernel development through kdevops for your subsystem / technology, feel free to join the party and either send a pull request to the group or just send patches.

 

from Konstantin Ryabitsev

While b4 started out as a way for maintainers to retrieve patches from mailing lists, it also has contributor-oriented features. Starting with version 0.10 b4 can:

  • create and manage patch series and cover letters
  • track and auto-reroll series revisions
  • display range-diffs between revisions
  • apply trailers received from reviewers and maintainers
  • submit patches without needing a valid SMTP gateway

These features are still considered experimental, but they should be stable for most work and I'd be happy to receive further feedback from occasional contributors.

In this article, we'll go through the process of submitting an actual typo fix patch to the upstream kernel. This bug was identified a few years ago and submitted via bugzilla, but never fixed:

Accompanying video

This article has an accompanying video where I go through all the steps and submit the actual patch at the end:

Installing the latest b4 version

Start by installing b4. The easiest is to do it via pip, as this would grab the latest stable version:

$ pip install --user b4
[...]
$ b4 --version
0.11.1

If you get an error or an older version of b4, please check that your $PATH contains $HOME/.local/bin where pip installs the binaries.

Preparing the tree

  • b4 prep -n [name-of-branch] -f [nearest-tag]

Next, prepare a topical branch where you will be doing your work. We'll be fixing a typo in arch/arm/boot/dts/aspeed-bmc-opp-lanyang.dts, and we'll base this work on tag v6.1:

$ b4 prep -n lanyang-dts-typo -f v6.1
Created new branch b4/lanyang-dts-typo
Created the default cover letter, you can edit with --edit-cover.

This is just a regular branch with its name prefixed with “b4/”:

$ git branch
* b4/lanyang-dts-typo
  master

You can do all the normal operations with it, and the only special thing about it is that it has an “empty commit” at the start of the series containing the template of our cover letter.

Editing the cover letter

  • b4 prep --edit-cover

If you plan to submit a single patch, then the cover letter is not that necessary and will only be used to track the destination addresses and changelog entries. You can delete most of the template content and leave just the title and sign-off. The tracking information json will always be appended to the end automatically — you don't need to worry about it.

Here's what the commit looks like after I edited it:

$ git cat-file -p HEAD
tree c7c1b7db9ced3eba518cfc1f711e9d89f73f8667
parent 830b3c68c1fb1e9176028d02ef86f3cf76aa2476
author Konstantin Ryabitsev <icon@mricon.com> 1671656701 -0500
committer Konstantin Ryabitsev <icon@mricon.com> 1671656701 -0500

Simple typo fix for the lanyang dts

Signed-off-by: Konstantin Ryabitsev <icon@mricon.com>

--- b4-submit-tracking ---
# This section is used internally by b4 prep for tracking purposes.
{
  "series": {
    "revision": 1,
    "change-id": "20221221-lanyang-dts-typo-8509e8ffccd4",
    "base-branch": "master",
    "prefixes": []
  }
}

Committing your work

You can add commits to this branch as you normally would with any other git work. I am going to fix two obvious typos in a single file and make a single commit:

$ git show HEAD
commit 820ce2d9bc7c88e1515642cf3fc4005a52e4c490 (HEAD -> b4/lanyang-dts-typo)
Author: Konstantin Ryabitsev <icon@mricon.com>
Date:   Wed Dec 21 16:17:21 2022 -0500

    arm: lanyang: fix lable->label typo for lanyang dts

    Fix an obvious spelling error in the dts file for Lanyang BMC.
    This was reported via bugzilla a few years ago but never fixed.

    Reported-by: Jens Schleusener <Jens.Schleusener@fossies.org>
    Link: https://bugzilla.kernel.org/show_bug.cgi?id=205891
    Signed-off-by: Konstantin Ryabitsev <icon@mricon.com>

diff --git a/arch/arm/boot/dts/aspeed-bmc-opp-lanyang.dts b/arch/arm/boot/dts/aspeed-bmc-opp-lanyang.dts
index c0847636f20b..e72e8ef5bff2 100644
--- a/arch/arm/boot/dts/aspeed-bmc-opp-lanyang.dts
+++ b/arch/arm/boot/dts/aspeed-bmc-opp-lanyang.dts
@@ -52,12 +52,12 @@ hdd_fault {
                        gpios = <&gpio ASPEED_GPIO(B, 3) GPIO_ACTIVE_HIGH>;
                };
                bmc_err {
-                       lable = "BMC_fault";
+                       label = "BMC_fault";
                        gpios = <&gpio ASPEED_GPIO(H, 6) GPIO_ACTIVE_HIGH>;
                };

                sys_err {
-                       lable = "Sys_fault";
+                       label = "Sys_fault";
                        gpios = <&gpio ASPEED_GPIO(H, 7) GPIO_ACTIVE_HIGH>;
                };
        };

Collecting To: and Cc: addresses

  • b4 prep --auto-to-cc

After you've committed your work, you will want to collect the addresses of people who should be the ones reviewing it. Running b4 prep --auto-to-cc will invoke scripts/get_maintainer.pl with the default recommended flags to find out who should go into the To: and Cc: headers:

$ b4 prep --auto-to-cc
Will collect To: addresses using get_maintainer.pl
Will collect Cc: addresses using get_maintainer.pl
Collecting To/Cc addresses
    + To: Rob Herring <...>
    + To: Krzysztof Kozlowski <...>
    + To: Joel Stanley <...>
    + To: Andrew Jeffery <...>
    + Cc: devicetree@vger.kernel.org
    + Cc: linux-arm-kernel@lists.infradead.org
    + Cc: linux-aspeed@lists.ozlabs.org
    + Cc: linux-kernel@vger.kernel.org
    + Cc: Jens Schleusener <...>
---
You can trim/expand this list with: b4 prep --edit-cover
Invoking git-filter-repo to update the cover letter.
New history written in 0.06 seconds...
Completely finished after 0.33 seconds.

These addresses will be added to the cover letter and you can edit them to add/remove destinations using the usual b4 prep --edit-cover command.

Creating your patatt keypair for web endpoint submission

(This needs to be done only once.)

  • patatt genkey

Note: if you already have a PGP key and it's set as user.signingKey, then you can skip this section entirely.

Before we submit the patch, let's set up the keypair to sign our contributions. This is not strictly necessary if you are going to be using your own SMTP server to submit the patches, but it's a required step if you will use the kernel.org patch submission endpoint (which is what b4 will use in the absence of any [sendemail] sections in your git config).

The process is very simple. Run patatt genkey and add the resulting [patatt] section to your ~/.gitconfig as instructed by the output.

NOTE: You will want to back up the contents of your ~/.local/share/patatt so you don't lose access to your private key.

Dry-run and checkpatch

  • b4 send -o /tmp/tosend
  • ./scripts/checkpatch.pl /tmp/tosend/*

Next, generate the patches and look at their contents to make sure that everything is looking sane. Good things to check are:

  • the From: address
  • the To: and Cc: addresses
  • general patch formatting
  • cover letter formatting (if more than 1 patch in the series)

If everything looks sane, one more recommended step is to run checkpatch.pl from the top of the kernel tree:

$ ./scripts/checkpatch.pl /tmp/tosend/*
total: 0 errors, 0 warnings, 14 lines checked

/tmp/tosend/0001-arm-lanyang-fix-lable-label-typo-for-lanyang-dts.eml has no obvious style problems and is ready for submission.

Register your key with the web submission endpoint

(This needs to be done only once, unless you change your keys.)

  • b4 send --web-auth-new
  • b4 send --web-auth-verify [challenge]

If you're not going to use your own SMTP server to send the patch, you should register your new keypair with the endpoint:

$ b4 send --web-auth-new
Will submit a new email authorization request to:
  Endpoint: https://lkml.kernel.org/_b4_submit
      Name: Konstantin Ryabitsev
  Identity: icon@mricon.com
  Selector: 20221221
    Pubkey: ed25519:24L8+ejW6PwbTbrJ/uT8HmSM8XkvGGtjTZ6NftSSI6I=
---
Press Enter to confirm or Ctrl-C to abort
Submitting new auth request to https://lkml.kernel.org/_b4_submit
---
Challenge generated and sent to icon@mricon.com
Once you receive it, run b4 send --web-auth-verify [challenge-string]

The challenge is a UUID4 string and this step is a simple verification that you are able to receive email at the address you want associated with this key. Once you receive the challenge, complete the process as described:

$ b4 send --web-auth-verify 897851db-9b84-4117-9d82-1d970f9df5f8
Signing challenge
Submitting verification to https://lkml.kernel.org/_b4_submit
---
Challenge successfully verified for icon@mricon.com
You may now use this endpoint for submitting patches.

OR, set up your [sendemail] section

You don't have to use the web endpoint — it exists primarily for people who are not able or not willing to set up their SMTP information with git. Setting up an SMTP gateway is not a straightforward process for many:

  • platforms using OAuth require setting up “application-specific passwords”
  • some companies only provide Exchange or browser-based access to email and don't offer any other way to send mail
  • some company SMTP gateways rewrite messages to add lengthy disclaimers or rewrite links to quarantine them

However, if you have access to a functional SMTP gateway, then you are encouraged to use it instead of submitting via the web endpoint, as this ensures that the development process remains distributed and not dependent on any central services. Just follow instructions in man git-send-email and add a valid [sendemail] section to your git config. If b4 finds it, it will use it instead of relying on the web endpoint.

[sendemail]
    smtpServer = smtp.gmail.com
    smtpServerPort = 465
    smtpEncryption = ssl
    smtpUser = yourname@gmail.com
    smtpPass = your-gmail-app-password

Reflect the email to yourself

  • b4 send --reflect

This is the last step to use before sending off your contribution. Note that it will fill out the To: and Cc: headers of all messages with the actual recipients, but it will NOT actually send mail to them, just to yourself. Mail servers don't pay attention to those headers — the only thing that matters to them is what was specified in the RCPT TO stage of the SMTP envelope negotiation.

This step is particularly useful if you're going to send your patches via the web endpoint. Unless your email address is from one of the following domains, the From: header will be rewritten in order to not violate DMARC policies:

  • @kernel.org
  • @linuxfoundation.org
  • @linux.dev

If your email domain doesn't match the above, the From: header will be rewritten to be a kernel.org dummy address. Your actual From: will be added to the body of the message where git expects to find it, and the Reply-To: header will be set so anyone replying to your message will be sending it to the right place.

Send it off!

  • b4 send

If all your tests are looking good, then you are ready to send your work. Fire off “b4 send”, review the “Ready to:” section for one final check and either Ctrl-C to get out of it, or hit Enter to submit your work upstream.

Coming up next

In the next post, I will go over:

  • making changes to your patches using: git rebase -i
  • retrieving and applying follow-up trailers using: b4 trailers -u
  • comparing v2 and v1 to see what changes you made using: b4 prep --compare-to v1
  • adding changelog entries using: b4 prep --edit-cover

Documentation

All contributor-oriented features of b4 are documented on the following site:

 

from joelfernandes

Below are some notes I wrote while studying hrtimer slack behavior (range timers), which was added to reduce wakeups and save power, in the commit below. The idea is that:

  1. Normal hrtimers have both a soft and a hard expiry which are equal to each other.

  2. Hrtimers with timer slack have a soft expiry and a hard expiry which is the soft expiry + delta.

The slack/delay effect is achieved by splitting the execution of the timer function and the programming of the next timer event into 2 separate steps. That is, we execute the timer function as soon as we notice that its soft expiry has passed (hrtimer_run_queues()). However, for programming the next timer interrupt, we only look at the hard expiry (hrtimer_update_next_event() -> __hrtimer_get_next_event() -> __hrtimer_next_event_base() -> hrtimer_get_expires()). As a result, the only way a slack-based timer will execute before its slack time elapses is if another timer without any slack gets queued such that it hard-expires before the slack time of the slack-based timer passes.
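For reference, the kernel's expiry helpers make the two cases explicit; the sketch below is simplified from include/linux/hrtimer.h (details may differ across kernel versions):

/* Normal timer: soft and hard expiry are the same instant. */
static inline void hrtimer_set_expires(struct hrtimer *timer, ktime_t time)
{
	timer->node.expires = time;	/* hard expiry */
	timer->_softexpires = time;	/* soft expiry */
}

/* Range timer: hard expiry is the soft expiry plus the slack delta. */
static inline void hrtimer_set_expires_range_ns(struct hrtimer *timer,
						ktime_t time, u64 delta)
{
	timer->_softexpires = time;
	timer->node.expires = ktime_add_safe(time, ns_to_ktime(delta));
}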

The commit containing the original code added for range timers is:

commit 654c8e0b1c623b156c5b92f28d914ab38c9c2c90
Author: Arjan van de Ven <arjan@linux.intel.com>
Date:   Mon Sep 1 15:47:08 2008 -0700

    hrtimer: turn hrtimers into range timers
   
    this patch turns hrtimers into range timers;
    they have 2 expire points
    1) the soft expire point
    2) the hard expire point
   
    the kernel will do it's regular best effort attempt to get the timer run at the hard expire point. However, if some other time fires after the soft expire point, the kernel now has the freedom to fire this timer at this point, and thus grouping the events and preventing a power-expensive wakeup in the future.

The original code seems a bit buggy. I got a bit confused about how/where hrtimer_interrupt() handles the case where other normal timers, which expire before the slack time elapses, have their next timer interrupt programmed correctly so that the interrupt goes off before the slack time passes.

To see the issue, consider the case where we have 2 timers queued:

  1. The first one soft expires at t = 10, and say it has a slack of 50, so it hard expires at t = 60.

  2. The second one is a normal timer, so the soft/hard expiry of it is both at t = 30.

Now say an hrtimer interrupt happens at t=5 courtesy of an unrelated expiring timer. In the code below, we notice that the next expiring timer (the one with slack) has not soft-expired yet, so we have no reason to run it. However, we reprogram the next timer interrupt to be t=60, which is its hard expiry time (this is stored in expires_next to use as the value to program the next timer interrupt with). Now we have a big problem, because the timer expiring at t=30 will not run in time and will instead run much later.

As shown below, the loop in hrtimer_interrupt() goes through all the active timers in the timerqueue, _softexpires is made to be the real expiry, and the old _expires now becomes _softexpires + slack.

       while((node = timerqueue_getnext(&base->active))) {
              struct hrtimer *timer;

              timer = container_of(node, struct hrtimer, node);

              /*
               * The immediate goal for using the softexpires is
               * minimizing wakeups, not running timers at the
               * earliest interrupt after their soft expiration.
               * This allows us to avoid using a Priority Search
               * Tree, which can answer a stabbing querry for
               * overlapping intervals and instead use the simple
               * BST we already have.
               * We don't add extra wakeups by delaying timers that
               * are right-of a not yet expired timer, because that
               * timer will have to trigger a wakeup anyway.
               */

              if (basenow.tv64 < hrtimer_get_softexpires_tv64(timer)) {
                      ktime_t expires;

                      expires = ktime_sub(hrtimer_get_expires(timer),
                                          base->offset);
                      if (expires.tv64 < expires_next.tv64)
                              expires_next = expires;
                      break;
              }

              __run_hrtimer(timer, &basenow);
      }

However, this seems to be an old kernel issue, as in upstream v6.0 I believe the next hrtimer interrupt will be programmed correctly, because __hrtimer_next_event_base() calls hrtimer_get_expires() which correctly uses the “hard expiry” times to do the programming.

As of v6.2, the __hrtimer_run_queues() function looks like this:

static void __hrtimer_run_queues(struct hrtimer_cpu_base *cpu_base, ktime_t now,
				 unsigned long flags, unsigned int active_mask)
{
	struct hrtimer_clock_base *base;
	unsigned int active = cpu_base->active_bases & active_mask;

	for_each_active_base(base, cpu_base, active) {
		struct timerqueue_node *node;
		ktime_t basenow;

		basenow = ktime_add(now, base->offset);

		while ((node = timerqueue_getnext(&base->active))) {
			struct hrtimer *timer;

			timer = container_of(node, struct hrtimer, node);

			/*
			 * The immediate goal for using the softexpires is
			 * minimizing wakeups, not running timers at the
			 * earliest interrupt after their soft expiration.
			 * This allows us to avoid using a Priority Search
			 * Tree, which can answer a stabbing query for
			 * overlapping intervals and instead use the simple
			 * BST we already have.
			 * We don't add extra wakeups by delaying timers that
			 * are right-of a not yet expired timer, because that
			 * timer will have to trigger a wakeup anyway.
			 */
			if (basenow < hrtimer_get_softexpires_tv64(timer))
				break;

			__run_hrtimer(cpu_base, base, timer, &basenow, flags);
			if (active_mask == HRTIMER_ACTIVE_SOFT)
				hrtimer_sync_wait_running(cpu_base, flags);
		}
	}
}

The utilization of hrtimer_get_softexpires_tv64() might be perplexing, as it may raise the question of how this loop expires non-slack timers that possess only a hard expiry time. To clarify, it's important to note that what was once referred to as expiry is now considered soft expiry for non-slack timers. Consequently, the condition basenow < hrtimer_get_softexpires_tv64(timer) is capable of expiring both slack and non-slack timers effectively.

 

from Jakub Kicinski

Kernel TLS implements the record encapsulation and cryptography of the TLS protocol. There are four areas where implementing (a portion of) TLS in the kernel helps:

  • enabling seamless acceleration (NIC or crypto accelerator offload)
  • enabling sendfile on encrypted connections
  • saving extra data copies (data can be encrypted as it is copied into the kernel)
  • enabling the use of TLS on kernel sockets (nbd, NFS etc.)

Kernel TLS handles only data records, turning them into a cleartext data stream; all the control records (TLS handshake etc.) get sent to the application via a side channel for user space (OpenSSL or such) to process. The first implementation of kTLS was designed in the good old days of TLS 1.2. When TLS 1.3 came into the picture the interest in kTLS had slightly diminished and the implementation, although functional, was rather simple and did not retain all the benefits. This post covers developments in the Linux 5.20 implementation of TLS which claws back the performance lost moving to TLS 1.3.

One of the features we lost in TLS 1.3 was the ability to decrypt data as it was copied into the user buffer during read. TLS 1.3 hides the true type of the record. Recall that kTLS wants to punt control records to a different path than data records. TLS 1.3 always populates the TLS header with application_data as the record type, and the real record type is appended at the end, before record padding. This means that the data has to be decrypted for the true record type to be known.

Problem 1 – CoW on big GRO segments is inefficient

kTLS was made to dutifully decrypt the TLS 1.3 records first before copying the data to user space. Modern CPUs are relatively good at copying data, so the copy is not a huge problem in itself. What’s more problematic is how the kTLS code went about performing the copy. The data queued on TCP sockets is considered read-only by the kernel. The pages the data sits in may have been zero-copy-sent and, for example, belong to a file. kTLS tried to decrypt “in place” because it didn’t know how to deal with separate input/output skbs. To decrypt “in place” it calls skb_cow_data(). As the name suggests, this function makes a copy of the memory underlying an skb, to make it safe for writing. This function, however, is intended to be run on MTU-sized skbs (individual IP packets), not skbs from the TCP receive queue. The skbs from the receive queue can be much larger than a single TLS record (16kB). As a result, TLS would CoW a 64kB skb 4 times to extract the 4 records inside it. It is even worse if we consider that the last record will likely straddle skbs, so we need to CoW two 64kB skbs to decrypt it “in place”. The “SKB CoW” diagram visualizes the problem and the solution.

The possible solutions are quite obvious – either create a custom version of skb_cow_data() or teach TLS to deal with different input and output skbs. I opted for the latter (due to further optimizations it enables). Now we use a fresh buffer for the decrypted data and there is no need to CoW the big skbs TCP produces. This fix alone results in a ~25-45% performance improvement (depending on the exact CPU SKU and available memory bandwidth), a jump in performance from abysmal to comparable with user space OpenSSL.

Problem 2 – direct decrypt

Removing pointless copies is all well and good, but as mentioned we also lost the ability to decrypt directly to the user space buffer. We still need to copy the data to user space after it has been decrypted (A in the “SKB direct decrypt” diagram, which shows just a single record, not a full skb). We can’t regain the full efficiency of TLS 1.2 because we don’t know the record type upfront. In practice, however, most of the records are data/application records (records carrying the application data rather than TLS control traffic like handshake messages or keys), so we can optimize for that case. We can optimistically decrypt to the user buffer, hoping the record contains data, and then check if we were right. Since decrypting to a user space buffer does not destroy the original encrypted record, if we turn out to be wrong we can decrypt again, this time to a kernel skb (which we can then direct to the control message queue). Obviously this sort of optimization would not be acceptable in the Internet wilderness, as attackers could force us to waste time decrypting all records twice.

The real record type in TLS 1.3 is at the tail of the data. We must either trust that the application will not overwrite the record type after we place it in its buffer (B in the diagram), or assume there will be no padding and use a kernel address as the destination of that chunk of data (C). Since record padding is also rare, I chose option (C). It improves single stream performance by around 10%.

Problem 3 – latency

Application tests have also shown that kTLS performs much worse than user space TLS in terms of the p99 RPC response latency. This is due to the fact that kTLS holds the socket lock for very long periods of time, preventing TCP from processing incoming packets. Inserting periodic TCP processing points into the kTLS code fixes the problem. The “TCP CWND SWND” graph shows the relationship between the TCP processing frequency (on the x axis, in kB of consumed data, 0 = inf), the throughput of a single TLS flow (“data”) and the TCP socket state. The TCP-perceived RTT of the connection grows the longer TLS hogs the socket lock without letting TCP process the ingress backlog, and TCP responds by growing the congestion window. Delaying the TCP processing would prevent TCP from responding to network congestion effectively, therefore I decided to be conservative and use 128kB as the TCP processing threshold.

Processing the incoming packets has the additional benefit of TLS being able to consume the data as it comes in from the NIC. Previously TLS only had access to the data already processed by TCP when the read operation began. Any packets coming in from the NIC while TLS was decrypting would be backlogged at TCP input. On the way to user space TLS would release the socket lock, allowing the TCP backlog processing to kick in. TCP processing would schedule a TLS worker, and the TLS worker would tell the application there is more data.

 

from linusw

We are discussing and working toward adding the language Rust as a second implementation language in the Linux kernel. A year ago Jake Edge made an excellent summary of the discussions so far on Rust for the Linux kernel and we (or rather Miguel and Wedson) have made further progress since then. For the record I think this is overall a good idea and worth a try. I wanted to add some background that was sketched in a mail thread for the kernel summit.

TL;DR: my claim is that Rust is attempting to raise the abstraction in the programming language and ultimately to join computer science and software engineering into one single discipline, an ambition that has been around since these disciplines were created.

Beginning with ALGOL

The first general high-level language was FORTRAN, which is still in use for some numerical analysis tasks around the world. Then came ALGOL, which attracted a wider audience.

The first “real” operating system (using virtual memory etc), the supervisor for the Atlas Machine in 1962, was as far as I can tell implemented in Atlas Autocode, a dialect of ALGOL, which was the lingua franca at the time. Pure ALGOL could not be used because ALGOL 60 had no input/output primitives, so every real-world application of ALGOL, i.e. any application not solely relying on compiled-in constants, required custom I/O additions.

Copies of the first specifications of ALGOL 60, belonging at one time to Carl-Erik Fröberg at Lund University.

ALGOL inspired CPL that inspired BCPL that inspired the B programming language that inspired the C programming language, which we use for the Linux kernel.

Between 1958 and 1968 ALGOL was the nexus of a wide attempt to join computer languages with formal logic. In this timespan we saw the ALGOL 58, ALGOL 60 and ALGOL 68 revisions come out. The outcome was that it established computer science as a discipline and people could start building their academic careers on that topic. One notable outcome was the BNF form for describing syntax in languages. This time was in many ways formative for computer science: the first three volumes of Donald Knuth's The Art of Computer Programming were published in close proximity to these events.

To see that ALGOL was popular and widespread at the time Unix was born, and that C was in no way universally accepted, it suffices to read a piece of the original Bourne Shell source code, for example:

setlist(arg,xp)
	REG ARGPTR	arg;
	INT		xp;
{
	WHILE arg
	DO REG STRING	s=mactrim(arg->argval);
	   setname(s, xp);
	   arg=arg->argnxt;
	   IF flags&execpr
	   THEN prs(s);
		IF arg THEN blank(); ELSE newline(); FI
	   FI
	OD
}

This doesn't look much like C as we know it; it looks much more like ALGOL 68. The ALGOL 68 definition added constructions such as IF/FI, DO/OD etc, which were not present in ALGOL 60. The reason is that Stephen Bourne was an influential contributor to ALGOL 68 and created a set of macros so that the C preprocessor would turn his custom dialect of ALGOL into C, which is why I think someone on Reddit suggested nominating bash for the obfuscated C contest.

This is just one of the instances where we can see that the C programming language was not universally loved. The Bourne Shell scripting language that we all love and use is also quite close to ALGOL 68, so the descendants of this language are used more than we may think.

Around 1970 Niklaus Wirth was working to improve ALGOL 68 with what he called ALGOL W. Tired of the slowness of the language committee process, he forked ALGOL and created the programming language Pascal, which was a success in its own right. In his very interesting IEEE article named A Brief History of Software Engineering, Professor Wirth gives his perspective on some of the events around that time: first he writes about the very influential NATO conference on software engineering in 1968 in Garmisch, Germany, which served to define software engineering as a distinct discipline. To counter the so-called software crisis – the problems presented by emerging large complex systems – the suggestion was to raise the abstraction in new languages.

To raise the abstraction means to use more mathematical, machine-independent constructs in the language. First consider the difference between low-level and high-level languages: a simple operation such as x = x + 1 is not high level, it is just a fancy assembly instruction; if we compile it we can readily observe the resulting code as some kind of ADD instruction in the object code. However a[i] = x + 1 goes further: indexing into an array requires knowledge of the target machine specifics (base addresses, memory layout, etc.) that the language now hides from the programmer. This makes the instruction more high-level and thus raises the abstraction of the language. The assumption is that several further, higher levels of abstraction exist. We will look into some of these languages in the following sections.

The Garmisch conference is famous in Unix circles because Douglas McIlroy was present and presented his idea of componentized software as a remedy against rising complexity, an idea that was later realized in the form of Unix's pipes and filters mechanism. D-Bus and similar component interoperation mechanisms are contemporary examples of such software componentry — another way to counter complexity and make software less fragile, but not the focus in this article.

Wirth makes one very specific and very important observation about the Garmisch conference:

Ultimately, analytic verification and correctness proofs were supposed to replace testing.

This means exactly what it says: with formally verified programming languages, all the features and constructs that are formally proven need not be tested for. Software engineering is known for advocating test-driven development (TDD) to this day, and the ambition was to make large chunks of TDD completely unnecessary. Software testing has its own chapter in the mentioned report from the Garmisch NATO conference where the authors A.I. Llewelyn and R.F. Wickens conclude:

There are, fundamentally, two different methods of determining whether a product meets its specification. One can analyse the product in great detail and from this determine if it is in accordance with its specification, or one can measure its performance experimentally and see if the results are in accord with the specification; the number and sophistication of the experiments can be varied to provide the degree of confidence required of the results.

The first part of this paragraph i.e. “analyze in great detail” is what Wirth calls analytic verification and is today called formal verification. The latter part of this paragraph is what we call test-driven development, TDD. Also: the former is a matter of computer science, while the latter is a matter of software engineering. So here is a fork in the road.

Wirth also claims the discussions in Garmisch had a distinct influence on Pascal. This can easily be spotted in Pascal strings, which were one of his principal improvements over ALGOL: Pascal strings are arrays of char, but unlike C char, a Pascal char is not the same as a byte; instead it is defined as belonging to an “ordered character set”, which can very well be ISO8859-1 or Unicode, with less than, more than or exactly 255 characters. Strings stored in memory begin with a positive integer array length which defines how long the string is, but this is none of the programmer's business: the length is maintained by the language runtime and not by any custom code. Indexing out of bounds is therefore not possible and can be trivially prohibited during compilation and at runtime. This raises the abstraction of strings: they are set entities, they have clear boundaries, and they need special support code to handle the length field in memory. Further, Pascal also has set types, such as:

var
    JanuaryDays : set of 1..31;

Perhaps Pascal's application to real-world problems didn't work out quite as expected: the language has since also defined PChar, a NULL-terminated pointer to a sequence of characters, akin to C strings. However it should be noted that Pascal pointers are persistently typed and cannot be converted: casting is not possible in Pascal. A Pascal pointer to an integer is always a pointer to an integer.

From Wirth's perspective, C “presented a great leap backward” and he claims “it revealed that the community at large had hardly grasped the true meaning of the term 'high-level language' which became an ill-understood buzzword”. He attributes the problem to Unix which he says “acted like a Trojan horse for C”. He further details the actual technical problems with C:

C offers abstractions which it does not in fact support: Arrays remain without index checking, data types without consistency check, pointers are merely addresses where addition and subtraction are applicable. One might have classified C as being somewhere between misleading and even dangerous.

His point about C lacking index checking is especially important: it calls into question whether C is really a high-level language, since it does not fully abstract away the machine specifics of handling an array. Language theorists occasionally refer to C as a “big macro assembler”: the only thing really abstracted away is the raw instruction set.

Wirth however also goes on to state the appealing aspects of the C programming language:

people at large, particularly in academia, found it intriguing and “better than assembly code” (...) its rules could easily be broken, exactly what many programmers cherished. It was possible to manage access to all of a computer’s idiosyncracies, to items that a high-level language would properly hide. C provided freedom, where high-level languages were considered as straight-jackets enforcing unwanted discipline. It was an invitation to use tricks which had been necessary to achieve efficiency in the early days of computers.

We can see why an efficiency-oriented operating system kernel such as Linux will tend toward C.

It's not like these tricks stopped after the early days of computing. Just the other day I wrote a patch for Linux with two similar code paths, which could be eliminated by casting a (const void *) into a (void *), which I then quipped about in the commit message of the revised patch. The reason for violating formal rules in this case is that it was a choice between two evils, and I chose the lesser evil: in a choice between formal correctness and code reuse I chose code reuse. And C enables that kind of choice. The languages presented later in this article absolutely do not allow that kind of choice, and C casts are seen as nothing less than an abomination.

The language family including C and also Pascal is referred to as imperative programming languages. The defining characteristic is that the programmer “thinks like a computer”, or more exactly imagines themselves as the program counter: “First I do this, next I do this, then I do this” – a sequence of statements executed in order, keeping the computer state (such as registers, memory locations and stacks) in the back of their head.

The immediate appeal to operating system programmers should be evident: this closely models what an OS developer needs to keep in mind, such as registers, stacks, cache frames, MMU tables, state transitions in hardware and so on. It is possible to see the whole family of imperative languages as domain specific languages for the domain of writing operating systems, so it would be for operating system developers what OpenGL is for computer graphics software developers.

Lambda Calculus for Defining Languages

In 1965 one of the early adopters of and contributors to ALGOL (alongside Peter Naur, Tony Hoare and Niklaus Wirth), Peter Landin, published two articles in the Communications of the ACM titled Correspondence between ALGOL 60 and Church's Lambda-notation, part I and part II. In the first article he begins with a good portion of dry humour:

Anyone familiar with both Church's λ-calculi and ALGOL 60 will have noticed a superficial resemblance between the way variables tie up with the λ's in a nest of λ-expressions, and the way that identifiers tie up with the headings in a nest of procedures and blocks.

He is of course aware that hardly anyone besides himself had been in a position to realize this: the overlap between people familiar with Alonzo Church's λ-calculus and people familiar with ALGOL 60 was essentially down to one person on the planet. What is surprising is that it was even one person.

Alonzo Church was a scholar of mathematical logic and computability, the supervisor of Alan Turing's doctoral thesis, and active in the same field as Kurt Gödel (these men cited each other in their respective articles). The lambda calculus ties into the theory of types created by Bertrand Russell and the logical-mathematical programme, another universe of history we will not discuss here.

What λ-calculus (lambda calculus) does for a programming language definition is analogous to what regular expressions do for a language's syntax, but for its semantics. While regular expressions can express how to parse a body of text in a language with a regular grammar, expressions in λ-calculus can go on from the abstract syntax tree and express what an addition is, what a subtraction is, or what a bitwise OR is. This exercise is seldom done in e.g. compiler construction courses, but defining semantics is an inherent part of a programming language definition.

Perhaps the most remembered part of Landin's papers is his humorous term syntactic sugar which denotes things added to a language to make the life of the programmer easier, but which has no semantic content that cannot be expressed by the basic features of the language. The basic mathematical features of the language, on the other hand, are best expressed with λ-calculus.

A notable invention in Landin's first article about defining ALGOL in terms of λ-calculus is the pair of keywords let and where, chosen to correspond to λ-calculus' applicative expressions. These keywords do not exist in ALGOL: they are part of a language to talk about a language, or in more complicated terms: a meta-language. So here we see the first steps toward a new language derived from λ-calculus. Landin does not give this language a name in this article, but just refers to it as “AE”. The AE executes in a theoretical machine called SECD, which is another trick of the trade, like Alan Turing's “Turing machine”: rather close to a mathematician's statement “let's assume we have...” The complete framework for defining ALGOL in λ-calculus is called AE/SECD.

Functional Programming

Functional programming languages, then, implement lambda calculus. The central idea, after some years of experience with defining languages such as ALGOL in terms of lambda calculus, is to just make the language resemble lambda calculus expressions to begin with, so that the verification of the semantics becomes simple and obvious.

In 1966 Peter Landin followed up his articles using λ-calculus to describe ALGOL with the article The Next 700 Programming Languages. Here he invents functional programming in the form of an invented language called ISWIM (If You See What I Mean) – again, as you can see, with a good dry humour. The language is λ-calculus with “syntactic sugar” on top, so a broad family of languages can be created using the framework as a basis. Landin's article was popular, and people did invent languages. Maybe not 700 of them. Yet.

In section 10 of his article, named Eliminating explicit sequencing, Landin starts speculating and talks about a game that can be played with ALGOL: by removing any goto statements and labels, the program gets a less sequential nature, i.e. it is no longer just the program counter advancing to the next line or iterating a loop. He quips:

What other such features are there? This question is considered because, not surprisingly, it turns out that an emphasis on describing things in terms of other things leads to the same kind of requirements as an emphasis against explicit sequencing.

He then goes on to show how to transform an ALGOL program into a purely functional ISWIM program and concludes:

The special claim of ISWIM is that it grafts procedural notions onto a purely functional base without disturbing many of the desirable properties. (...) This paper can do no more than begin the task of explaining their practical significance.

This reads as a call to action: we need to create functional programming languages akin to ISWIM, and we need to get rid of the J operator (the program control flow operator). Landin never did that himself.

The Meta Language ML

A few years later, in 1974, computer scientist Robin Milner, inspired by ISWIM and responding to Landin's challenge, created the language ML, short for Meta Language. This is one of the next 700 languages, and it clearly picks up Landin's ideas about a language for defining languages, a grammar for defining grammar: a meta language with a meta grammar.

He implemented the language on the DEC10 computer with the help of Malcolm Newey, Lockwood Morris, Mike Gordon and Chris Wadsworth. The language was later ported to the VAX architecture.

The language was based on ISWIM and dropped the so-called J operator (program point operator). It is domain-specific, and intended for authoring a tool for theorem proving called LCF. Standard ML has been fully semantically specified and formally verified. This language became widely popular, both in academia and industry.

Removing the J operator made ML a declarative language, i.e. it does not specify the order of execution of statements, putting it in the same class of languages as Prolog or, for that matter, Makefiles: there is no control flow in a Makefile, just a number of conditions that need to be evaluated to arrive at a complete target.

ML still has one imperative language feature: assignment. Around this time, some scholars thought both the J operator and assignment were unnecessary and went on to define purely functional languages such as Haskell. We will not consider them here, they are outside the scope of this article. ML and everything else we discuss can be labelled as impure: a pejorative term invented by people who like purely functional languages. These people dislike not only the sequencing nature of imperative languages but also the assignment (such as happens with the keyword let) and prefer to think about evaluating relationships between abstract entities.

ML can be grasped intuitively. For example this expression in ML evaluates to the integer 64:

let
    val m : int = 4
    val n : int = m*m
in
    m*n
end

Here we see some still prominent AE/SECD and ISWIM features, such as the keyword let used for binding variables, or rather, for associating names with elements such as integers and functions (similar to := assignment in some languages), followed by the section after in where the bound names are used. We can define functions in ML, like this one to compute the square root of five times x:

val rootfivex : real -> real =
    fn x : real => Math.sqrt (5.0 * x)

Notice the absence of constructs such as BEGIN/END or semicolons: ML expressions such as let ... in ... end and the function syntax itself delimit the blocks, so very little extra punctuation is needed. The notation real -> real clearly states that the function takes a real number as input and produces a real number as output. The name real reflects some kind of mathematical ambition. The language cannot handle the mathematical set of real numbers – the ML real is what other languages call a float.

ML has more syntactic sugar, so the following is equivalent using the keyword fun (fun-notation):

fun rootfivex (x:real):real = Math.sqrt (5.0 * x)

The syntax should be possible to grasp intuitively. Another feature of ML and other functional languages is that they easily operate on tuples, i.e. ordered sequences of values, and tuples can also be returned from functions. For example you can calculate the distance between the origin and a point given by two coordinates in an x/y plane like this:

fun dist (x:real, y:real):real = Math.sqrt (x*x + y*y)

This function can then be called elsewhere like this:

val coor : real * real = (3.0, 4.0)  (* example coordinates *)
val d = dist coor

The type real of d will be inferred from the fact that the dist() function returns a real.

ML gets much more complex than this. One of the upsides of the language that is universally admired is that ML programs, like most programs written in functional languages, can be proven correct in the computational sense. This can be done within certain limitations: for example input/output operations need to specify exactly which values are input, or undefined behaviour will occur.

CAML and OCaml

In 1987 Ascánder Suárez at the French Institute for Research in Computer Science and Automation (INRIA) reimplemented a compiler and runtime system for ML in LISP and called the result CAML, for Categorical Abstract Machine Language, a pun on the fact that it ran on a virtual machine (the Categorical Abstract Machine) and on the heritage from ML proper. The abstract machine used was the LLM3 abstract LISP machine, which in turn ran on another computer. It was not fast.

CAML was reimplemented in C in 1990-91 by Xavier Leroy, creating Caml Light, which was faster, because it was no longer a virtual machine running on top of another virtual machine. Caml Light was more like Java and used a bytecode interpreter for its virtual machine.

In 1995, Caml Special Light introduced a native compiler, so the bytecode produced from the Caml compiler could be compiled to object code and executed with no virtual machine overhead, using a native runtime environment. Didier Rémy, Jérôme Vouillon and Jacques Garrigue continued the development of Caml.

Objective Caml arrived in 1996 and added some object-oriented features to Caml. In 2011 this extended Caml Special Light compiler and language, a derivative (dialect) of ML, was renamed OCaml. In essence the compiler and the language have a symbiotic relationship: there is no second implementation of OCaml.

From the 1990s onward, what is now the OCaml language and implementation has gained traction. It is a very popular functional programming language, or rather, popular as far as functional programming goes. It has optimized implementations for most architectures. The compiler itself is now written mostly in OCaml, but the runtime in C is still around, to hook into each operating system where the program will eventually run. The language and compiler have been used for a variety of applications. Every major Linux distribution carries packages with the OCaml compiler and libraries. There is even a GTK+ 3 OCaml library binding, so OCaml GUI programs can be created.

OCaml simplifies things such as binding labels to numbers; here is bubblesort implemented in OCaml:

(* Bubblesort in OCaml, Linus Walleij 2022 *)
let sort v =
  let newv = Array.make (Array.length v) 0 in
  for i = 1 to (Array.length v) - 1 do
    if v.(i - 1) > v.(i) then begin
      newv.(i - 1) <- v.(i);
      newv.(i) <- v.(i - 1);
      (* Copy back so we are working on the same thing *)
      v.(i - 1) <- newv.(i - 1);
      v.(i) <- newv.(i);
    end else begin
      newv.(i - 1) <- v.(i - 1);
      newv.(i) <- v.(i);
    end
  done;
  newv

let rec ordered v =
  if Array.length v = 0 then true
  else if Array.length v = 1 then true
  (* ... or if the rest of the array is ordered *)
  else if v.(0) < v.(1) && ordered (Array.sub v 1 (Array.length v - 1)) then true
  else false;;

let plist v =
  print_string "V = ";
  for i = 0 to (Array.length v) - 1 do begin
    print_int v.(i);
    if i < (Array.length v - 1) then print_string ",";
    end
  done;
  print_endline "";;

let rec sortme v =
  if ordered v then v
  else sortme (sort v);;

let v = [| 14 ; 4 ; 55 ; 100 ; 11 ; 29 ; 76 ; 19 ; 6 ; 82 ; 99 ; 0 ; 57 ; 36 ; 61 ; 30 |];;
plist v;;
plist (sortme v);;

My experience from working with this example is that OCaml puts up a “bit of resistance” to changing the contents of things like arrays by indexing. It “dislikes” imperative constructs and kind of nudges you in the direction of purely logical constructs such as the ordered function above. This is just my personal take.

OCaml is still a dialect of ML; the file ending used on all files is .ml as well. OCaml – like Python's pip or Perl's CPAN – has its own package system and library called opam. The prime application is still the OCaml Ergo Library, a library for automatic theorem proving. If your first and foremost use of computers is theorem proving, ML and OCaml have continued to deliver since 1974. The more recent and widely popular Coq theorem prover is also written in OCaml.

Rust then

Rust was initially developed in 2006 as a hobby project by Graydon Hoare, who was at the time working at Mozilla. OCaml and ML are mentioned as the biggest influences on the language, apart from C/C++. A typical sign of this influence is that the first compiler for Rust was written in OCaml. A notable contributor to this codebase, apart from Hoare, is Brendan Eich, one of the founders of the Mozilla project and the inventor of JavaScript. While Brendan did not contribute much code, he was at the time CTO of Mozilla, and this shows that when Mozilla started supporting the project in 2009 Rust was certainly well anchored in the organization, and Eich's early contributions to the language should be noted. (It may be commonplace for people in the CTO position at middle-sized companies to make commits to complex code bases, but I am not aware of it if so.)

Despite the OCaml codebase the first documentation of the language talks more about other functional or declarative languages such as NIL, Hermes, Erlang, Sather, Newsqueak, Limbo and Napier. These origins with extensive quotes from e.g. Joe Armstrong (the inventor of Erlang) have been toned down in contemporary Rust documentation. It is however very clear that Graydon has a deep interest in historical computer languages and is convinced that they have something to teach us, and the expressed ambition is to draw on these languages to pick the best parts. In his own words:

I've always been a language pluralist — picture my relationship towards languages like a kid enjoying a wide variety of building blocks, musical instruments or plastic dinosaurs — and I don't think evangelism or single-language puritanism is especially helpful.

What is unique about Rust is that it fuses “impure” functional programming with imperative programming, bringing several concepts from ML and OCaml over into the language.

Another characteristic is that Rust compiled to target machine code from day one, rather than using any kind of virtual machine as did Peter Landin's ISWIM, or the ML and OCaml languages (and as do, say, Java or Python). Graydon probably did this intuitively, but a post he made in 2019 underscores the point: that virtual machines, even as an intermediate step, are bad language engineering and just generally a bad idea.

In 2013 Graydon stepped down as main lead for Rust for personal reasons which he has detailed in a posting on Reddit.

Rust has had the same symbiotic relationship between language and a single compiler implementation as OCaml, but this is changing, as there is now a second, GCC-based implementation in the works.

Here is bubblesort implemented in Rust:

/* Bubblesort in Rust, Linus Walleij 2022 */
fn sort(array : &mut [i32]) {
   let mut x : i32;
   if array.len() == 1 {
      return;
   }
   for i in 1..array.len() {
      if array[i - 1] > array[i] {
         x = array[i - 1];
         array[i - 1] = array[i];
         array[i] = x;
      }
   }
}

fn is_ordered(array : &[i32]) -> bool {
   if array.len() <= 1 {
     return true;
   }
   for i in 1..array.len() {
     if array[i - 1] > array[i] {
       return false;
     }
   }
   return true;
}

fn parray(array : &[i32]) {
   let mut x : i32;
   print!("V = ");
   for i in 0..array.len() {
       x = array[i];
       print!("{x}");
       if i != (array.len() - 1) {
          print!(",");
       }
   }
   println!("");
}

fn main() {
   let mut array: [i32; 16] = [14, 4, 55, 100, 11, 29, 76, 19, 6, 82, 99, 0, 57, 36, 61, 30];
   parray(&array);
   while !is_ordered(&array) {
     sort(&mut array);
   }
   parray(&array);
}

Rust lends itself to imperative programming more easily than OCaml: the keyword mut becomes quite similar to C's const correctness tagging in this example. Since is_ordered and parray aren't altering the contents of the array, these functions do not need their parameter marked with mut. You also see some familiar virtues from Pascal: arrays “know” their length, and we use a method to obtain it: array.len().

The stated ambition is improved memory safety, data-race safety (concurrency) and type safety. The article Safe Systems Programming in Rust certainly presents the ambition in a straightforward manner. Graydon also underscores the focus on memory and concurrency safety in a 2016 blog post.

But make no mistake. The current underlying ambition is no different from the ambition of the ALGOL committee between 1958 and 1968: to raise the abstraction of the language by joining computer programming with formal logic. This comes from the arrival of strong academic support for the language.

A typical indication of this ambition is the well-funded RustBelt project, involving a large number of academic researchers, all familiar with formal logic, and resulting in such artefacts as Ralf Jung's PhD thesis Understanding and Evolving the Rust Programming Language. Here, formal logic is applied to Rust in RustBelt using the Coq proof assistant, and the thesis concludes (from the abstract):

Together, these proofs establish that, as long as the only unsafe code in a well-typed λRust program is confined to libraries that satisfy their verification conditions, the program is safe to execute.

What is meant by “safe to execute” is that no use-after-free, dangling pointers, stale references, NULL pointer exceptions etc can ever occur in safe Rust code, because it is proven by formal logic: QED. It does not stop you from e.g. dividing by zero, however; that problem is out of scope for the exercise.

To me personally the most astonishing fact about Jung's thesis is that it manages to repeatedly cite and reference the computer scientist Tony Hoare without citing the inventor of the Rust language, Graydon Hoare, a single time. In a way it confirms Graydon's own statement that Rust “contains nothing new” from a language point of view.

The C programming language cannot be subjected to the same scrutiny as Rust, simply because of all the (ab)use it allows, which was mentioned by Wirth in his historical perspective: if a type can be changed by a cast and array index bounds are not even part of the language, there is not much to prove. What has been interesting for scholars to investigate is a well-defined subset of C, such as the eBPF subset, which also partly explains the strong interest in eBPF: as with Rust, the build environment and language runtime have been defined with much stricter constraints and can thus be subjected to formal verification.

The ambition of Rust, as I perceive it, and whether the people driving it know it or not, is to finish what the ALGOL committee as primus motor started in 1958, and what the Garmisch NATO conference concluded was necessary in 1968: to develop a language for systems programming that relies on formal logic proofs, and to fulfil what ALGOL never could, what Pascal never could, and what the whole maybe-not-700 functional programming languages never could: a language that joins the disciplines of computer science and software engineering into ONE discipline, where the scholars of each can solve problems together.

That is the ambition of Rust as an implementation language for operating systems such as Linux: to provide a language backed by current top-of-the-line computer science research, for immediate application to the software engineering that develops the top-of-the-line operating system.

What it offers Linux is raised abstraction to counter the problems of complexity identified at the 1968 Garmisch NATO conference – problems now bleedingly obvious given the recurring security incidents – and it would thereby bring the engineering project Linux closer to computer science.

Other approaches to increased Linux (memory and concurrency) safety are possible: notably increased testing, which is the engineering go-to panacea. And automated testing of Linux has indeed increased a lot in recent years. Raising the abstraction of the implementation language and proving it formally comes with the ambition to make testing less important.

[Mathieu Poirer and Jesper Jansson have helped out in reviewing this blog post, for which I am forever grateful: remaining errors, bugs and biased opinions are my own.]

 

from Vlastimil Babka

In this post I would like to raise awareness a bit about an effort to reduce the limitations of anonymous VMA merging, in the form of an ongoing master thesis by Jakub Matena, which I'm supervising. I suspect there might be userspace projects that would benefit, and maybe their authors are aware of the current limitations and would welcome it if they were relaxed, but they don't read the linux-mm mailing list – the last version of the RFC posted there is here

In a high-level summary, merging of anonymous VMAs in Linux generally happens as soon as they become adjacent in the address space and have compatible access protection bits (and also mempolicies etc.). However, due to internal implementation details (involving VMA and page offsets), some operations such as mremap() that move VMAs around the address space can cause anonymous VMAs not to merge even if everything else is compatible. This is then visible as extra entries in /proc/pid/maps that could in theory be one larger entry, as the associated larger memory and CPU overhead of VMA operations, or even as hitting the limit on VMAs per process, set by the vm.max_map_count sysctl. A related issue is that the mremap() syscall itself cannot currently process multiple VMAs, so a process that needs to further mremap() the non-merged areas would need to somehow learn the extra boundaries first and perform a sequence of multiple mremap() calls to achieve its goal.
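As an illustration of the limitation described above, here is a small standalone C program (my own sketch, not part of the thesis or the RFC) that maps two anonymous regions with identical protections, moves the second one directly behind the first with mremap(), and then dumps /proc/self/maps so you can check whether the kernel merged them into a single VMA:

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 4 * 4096;

	/* Reserve a larger area so we control the address range, then
	 * unmap the tail so only the first chunk ("a") remains mapped. */
	char *a = mmap(NULL, 3 * len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (a == MAP_FAILED) { perror("mmap"); return 1; }
	munmap(a + len, 2 * len);
	a[0] = 1;

	/* Map a second anonymous region somewhere else and fault it in,
	 * so it has anonymous pages and its own page offsets. */
	char *b = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (b == MAP_FAILED) { perror("mmap"); return 1; }
	b[0] = 1;

	/* Move it directly behind "a". With compatible protections one
	 * might expect a single merged VMA, but because of the internal
	 * offset bookkeeping mentioned above the kernel will typically
	 * still show two separate entries. */
	if (mremap(b, len, len, MREMAP_MAYMOVE | MREMAP_FIXED,
		   a + len) == MAP_FAILED) {
		perror("mremap");
		return 1;
	}

	system("cat /proc/self/maps");
	return 0;
}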

Does any of the above sound familiar because you found it out already while working on a Linux application? Then we would love your feedback on the RFC linked above (or privately). The issue is that while in many scenarios the merging limitations can be lifted by the RFC, it doesn't come for free: there is both some overhead added to e.g. mremap(), and especially extra complexity in already complex code. Thus identifying workloads that would benefit a lot would be helpful. Thanks!

 

from metan's blog

The new LTP release will include changes that have introduced the concept of a maximal test runtime, so let me briefly explain what exactly that is. To begin with, let's make an observation about LTP test durations. Most LTP tests fall into two categories when the duration of the test is considered. The first type of test is fast, generally finishing in under a second or two and most of the time even in a fraction of that. These tests mostly prepare a simple environment, call a syscall or two, clean up and are done. The second type of test runs for longer and its duration is usually counted in minutes. These tests include I/O stress tests, various regression tests that loop in order to hit a race, timer precision tests that have to sample time intervals, and so on.

Historically in LTP the test duration was limited by a single value called timeout, which defaulted to a compromise of 5 minutes, the worst value for both classes of tests: it's clearly too long for the short running tests and at the same time too short for a significant fraction of the long running tests. This was clear just by checking the tests that actually adjusted the default timeout. Quite a few short running tests that were prone to deadlocks decreased the default timeout to a much shorter interval, and at the same time quite a few long running tests increased it as well.

But back to how the test duration was handled in the long running tests. The test duration for long running tests is usually bounded by a time limit as well as a limit on the number of iterations, and the test exits on whichever is hit first. In order to exit before the timeout, these tests watched the elapsed runtime and exited the main loop if the runtime got close enough to the test timeout. The problem was that “close enough” was loosely defined and implemented differently in each test. That obviously leads to different problems. For instance if a test looped until there were 10 seconds left of the timeout and the test cleanup took more than 10 seconds on slower hardware, there was no way to avoid triggering the timeout, which resulted in a test failure. If the test timeout was increased, the test simply ran for a longer duration and hit the timeout at the end either way. At the same time, if the test used a proportion of the remaining timeout for the test cleanup, things didn't work out when the timeout was scaled down in order to shorten the test duration.

After careful analysis it became clear that the test duration has to be bounded by two distinct values. The new values are now called timeout and max_runtime, and the test duration is bounded by the sum of these two. The idea behind this should be clear to the reader at this point: max_runtime limits the active part of the test, that is the part where the actual test loop is executed, while timeout covers the test setup and cleanup and all inaccuracies in the accounting. Each of them can be scaled separately, which gives us enough flexibility to be able to scale from small embedded boards all the way up to supercomputers. This change also allowed us to change the default test timeout to 30 seconds. And if you are asking yourself how max_runtime is set for short running tests, the answer is simple: it's set to zero, since the default timeout is more than enough to cope with these.

All of this also helps to kill misbehaving tests much faster since we have a much better estimate of the expected test duration. And yes, this is a big deal when you are running thousands of testcases; it may speed up the test run quite significantly even with only a few deadlocked tests.

But things do not end here; there is a bit of added complexity on top of this. Some testcases will call the main test loop more than once. That is because we have a few “multiplier” flags that can increase test coverage quite a bit. For instance we have the so-called .all_filesystems flag, which, when set, will execute the test on top of the most commonly used filesystems. There is also a flag that can run the test for different variants, which is sometimes used to run the test for more than one syscall variant, e.g. for clock_gettime() we run the same test for both the syscall and the VDSO. All these multipliers have to be taken into account when the overall test duration is computed; a rough sketch of such a computation follows below. However we do have all these flags in the metadata file now, hence we are getting really close to a state where we will have a tool that can compute an accurate upper bound on the duration of a given test. However that is a completely different story for a different short article.
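To make the model above a bit more tangible, here is a back-of-the-envelope sketch in C of how such an upper bound could be computed. This is not the actual LTP implementation; all names are made up for the illustration, and whether the timeout is counted once or per iteration is my assumption.

#include <stdio.h>

struct test_limits {
	int timeout;      /* covers setup, cleanup and accounting slack */
	int max_runtime;  /* bounds the active test loop; 0 for fast tests */
	int filesystems;  /* e.g. filesystems from an .all_filesystems-style flag */
	int variants;     /* e.g. syscall vs. VDSO variants */
};

static int duration_upper_bound(const struct test_limits *t, double scale)
{
	int fs = t->filesystems ? t->filesystems : 1;
	int var = t->variants ? t->variants : 1;

	/* Each invocation of the main loop is bounded by max_runtime,
	 * and the timeout is added on top for setup and cleanup. */
	return t->timeout + (int)(fs * var * t->max_runtime * scale);
}

int main(void)
{
	struct test_limits io_stress = {
		.timeout = 30, .max_runtime = 300,
		.filesystems = 5, .variants = 1,
	};

	printf("upper bound: %d seconds\n",
	       duration_upper_bound(&io_stress, 1.0));
	return 0;
}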

 

from Konstantin Ryabitsev

Once every couple of years someone unfailingly takes advantage of the following two facts:

  1. most large git hosting providers set up object sharing between forks of the same repository in order to both save storage space and improve the user experience
  2. git's loose internal structure allows any shared object to be accessed from any other repository

Thus, hilarity ensues on a fairly regular basis:

Every time this happens, many wonder how come this isn't treated like a nasty security bug, and the answer, inevitably, is “it's complicated.”

Blobs, trees, commits, oh my

Under the hood, git repositories are a bunch of objects — blobs, trees, and commits. Blobs are file contents, trees are directory listings that establish the relationship between file names and the blobs, and commits are like still frames in a movie reel that show where all the trees and blobs were at a specific point in time. Each next commit refers to the hash of the previous commit, which is how we know in what order these still frames should be put together to make a movie.

Each of these objects has a hash value, which is how they are stored inside the git directory itself (look in .git/objects). When git was originally designed, over a decade ago, it didn't really have a concept of “branches” — there was just a symlink HEAD pointing to the latest commit. If you wanted to work on several things at once, you simply cloned the repository and did it in a separate directory with its own HEAD. Cloning was a very efficient operation, as through the magic of hardlinking, hundreds of clones would take up about as much room on your disk as a single one.

Fast-forward to today

Git is a lot more complicated these days, but the basic concepts are the same. You still have blobs, trees, commits, and they are all still stored internally as hashes. Under the hood, git has developed quite a bit over the past decade to make it more efficient to store and retrieve millions and tens of millions of repository objects. Most of them are now stored inside special pack files, which are organized rather similarly to compressed video clips — formats like webm don't really store each frame as a separate image, as there is usually very little difference between any two adjacent frames. It makes much more sense to store just the difference (“delta”) between two still images until you come to a designated “key frame”.

Similarly, when generating pack files, git will try to calculate the deltas between objects and only store their incremental differences — at least until it decides that it's time to start from a new “key frame” just so checking out a tag from a year ago doesn't require replaying a year's worth of diffs. At the same time, there has been a lot of work to make the act of pushing/pulling objects more efficient. When someone sends you a pull request and you want to review their changes, you don't want to download their entire tree. Your git client and the remote git server compare what objects they already have on each end, with the goal of sending you just the objects that you are lacking.

Optimizing public forks

If you look at the GitHub links above, check out how many forks torvalds/linux has on that hosting service. Right now, that number says “41.1k”. With the best kinds of optimizations in place, a bare linux.git repository takes up roughly 3 GB on disk. Doing quick math, if each one of these 41.1k forks were completely standalone, that would require about 125 TB of disk storage. Throw in a few hundred terabytes for all the forks of Chromium, Android, and Gecko, and soon you're talking Real Large Numbers. Which is why nobody actually does it this way.

Remember how I said that git clones were designed to be extremely efficient and reuse the objects between clones? This is how forks are actually organized on GitHub (and git.kernel.org, for that matter), except it's a bit more complicated these days than simply hardlinking the contents of .git/objects around.

On git.kernel.org side of things we store the objects from all forks of linux.git in a single “object storage” repository (see https://pypi.org/project/grokmirror/ for the gory details). This has many positive side-effects:

  • all of git.kernel.org, with its hundreds of linux.git forks takes up just 30G of disk space
  • when Linus merges his usual set of pull requests and performs “git push”, he only has to send a very small subset of those objects, because we probably already have most of them
  • similarly, when maintainers pull, rebase, and push their own forks, they don't have to send any of the objects back to us, as we already have them

Object sharing allows us to greatly improve not only the backend infrastructure on our end, but also the experience of git's end-users, who directly benefit from not having to push around nearly as many bits.

The dark side of object sharing

With all the benefits of object sharing comes one important downside — namely, you can access any shared object through any of the forks. So, if you fork linux.git and push your own commit into it, any of the 41.1k forks will have access to the objects referenced by your commit. If you know the hash of that object, and if the web UI allows access to arbitrary repository objects by their hash, you can even view and link to it from any of the forks, making it look as if that object is actually part of that particular repository (which is how we get the links at the start of this article).

So, why can't GitHub (or git.kernel.org) prevent this from happening? Remember when I said that a git repository is like a movie full of adjacent still frames? When you look at a scene in a movie, it is very easy for you to identify all objects in any given still frame — there is a street, a car, and a person. However, if I show you a picture of a car and ask you “does this car show up in this movie,” the only way you can answer this question is by watching the entire thing from the beginning to the end, carefully scrutinizing every shot.

In just the same way, to check if a blob from the shared repository actually belongs in a fork, git has to look at all that repository's tips and work its way backwards, commit by commit, to see if any of the tree objects reference that particular blob. Needless to say, this is an extremely expensive operation, which, if enabled, would allow anyone to easily DoS a git server with only a handful of requests.

This may change in the future, though. For example, if you access a commit that is not part of a repository, GitHub will now show you a warning message:

Looking up “does this commit belong in this repository” used to be a very expensive operation, too, until git learned to generate commit graphs (see man git-commit-graph). It is possible that at some point in the future a similar feature will land that will make it easy to perform a similar check for the blob, which will allow GitHub to show a similar warning when someone accesses shared blobs by their hash from the wrong repo.

Why this isn't a security bug

The fact that an object is part of the shared storage doesn't really have any impact on the forks. When you perform a git-aware operation like “git clone” or “git pull,” git-daemon will only send the objects actually belonging to that repository. Furthermore, your git client deliberately doesn't trust the remote to send the right stuff, so it will perform its own connectivity checks before accepting anything from the server.

If you're extra paranoid, you're encouraged to set receive.fsckObjects for some additional protection against in-flight object corruption, and if you're really serious about securing your repositories, then you should set up and use git object signing:

This is, incidentally, also how you would be able to verify whether commits were made by the actual Linus Torvalds or merely by someone pretending to be him.

Parting words

This neither proves nor disproves the identity of “Satoshi.” However, given Linus's widely known negative opinions of C++, it's probably not very likely that it's the language he'd pick to write some proof of concept code.

 

from metan's blog

Unfortunately FOSDEM is going to be virtual again this year, but that does not stop us from organizing the testing and automation devroom. Have a look at our CfP and if you have something interesting to present go ahead and fill in a submission!

 

from nmenon

One of the cool things with kernel.org is the fact that we can rotate maintainership depending on workload. So, https://git.kernel.org/pub/scm/linux/kernel/git/nmenon/linux.git/ is now my personal tree, and we have picked up https://git.kernel.org/pub/scm/linux/kernel/git/ti/linux.git/ as a co-maintained TI tree for which Vignesh and I rotate responsibilities, with Tony Lindgren and Tero as backup.

Thanks to Konstantin and Stephen for making this happen!

NOTE: No change in Tony's tree @ https://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap.git/

 

from Konstantin Ryabitsev

This is the second installment in the series where we're looking at using the public-inbox lei tool for interacting with remote mailing list archives such as lore.kernel.org. In the previous article we looked at delivering your search results locally, and today let's look at doing the same, but with remote IMAP folders. For feedback, send a follow-up to this message on the workflows list:

For our example query today, we'll do some stargazing. The following will show us all mail sent by Linus Torvalds:

f:torvalds AND rt:1.month.ago..

I'm mostly using it because it's short, but you may want to use something similar if you have co-maintainer duties and want to automatically receive a copy of all mail sent by your fellow subsystem maintainers.

Note on saving credentials

When accessing IMAP folders, lei will require a username and password. Unless you really like typing them in manually every time you run lei up, you will probably want to have them cached on your local system. Lei will defer to git-credential-helper for this purpose, so if you haven't already set this up, you will want to do that now.

The two commonly used credential storage backends on Linux are “libsecret” and “store”:

  • libsecret is the preferred mechanism, as it will work with your Desktop Environment's keyring manager to store the credentials in a relatively safe fashion (encrypted at rest).

  • store should only be used if you don't have any other option, as it will record the credentials without any kind of encryption in the ~/.git-credentials file. However, if nothing else is working for you and you are fairly confident in the security of your system, it's something you can use.

Simply run the following command to configure the credential helper globally for your environment:

git config --global credential.helper libsecret

For more in-depth information about this topic, see man git-credential.

Getting your IMAP server ready

Before you start, you should get some information about your IMAP server, such as your login information. For my examples, I'm going to use Gmail, Migadu, and a generic Dovecot IMAP server installation, which should hopefully cover enough ground to be useful for the vast majority of cases.

What you will need beforehand:

  • the IMAP server hostname and port (if it's not 993)
  • the IMAP username
  • the IMAP password

It will also help to know the folder hierarchy. Some IMAP servers create all subfolders below INBOX, while others don't really care.

Generic Dovecot

We happen to be running Dovecot on mail.codeaurora.org, so I'm going to use it as my “generic Dovecot” system and run the following command:

lei q -I https://lore.kernel.org/all/ -d mid \
  -o imaps://mail.codeaurora.org/INBOX/torvalds \
  <<< 'f:torvalds AND rt:1.month.ago..'

The <<< bit above is a Bash-ism, so if you're using a different shell, you can use the POSIX-compliant heredoc format instead:

lei q -I https://lore.kernel.org/all/ -d mid \
  -o imaps://mail.codeaurora.org/INBOX/torvalds <<EOF
f:torvalds AND rt:1.month.ago..
EOF

The first time you run it, you should get a username: and password: prompt, but after that the credentials should be cached and no longer required on each repeated access to the same imaps server.

NOTE: some IMAP servers use the dot . instead of the slash / for indicating folder hierarchy, so if INBOX/torvalds is not working for you, try INBOX.torvalds instead.

Refreshing and subscribing to IMAP folders

If the above command succeeded, then you should be able to view the IMAP folder in your mail client. If you cannot see torvalds in your list of available folders, then you may need to refresh and/or subscribe to the newly created folder. The process will be different for every mail client, but it shouldn't be too hard to find.

The same with Migadu

If you have a linux.dev account (see https://korg.docs.kernel.org/linuxdev.html), then you probably already know that we ask you not to use your account for subscribing to busy mailing lists. This is due to Migadu imposing soft limits on how much incoming email is allowed for each hosted domain — so using lei + IMAP is an excellent alternative.

To set this up with your linux.dev account (or any other account hosted on Migadu), use the following command:

lei q -I https://lore.kernel.org/all/ -d mid \
  -o imaps://imap.migadu.com/lei/torvalds \
  <<< 'f:torvalds AND rt:1.month.ago..'

Again, you will need to subscribe to the new lei/torvalds folder to see it in your mail client.

The same with Gmail

If you are a Gmail user and aren't already using IMAP, then you will need to jump through a few additional hoops before you are able to get going. Google is attempting to enhance the security of your account by restricting how much can be done with just your Google username and password, so services like IMAP are not available without setting up a special “app password” that can only be used for mail access.

Enabling app passwords requires that you first enable 2-factor authentication, and then generate a random app password to use with IMAP. Please follow the process described in the following Google document: https://support.google.com/mail/answer/185833

Once you have the app password for use with IMAP, you can use lei and imaps just like with any other IMAP server:

lei q -I https://lore.kernel.org/all/ -d mid \
  -o imaps://imap.gmail.com/lei/torvalds \
  <<< 'f:torvalds AND rt:1.month.ago..'

It requires a browser page reload for the folder to show up in your Gmail web UI.

Automating lei up runs

If you're setting up IMAP access, then you probably want IMAP updates to happen behind the scenes without your direct involvement. All you need to do is periodically run lei up --all (plus -q if you don't want non-critical output).

If you're just getting started, then you can set up a simple screen session with a watch command at a 10-minute interval, like so:

watch -n 600 lei up --all

You can then detach from the screen terminal and let that command continue behind the scenes. The main problem with this approach is that it won't survive a system reboot, so if everything is working well and you want to make the command a bit more permanent, you can set up a systemd user timer.

Here's the service file to put in ~/.config/systemd/user/lei-up-all.service:

[Unit]
Description=lei up --all service
ConditionPathExists=%h/.local/share/lei

[Service]
Type=oneshot
ExecStart=/usr/bin/lei up --all -q

[Install]
WantedBy=mail.target

And the timer file to put in ~/.config/systemd/user/lei-up-all.timer:

[Unit]
Description=lei up --all timer
ConditionPathExists=%h/.local/share/lei

[Timer]
OnUnitInactiveSec=10m

[Install]
WantedBy=default.target

Enable the timer:

systemctl --user enable --now lei-up-all.timer

You can use journalctl -xn to view the latest journal messages and make sure that the timer is running well.

CAUTION: user timers only run when the user is logged in. This is not actually that bad, as your keyring is not going to be unlocked unless you are logged into the desktop session. If you want to run lei up as a background process on some server, you should set up a system-level timer and use a different git-credential mechanism (e.g. store) — and you probably shouldn't do this on a shared system where you have to worry about your account credentials being stolen.

Coming up next

In the next installment we'll look at some other part of lei and public-inbox... I haven't yet decided which. :)

 

from metan's blog

As usual we had an LTP release at the end of September. What was unusual though is the number of patches that went in: we got 483 patches, which is about 150 more than in each of the last three releases. And the number of patches had been slowly growing even before that.

While it's great and I'm happy that the project is growing, there is a catch: growth like this puts additional strain on the maintainers, particularly on the patch reviewers. For me it was more than 120 patches reviewed during the four month period, and that only counts the final versions of patches that were accepted into the repository; it's not unusual to have three or more revisions before the work is ready to be merged.

While I managed to cope with it reasonably well, the work that I had on my TODO list for the project stalled. One of the things I finally want to move forward is making runltp-ng the official LTP test runner, but there is much more. So the obvious question is how to make things better, and one of the things we came up with was automation.

What we implemented for LTP is 'make check', which runs different tools on the test source code and is supposed to be used before a patch is sent for review. For C code we use the well known checkpatch.pl and a custom sparse-based checker to identify the most common problems. The tooling is set up automatically when you call 'make check' for the first time, and we tried to make it as effortless as possible, so that there is no reason not to use it during development. We also use checkbashism.pl for shell code, and hopefully the number of checks will grow over time. Hopefully this should eliminate on average at least one revision per patchset, which would be hundreds of patches during our development cycle.

Ideally this will fix the problem for a while and we will make more effective use of our resources, but eventually we will get to a point where more maintainers and reviewers are needed, which is a problem that is hard to solve without your help.

 

from metan's blog

We have reached an important milestone with the latest LTP release – the number of testcases written with the new test library finally outnumbers the number of old library tests. Which is a nice opportunity for a small celebration, and also to look back a bit into history and try to summarize what has happened over the last 10 years in LTP.

I joined LTP development a bit more than 10 years ago, in 2009. At that point we were really struggling with the basics. The build system was a collection of random Makefiles and the build often failed for very random reasons. There were pieces of shell code embedded in Makefiles, for instance to check for devel libraries, manually written shell loops over directories that prevented parallel builds, and all kinds of ugly mess like that. This changed at the end of 2009 as the build system was rewritten; with that, LTP supported proper parallel builds, started to use autoconf for feature checks, etc. We also switched from CVS to GIT at the end of 2009, which was a huge improvement as well.

However that was only a start. LTP was easier to build and git was nicer to use, but we still had tests that were mostly failing, and a fair amount of the tests were producing nothing but noise. There were also tests that didn't produce real results and always passed, but it's really hard to count these unless you review the code carefully one testcase at a time, which is part of what we still do even after ten years of work.

From that point on it took us a few years to clear the worst parts and to deal with most of the troublemakers, and the results from LTP were gradually getting greener and more stable as well. We are far from being bugless; there are still parts covered in dust that are waiting for attention, but we are getting there. For instance in this release we finally got a nice cgroup test library that simplifies cgroup testcases, and we should fix the rest of the cgroup tests ideally before the next release. Also I'm quite happy that the manpower put into LTP development is slowly increasing, however compared to the efforts put into kernel development the situation is still dire. I used to tell people that the amount of work put into Linux automated testing was a bad joke back then. These days it's much better, but still hardly optimal, as we struggle to keep up with covering the newly introduced kernel features.

At the start I mentioned the new test library, so I should explain how we came to this and why it's superior to what we had previously. First of all there was a test library in LTP that could be traced back to SGI and was released under GPL more than 20 years ago; it's probably even older than that though. The main problem with the library was that it was cumbersome to use. There were some API reporting functions, but these were not thread safe, nor could they be used in child processes. You had to propagate test results manually in these two cases, which was prone to errors. Even worse, since the test implemented the main() function, you had to return the overall result manually as well, and forgetting to do so was one of the common mistakes. At the point where most of the broken tests were finally fixed I had a bit of time to invest into the future, and after seven years of dealing with common test mistakes I had a pretty good picture of what a test library should look like and what should be avoided. Hence I sat down and designed a library that is nice and fun to use and makes tests much easier to write. This library still evolves over time; the version introduced in 2016 wasn't as nice as it is now, but even when it was introduced it included the most important bits, for instance thread safe and automatic test result propagation, or synchronization primitives that can be used even to synchronize shell code against a C binary.
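To give a feel for what this looks like in practice, here is a minimal sketch of a test written against the new library. It is based on the documented tst_test API (tst_test.h, tst_res() and the TPASS/TFAIL flags); anything beyond that is simplified and may differ between LTP versions.

#include "tst_test.h"

static void run(void)
{
	/* The library propagates results automatically, even from child
	 * processes and threads, and there is no main() whose return
	 * value you could forget to set. */
	if (sizeof(long) >= sizeof(int))
		tst_res(TPASS, "long is at least as wide as int");
	else
		tst_res(TFAIL, "long is narrower than int");
}

static struct tst_test test = {
	.test_all = run,
};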

The old library is still present in LTP since we are only a bit more than halfway done converting the tests, which is no easy task since we still have more than 600 tests to go. And as we are converting the tests we are also reviewing them to ensure that the assertions are correct and the coverage isn't lacking. We still find tests that fail to report results from time to time even now, which only shows how hard it is to eliminate mistakes like this and why preventing them in the first place is the right thing to do. If things go well, the rest of the tests should be converted in about 5 years and LTP should finally be free of the historical baggage. At that point I guess that I will throw a small celebration, since that would conclude a huge task I've been working on for a decade now.

 

from linusw

This is a retrospective of my work on KASan, the Kernel Address Sanitizer, for the ARM32 platform. The name is a pun on the diving decompression stop, something you perform after going down below the surface to avoid decompression sickness.

Where It All Began

The AddressSanitizer (ASan) is a really clever invention by Google, hats off. It is one of those development tools that, like git, just take on the world in a short time. It was invented by some smart Russians, especially Андрей Коновалов (Andrey Konovalov) and Дмитрий Вьюков (Dmitry Vyukov). It appears to be not just funded by Google but also part of a PhD thesis work.

The idea with ASan is to help ensure memory safety by intercepting all memory accesses through compiler instrumentation, and consequently providing “ASan splats” (runtime problem detections) while stressing the code. Code instrumented with ASan gets significantly slower than normal and uses up a bunch of memory for “shadowing” (I will explain this) making it a pure development tool: it is not intended to be enabled on production systems.

The way that ASan instruments code is by linking every load and store into symbols like these:

void __asan_load1(unsigned long addr);
void __asan_store1(unsigned long addr);
void __asan_load2(unsigned long addr);
void __asan_store2(unsigned long addr);
void __asan_load4(unsigned long addr);
void __asan_store4(unsigned long addr);
void __asan_load8(unsigned long addr);
void __asan_store8(unsigned long addr);
void __asan_load16(unsigned long addr);
void __asan_store16(unsigned long addr);

As you can guess, these calls load or store 1, 2, 4, 8 or 16 bytes of memory in a chunk at the virtual address addr, and they reflect how the compiler thinks the compiled code (usually C) accesses memory. ASan intercepts all reads and writes by placing itself between the executing program and any memory management. The above symbols can be implemented by any runtime environment, and the address reflects what the assumed runtime environment thinks about the (usually virtual) memory where the program will execute.
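
As a purely illustrative sketch (the function and variable names are made up, and this is not the compiler's literal output), the transformation amounts to something like this:

/* Declared by the sanitizer runtime, as listed above. */
void __asan_load4(unsigned long addr);

/* What the programmer writes: */
int read_value(int *p)
{
        return *p;                      /* a 4-byte load */
}

/* Conceptually, what the instrumented code does instead: the runtime
 * gets to validate the address before the real access happens. */
int read_value_instrumented(int *p)
{
        __asan_load4((unsigned long)p); /* check the 4-byte access */
        return *p;                      /* then perform the real load */
}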

You will instrument your code with ASan, toss heavy test workloads on the code, and see if something breaks. If something breaks, you go and investigate the breakage to find the problem. The problem will often be one or another instance of buffer overflow, out-of-bounds array or string access, or the use of a dangling pointer such as use-after-free. These problems are a side effect of using the C programming language.

When resolving the mentioned load/store symbols, ASan instrumentation is based on shadow memory, which is in turn based on the idea that a single byte “shadows” 8 bytes of memory (one bit per shadowed byte). So you allocate 1/8 the amount of memory that your instrumented program will use and shadow it at some other location using an offset calculation like this:

shadow = (address >> 3) + offset

The shadow memory is located at offset, and if our instrumented memory is N bytes then we need to allocate N/8 = N >> 3 bytes to be used as shadow memory. Notice that I say instrumented memory, not code: ASan shadows not only the actual compiled code but, mainly and most importantly, any allocations and referenced pointers the code maintains. The DATA (constants) and BSS (global variables) parts of the executable image are also shadowed. To achieve this, userspace links to a special malloc() implementation that overrides the default and manages all of this behind the scenes. One aspect of it is that malloc() will of course return chunks of memory naturally aligned to 8, so that each allocation maps cleanly onto whole shadow bytes.
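
As a minimal sketch of the formula above (the offset value here is made up purely for illustration; the real one is chosen by the runtime or the kernel):

#include <stdint.h>
#include <stddef.h>

#define SHADOW_OFFSET 0x20000000UL   /* made up for the example */

/* Translate an application address to the address of its shadow byte:
 * each shadow byte covers 8 bytes of memory, hence the >> 3. */
static inline uint8_t *shadow_for(uintptr_t addr)
{
        return (uint8_t *)((addr >> 3) + SHADOW_OFFSET);
}

/* A region of n bytes therefore needs n / 8 bytes of shadow memory. */
static inline size_t shadow_size_for(size_t n)
{
        return n >> 3;
}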

[Figure] The ASan shadow memory shadows the memory you're interested in inspecting for memory safety.

The helper library will allocate shadow memory as part of the runtime, and use it to shadow the code, data and runtime allocations of the program. Some will be allocated up front as the program is started, some will be allocated to shadow allocations at runtime, such as dynamically allocated buffers or anything else you malloc().

The error detection was originally based on the observation that a shadow byte, with each bit representing an out-of-bounds access error for one of the 8 shadowed bytes, can have a “no error” state (0x00) and 8 error states, 9 states in total. Later on a more elaborate scheme was adopted: values 1..7 indicate how many of the 8 bytes are valid for access (if you malloc() just 5 bytes then it will be 5), and there are magic bytes for different error conditions.

When a piece of memory is legally allocated and accessed, the corresponding bits are zeroed. Uninitialized memory is “poisoned”, i.e. set to a completely illegal value != 0. Further, SLAB allocations are padded with “red zones”, poisoning the memory in front of and behind every legal allocation. When accessing a byte in memory it is easy to verify that the access is legal: is the shadow byte == 0? That means all 8 bytes can be freely accessed and we can quickly proceed. Else we need a closer look. Values 1 thru 7 mean bytes 1 thru 7 are valid for access (partly addressable), so we check that, and any other value means uh oh.

  • 0xFA and 0xFB means we have hit a heap left/right redzone so an out-of-bounds access has happened
  • 0xFD means access to a free:ed heap region, so use-after-free
  • etc

Decoding the hex values gives a clear insight into what access violation we should be looking for.
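
Putting the encoding together, a sketch of the fast-path check might look like the following. It is not the real runtime, just the idea; it reuses the same made-up offset as the earlier sketch and only handles an access contained in a single 8-byte granule:

#include <stdint.h>
#include <stddef.h>

#define SHADOW_OFFSET 0x20000000UL   /* made up for the example */

static int access_is_valid(uintptr_t addr, size_t size)
{
        int8_t shadow = *(int8_t *)((addr >> 3) + SHADOW_OFFSET);

        if (shadow == 0)
                return 1;               /* all 8 bytes addressable */

        if (shadow > 0)
                /* only the first 'shadow' bytes of this granule are valid */
                return (addr & 7) + size <= (size_t)shadow;

        /* negative/magic values (redzones, freed memory, ...): report a splat */
        return 0;
}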

To be fair, the one-bit-per-byte (8-to-1) mapping is not compulsory. This should be pretty obvious. Other schemes, such as mapping even 32 bytes to one shadow byte, have been discussed for memory-constrained systems.

All memory access calls (such as any instance of dereferencing a pointer) and all functions in the library, such as all string functions, are patched to check for these conditions. It's easy when you have compiler instrumentation. We check it all. It is not very fast, but it's bearable for testing.

Researchers in one camp argue that we should all be writing software (including operating systems) in the programming language Rust in order to avoid the problems ASan is trying to solve altogether. This is a good point, but rewriting large existing software such as the Linux kernel in Rust is not seen as realistic. Thus we paper over the problem instead of using the silver bullet. Hybrid approaches to using Rust in kernel development are being discussed but so far not much has materialized.

KASan Arrives

The AddressSanitizer (ASan) was written with userspace in mind, and the userspace project is very much alive.

As the mechanism used by ASan was quite simple, and the compilers were already patched to handle shadow memory, the Kernel Address Sanitizer (KASan) was a natural step. At this point (2020) the original authors seem to spend a lot of time with the kernel, so the kernel hardening project has likely outgrown the userspace counterpart.

The magic values assigned to the shadow memory used by KASan are different:

  • 0xFA means the memory has been free:ed, so accessing it means use-after-free.
  • 0xFB means a free:ed managed resource (devm_* accessors) in the Linux kernel.
  • 0xFC and 0xFE mean we are accessing a kmalloc() redzone, indicating an out-of-bounds access.

This is why these values often occur in KASan splats. The full list of specials (not very many) can be found in mm/kasan/kasan.h.

The crucial piece needed to create KASan was a compiler flag to indicate where to shadow the kernel memory: when the kernel Image is linked, addresses are resolved to absolute virtual memory locations, and naturally all of these, plus the area where the kernel allocates memory (SLABs) at runtime, need to be shadowed. As can be seen in the kernel Makefile.kasan include, this boils down to passing the flags -fsanitize=kernel-address and -asan-mapping-offset=$(KASAN_SHADOW_OFFSET) when linking the kernel.

The kernel already had some related tools, notably kmemcheck, which could detect some cases of use-after-free and references to uninitialized memory. It was based on a slower mechanism, so KASan has effectively superseded it; kmemcheck has since been removed.

KASan was added to the kernel in a commit dated February 2015, along with support for the x86_64 architecture.

To exercise the kernel to find interesting bugs, the inventors were often using syzkaller, a tool similar to the older Trinity: it bombs the kernel with fuzzed system calls to try to provoke undefined and undesired behaviours, yielding KASan splats and revealing vulnerabilities.

Since the kernel is the kernel, we need to explicitly assign memory for shadowing, and we need to do some maneuvers that userspace cannot do or does not need to do:

  • During early initialization of the kernel we point all shadow memory to a single page of just zeroes making all accesses seem fine until we have proper memory management set up. Userspace programs do not need this phase as “someone else” (the C standard library) handles all memory set up for them.
  • Memory areas which are just big chunks of code and data can all point to a single physical page with poison. In the virtual memory it might look like kilobytes and megabytes of poison bytes but it all points to the same physical page of 4KB.
  • We selectively de-instrument code as well: code like KASan itself, the memory manager per se, the code that patches the kernel for ftrace, or the code that unwinds the stack pointer for a kernel splat clearly cannot be instrumented with KASan: it is part of the design of these facilities to poke around at random locations in memory, and that is not a bug. Since KASan was added, all of these sites in the generic kernel code have been de-instrumented, more or less (a minimal sketch of how code is opted out follows this list).
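
For the de-instrumentation mentioned in the last bullet, the kernel has an attribute for opting individual functions out of the instrumentation; a minimal sketch (the function itself is made up for the example):

#include <linux/compiler.h>

/* A function that legitimately pokes at arbitrary memory can be marked
 * so the compiler emits no ASan check calls for it. */
static __no_sanitize_address unsigned long peek_word(unsigned long addr)
{
        return *(unsigned long *)addr;
}

Whole files can also be excluded with KASAN_SANITIZE := n in their Makefile, which is how the KASan code itself avoids instrumenting itself.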

Once these generic kernel instrumentations were in place, other architectures could be amended with KASan support, with ARM64 following x86 soon in the autumn of 2015.

Some per-architecture code, usually found in arch/xxxx/mm/kasan_init.c, is needed for KASan. What this code does is initialize the shadow memory during early initialization of the virtual memory, first pointing it at a “zero page” and later on populating all the shadow memory with poisoned shadow pages.

The shadow memory is special and needs to be populated by accessing the very lowest layer of the virtual memory abstraction: we manipulate the page tables from top to bottom (pgd, p4d, pud, pmd, pte) to make sure that $(KASAN_SHADOW_OFFSET) points to memory that has valid page table entries.

We need to use the kernel's memblock early memory manager to set up memory to hold the page tables themselves in some cases. The memblock memory manager also provides us with a list of all the kernel RAM: we loop over it using for_each_mem_range() and populate the shadow memory for each range. As mentioned, we first point all shadows to a zero page and later on to proper KASan shadow memory, and then KASan kicks into action.
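
A heavily simplified sketch of that loop follows; populate_shadow() is a hypothetical stand-in for the page table walk, and the real code lives in the per-architecture kasan_init.c:

#include <linux/init.h>
#include <linux/memblock.h>
#include <linux/kasan.h>

/* Hypothetical helper: walk pgd/p4d/pud/pmd/pte for the given shadow
 * range and back it with pages allocated from memblock. */
static void __init populate_shadow(void *start, void *end)
{
        /* elided in this sketch */
}

static void __init kasan_populate_kernel_ram(void)
{
        phys_addr_t pa_start, pa_end;
        u64 i;

        /* Loop over all kernel RAM known to memblock and make sure the
         * corresponding shadow region is backed by real, writable pages. */
        for_each_mem_range(i, &pa_start, &pa_end) {
                void *shadow_lo = kasan_mem_to_shadow(__va(pa_start));
                void *shadow_hi = kasan_mem_to_shadow(__va(pa_end));

                populate_shadow(shadow_lo, shadow_hi);
        }
}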

A special case happens when moving from the “zero page” KASan memory to proper shadow memory: we would risk running kernel threads into partially initialized shadow memory and pulling the ground out from under ourselves. Not good. Therefore the global page table for the entire kernel (the one that has all shadow memory pointing to the zero page) is copied and used during this phase. It is then finally replaced, in a single atomic operation, with the proper KASan-instrumented page table that points to the real shadow memory.

Further, all optimized memory manipulation functions from the standard library need to be patched, as these often have assembly-optimized versions in the architecture. This concerns memcpy(), memmove() and memset() especially. While the quick optimized versions are nice for production systems, we replace them with open-coded variants that do proper memory accesses in C and therefore will be intercepted by KASan.
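
As a rough illustration of what “open-coded” means here (this is not the kernel's actual replacement, just the idea), a plain C version whose individual byte accesses the compiler can instrument might look like:

#include <stddef.h>

/* A naive memcpy written in plain C: every load and store is visible to
 * the compiler, so KASan gets to check each access. An assembly-optimized
 * version would bypass the instrumentation entirely. */
static void *checked_memcpy(void *dst, const void *src, size_t len)
{
        unsigned char *d = dst;
        const unsigned char *s = src;

        while (len--)
                *d++ = *s++;
        return dst;
}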

All architectures follow this pattern, more or less. ARM64 supports hardware tags, which essentially means that the architecture supports hardware acceleration of KASan. As this is pretty fast, there is a discussion about using KASan even on production systems to capture problems seen very seldom.

KASan on ARM32

Then there was the attempt to provide KASan for ARM32.

The very first posting of KASan in 2014 was actually targeting x86 and ARM32 and was already working-kind-of-prototype-ish on ARM32. This did not proceed. The main reason was that when using modules, these are loaded into a designated virtual memory area rather than the kernel “vmalloc area” which is the main area used for memory allocations and what most architectures use. So when trying to use loadable modules the code would crash as this RAM was not shadowed.

The developers tried to create an option to move modules into the vmalloc area, enabled by default when using KASan, to work around this limitation.

The special module area is however used for a reason. Since it was placed in close proximity to the main kernel TEXT segment, the code could be reached using short jumps rather than long jumps: no need to load the whole 32-bit program counter anew whenever a piece of code loaded from a module was called. It made code in modules as quick as normal compiled-in kernel code, +/– cache effects. This provided serious performance benefits.

As a result KASan support for ARM was dropped from the initial KASan proposal and the scope was limited to x86, then followed by ARM64. “We will look into this later”.

In the spring of 2015 I started looking into KASan and was testing the patches on ARM64 for Linaro. In June I tried to get KASan working on ARM32. Andrey Ryabinin pointed out that he actually had KASan running on ARM32 already. After some iterations we got it working on some ARM32 platforms and I was successfully stressing it a bit using the Trinity syscall fuzzer. This version solved the problem of shadowing the loadable modules by simply shadowing all that memory as well.

The central problem with running KASan on a 32-bit platform, as opposed to a 64-bit platform, was that the simplest approach used up 1/8 of the whole address space, which was not a problem for 64-bit platforms with ample virtual address space available. (Notice that the amount of physical memory doesn't really matter, as KASan will use the trick of just pointing large chunks of virtual memory to a single physical poison page.) On 32-bit platforms this approach ate our limited address space for lunch.

We were already setting aside several statically assigned allocations in the virtual address space, so we needed to make sure that we only shadow the addresses actually used by the kernel. We would not need to shadow the addresses used by userspace, and the shadow memory virtual range requirement could thus be shrunk from 512 MB (1/8 of the full 4 GB address space) to 130 MB (roughly 1/8 of what the kernel actually uses) for the traditional 3/1 GB userspace/kernel virtual address split used on ARM32. (To understand this split, read my article How the ARM32 Kernel Starts which tries to tell the story.)

Sleeping Beauty

This more fine-grained approach to assigning shadow memory would create some devil-in-the-details bugs that do not come out if you shadow the whole virtual address space, as the 64-bit platforms do.

A random access to some memory that should not be poked (and thus lacking shadow memory) will lead to a crash. While QEMU and some hardware was certainly working, there were some real hardware platforms that just would not boot. Things were getting tedious.

KASan for ARM32 development ground to a halt because we were unable to iron out the bugs. The initial patches from Andrey started changing hands, and these out-of-tree patches were used by some vendors to test code on some hardware.

Personally, I had other assignments and could not take over and develop KASan at this point. I'm not saying that I was necessarily a good candidate at the time either; I was just testing and tinkering with KASan because ARM32 vendors had a general interest in the technology.

As a result, KASan for ARM32 was pending out-of-tree for almost 5 years. In 2017 Abbot Liu was working on it and fixed up the support for LPAE (large physical address extension) and in 2019 Florian Fainelli picked up where Abbot left off.

Some things were getting fixed along the road, but what was needed was some focused attention and these people had other things on their plate as well.

Finally Fixing the Bugs

In April 2020 I decided to pick up the patches and have a go at it. I sloppily named my first iteration “v2” while it was something like v7.

I quickly got support from two key people: Florian Fainelli and Ard Biesheuvel. Florian had some systems with the same odd behaviour of just not working, like my pet Qualcomm APQ8060 DragonBoard that I had been using all along for testing. Ard was using the patches for developing and debugging things like EFI and KASLR.

During successive iterations we managed to find and patch the remaining bugs, one by one:

  • A hard-coded bitmask assuming the thread size order to be 1 (4096 bytes) on ARMv4 and ARMv5 silicon made the kernel crash when entering userspace. KASan increases the thread order so that there is space for redzones before and after allocations, so it needed more space. After reading assembly one line at a time I finally figured this out and patched it.
  • The code was switching the MMU table by simply altering the TTBR0 register. This worked on some machines, especially ARMv7 silicon, but the right way to do it is to use the per-CPU macro cpu_switch_mm(), which looks intuitive but is an ARM32-ism, which is why the original KASan authors didn't know about it. This macro accounts for tiny differences between different ARM cores, some even custom to certain vendors.
  • Something fishy was going on with the attached device tree. It turned out, after much debugging, that the attached devicetree could end up in memory outside of the kernel's 1:1 physical-to-virtual mapping. The page table entries that would have translated the physical memory area where the device tree was stored were wiped clean, yielding a page fault. The problem was not caused by KASan per se: it was a result of the kernel getting over a certain size, and all the instrumentation added to the kernel makes it bigger, to the point that it revealed the bug. We were en route to fixing a bug related to big compressed kernel images. I developed debugging code specifically to find this bug and then made a patch making sure not to wipe that part of the mapping. (This post gives a detailed explanation of the problem.) Ard quickly came up with a better fix: let's move the device tree to a determined place in the fixed mappings and handle it as if it were a ROM.

These were the major roadblocks. Fixing these bugs created new bugs which we also fixed. Ard and Florian mopped up the fallout.

In the middle of development, five-level page tables were introduced and Mike Rapoport made some generic improvements and cleanups to the core memory management code, so I had to account for these changes as well, effectively rewriting the KASan ARM32 shadow memory initialization code. At one point I also broke the LPAE support and had to repair it.

Eventually the v16 patch set was finalized in October 2020 and submitted to Russell King's patch tracker, and he merged it for Linux v5.11-rc1.

Retrospect

In retrospect, three things came out nicely in the design of KASan for ARM32:

  • We do not shadow or intercept highmem allocations, which is nice because we want to get rid of highmem altogether.
  • We do not shadow the userspace memory, which is nice because we want to move userspace to its own address space altogether.
  • Personally I finally got a detailed idea of how the ARM32 kernel decompresses and starts, and the abstract concepts of highmem, lowmem, and the rest of those wild animals. I have written three different articles on this blog as a result, with ideas for even more of them. By explaining how things work to others I realize what I can't explain and as a result I go and research it.

Andrey and Dmitry have since worked not just on ASan and KASan but also on what was initially called the KernelThreadSanitizer (KTSAN), which was eventually merged under the name KernelConcurrencySanitizer (KCSAN). The idea is again to use shadow memory, but now for concurrency debugging at runtime. I do not know more than this.

 