people.kernel.org


from Vlastimil Babka

In this post I would like to raise awareness of an effort to reduce the limitations of anonymous VMA merging, in the form of an ongoing master's thesis by Jakub Matena, which I'm supervising. I suspect there are userspace projects that would benefit, and that their authors may be aware of the current limitations and would welcome seeing them relaxed, but don't read the linux-mm mailing list – the last version of the RFC posted there is here

In a high-level summary, merging of anonymous VMAs in Linux generally happens as soon as they become adjacent in the address space and have compatible access protection bits (and also mempolicies etc.). However, due to internal implementation details (involving VMA and page offsets), some operations such as mremap() that move VMAs around the address space can cause anonymous VMAs not to merge even if everything else is compatible. This is then visible as extra entries in /proc/pid/maps that could in theory be one larger entry, as the associated larger memory and CPU overhead of VMA operations, or even as hitting the limit on VMAs per process, set by the vm.max_map_count sysctl. A related issue is that the mremap() syscall itself cannot currently process multiple VMAs, so a process that needs to further mremap() the non-merged areas would have to somehow learn the extra boundaries first and perform a sequence of multiple mremap() calls to achieve its goal.
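
To make the limitation concrete, here is a minimal sketch (my own illustration, not code from the RFC) that may leave two adjacent anonymous VMAs unmerged on current kernels:

    #define _GNU_SOURCE         /* for the mremap() flags */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/mman.h>

    int main(void)
    {
        long pg = sysconf(_SC_PAGESIZE);
        char cmd[64];

        /* Reserve three pages so we control the layout. */
        char *a = mmap(NULL, 3 * pg, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (a == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        /* Fault in the first and the last page so both get anonymous
         * pages with distinct internal offsets. */
        a[0] = 1;
        a[2 * pg] = 1;

        /* Move the last page down so it becomes adjacent to the first.
         * The protection bits match, but the internal offsets no longer
         * line up, so the two pieces may stay as separate VMAs. */
        if (mremap(a + 2 * pg, pg, pg, MREMAP_MAYMOVE | MREMAP_FIXED,
                   a + pg) == MAP_FAILED) {
            perror("mremap");
            return 1;
        }

        /* Inspect the layout: look for the region at 'a' below and
         * count how many entries it occupies. */
        printf("region starts at %p\n", (void *)a);
        snprintf(cmd, sizeof(cmd), "cat /proc/%d/maps", (int)getpid());
        return system(cmd);
    }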

Does any of the above sound familiar because you found it out already while working on a Linux application? Then we would love your feedback on the RFC linked above (or privately). The catch is that while the RFC can lift the merging limitations in many scenarios, this doesn't come for free: there is some overhead added to e.g. mremap(), and especially extra complexity in already complex code. Thus, identifying workloads that would benefit a lot would be helpful. Thanks!

 

from metan's blog

The new LTP release will include changes that introduce the concept of a maximal test runtime, so let me briefly explain what exactly that is. To begin with, let's make an observation about LTP test durations. Most LTP tests fall into one of two categories. The first type is fast, generally finishing under a second or two and most of the time in a fraction of that. These tests mostly prepare a simple environment, call a syscall or two, clean up, and are done. The second type runs for longer, with durations usually counted in minutes. These tests include I/O stress tests, various regression tests that loop in order to hit a race, timer precision tests that have to sample time intervals, and so on.

Historically, the test duration in LTP was limited by a single value called timeout, which defaulted to a compromise of 5 minutes. That is the worst value for both classes of tests: it's clearly too long for short running tests and at the same time too short for a significant fraction of the long running ones. This was clear just by checking the tests that actually adjusted the default timeout. Quite a few short running tests that were prone to deadlocks decreased the default timeout to a much shorter interval, and at the same time quite a few long running tests increased it.

But back to how the test duration was handled in the long running tests. The duration of a long running test is usually bounded by a time limit as well as a limit on the number of iterations, and the test exits on whichever is hit first. In order to exit before the timeout, these tests watched the elapsed runtime and left the main loop once the runtime got close enough to the test timeout. The problem was that “close enough” was loosely defined and implemented differently in each test, which obviously leads to different problems. For instance, if a test looped until there were 10 seconds left before the timeout, and the test cleanup took more than 10 seconds on slower hardware, there was no way to avoid triggering the timeout, which resulted in a test failure. Increasing the test timeout didn't help: the test simply ran for longer and hit the timeout at the end either way. At the same time, if the test reserved a proportion of the timeout for the cleanup, things didn't work out when the timeout was scaled down in order to shorten the test duration.

After careful analysis it became clear that the test duration has to be bounded by two distinct values. The new values are now called timeout and max_runtime, and the test duration is bounded by the sum of the two. The idea behind this should be clear to the reader at this point: max_runtime limits the active part of the test, that is the part where the actual test loop is executed, while the timeout covers the test setup and cleanup plus all inaccuracies in the accounting. Each of them can be scaled separately, which gives us enough flexibility to scale from small embedded boards all the way up to supercomputers. This change also allowed us to lower the default test timeout to 30 seconds. And if you are asking yourself how max_runtime is set for short running tests, the answer is simple: it's set to zero, since the default timeout is more than enough to cope with these.
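
To illustrate, a long running test under the new scheme could look roughly like the sketch below. It is modeled on the current LTP C API (tst_remaining_runtime() and the struct fields exist upstream), but treat the details as illustrative rather than as a real testcase:

    #include "tst_test.h"

    static void run(void)
    {
        int i;

        for (i = 0; i < 100000; i++) {
            /* Leave the loop once the runtime budget is exhausted;
             * the cleanup that follows is covered by the timeout. */
            if (!tst_remaining_runtime()) {
                tst_res(TINFO, "out of runtime, exiting the loop");
                break;
            }

            /* ... one iteration of the actual stress/race loop ... */
        }

        tst_res(TPASS, "survived %d iterations", i);
    }

    static struct tst_test test = {
        .test_all = run,
        .max_runtime = 60,    /* bounds only the active loop above */
    };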

All of this also helps to kill misbehaving tests much faster, since we now have a much better estimate of the expected test duration. And yes, this is a big deal when you are running thousands of testcases; it may speed up the testrun quite significantly even with just a few deadlocked tests.

But things do not end here; there is a bit of added complexity on top of this. Some testcases will call the main test loop more than once, because we have a few “multiplier” flags that can increase test coverage quite a bit. For instance we have the so-called .all_filesystems flag which, when set, executes the test on top of each of the most commonly used filesystems. There is also a flag that runs the test for different variants, which is sometimes used to exercise more than one syscall variant, e.g. for clock_gettime() we run the same test for both the syscall and the VDSO. All these multipliers have to be taken into account when the overall test duration is computed, as sketched below. However, we now have all of these flags in the metadata file, hence we are getting really close to a state where we will have a tool that can compute an accurate upper bound on the duration of a given test. That, however, is a completely different story for a different short article.
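
Extending the sketch above, the multiplier flags live in the same test definition, and a rough upper bound follows from multiplying them out (again illustrative, with field names as in current LTP):

    static struct tst_test test = {
        .test_all = run,
        .all_filesystems = 1,    /* repeat on each common filesystem */
        .test_variants = 2,      /* e.g. syscall vs. VDSO */
        .max_runtime = 30,
    };

    /* A rough upper bound for the whole test is then something like:
     *
     *   nr_filesystems * test_variants * max_runtime + timeout
     *
     * which is the computation a metadata-driven tool could perform. */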

 

from Konstantin Ryabitsev

Once every couple of years someone unfailingly takes advantage of the following two facts:

  1. most large git hosting providers set up object sharing between forks of the same repository in order to both save storage space and improve user experience
  2. git's loose internal structure allows any shared object to be accessed from any other repository

Thus, hilarity ensues on a fairly regular basis:

Every time this happens, many wonder how come this isn't treated like a nasty security bug, and the answer, inevitably, is “it's complicated.”

Blobs, trees, commits, oh my

Under the hood, git repositories are a bunch of objects — blobs, trees, and commits. Blobs are file contents, trees are directory listings that establish the relationship between file names and the blobs, and commits are like still frames in a movie reel that show where all the trees and blobs were at a specific point in time. Each next commit refers to the hash of the previous commit, which is how we know in what order these still frames should be put together to make a movie.

Each of these objects has a hash value, which is how they are stored inside the git directory itself (look in .git/objects). When git was originally designed, over a decade ago, it didn't really have a concept of “branches” — there was just a symlink HEAD pointing to the latest commit. If you wanted to work on several things at once, you simply cloned the repository and did it in a separate directory with its own HEAD. Cloning was a very efficient operation, as through the magic of hardlinking, hundreds of clones would take up about as much room on your disk as a single one.

Fast-forward to today

Git is a lot more complicated these days, but the basic concepts are the same. You still have blobs, trees, commits, and they are all still stored internally as hashes. Under the hood, git has developed quite a bit over the past decade to make it more efficient to store and retrieve millions and tens of millions of repository objects. Most of them are now stored inside special pack files, which are organized rather similarly to compressed video clips — formats like webm don't really store each frame as a separate image, as there is usually very little difference between any two adjacent frames. It makes much more sense to store just the difference (“delta”) between two still images until you come to a designated “key frame”.

Similarly, when generating pack files, git will try to calculate the deltas between objects and only store their incremental differences — at least until it decides that it's time to start from a new “key frame”, just so checking out a tag from a year ago doesn't require replaying a year's worth of diffs. At the same time, there has been a lot of work to make the act of pushing/pulling objects more efficient. When someone sends you a pull request and you want to review their changes, you don't want to download their entire tree. Your git client and the remote git server compare what objects they already have on each end, with the goal of sending you just the objects that you are lacking.

Optimizing public forks

If you look at the GitHub links above, check out how many forks torvalds/linux has on that hosting service. Right now, that number says “41.1k”. With the best kinds of optimizations in place, a bare linux.git repository takes up roughly 3 GB on disk. Doing quick math, if each one of these 41.1k forks were completely standalone, that would require about 125 TB of disk storage. Throw in a few hundred terabytes for all the forks of Chromium, Android, and Gecko, and soon you're talking Real Large Numbers. Which is why nobody actually does it this way.

Remember how I said that git clones were designed to be extremely efficient and to reuse objects whenever possible? This is how forks are actually organized on GitHub (and git.kernel.org, for that matter), except it's a bit more complicated these days than simply hardlinking the contents of .git/objects around.

On the git.kernel.org side of things, we store the objects from all forks of linux.git in a single “object storage” repository (see https://pypi.org/project/grokmirror/ for the gory details). This has many positive side-effects:

  • all of git.kernel.org, with its hundreds of linux.git forks takes up just 30G of disk space
  • when Linus merges his usual set of pull requests and performs “git push”, he only has to send a very small subset of those objects, because we probably already have most of them
  • similarly, when maintainers pull, rebase, and push their own forks, they don't have to send any of the objects back to us, as we already have them

Object sharing allows us to greatly improve not only the backend infrastructure on our end, but also the experience of git's end-users, who directly benefit from not having to push around nearly as many bits.

The dark side of object sharing

With all the benefits of object sharing comes one important downside — namely, you can access any shared object through any of the forks. So, if you fork linux.git and push your own commit into it, any of the 41.1k forks will have access to the objects referenced by your commit. If you know the hash of that object, and if the web UI allows access to arbitrary repository objects by their hash, you can even view it and link to it from any of the forks, making it look as if that object is actually part of that particular repository (which is how we get the links at the start of this article).

So, why can't GitHub (or git.kernel.org) prevent this from happening? Remember when I said that a git repository is like a movie full of adjacent still frames? When you look at a scene in a movie, it is very easy for you to identify all objects in any given still frame — there is a street, a car, and a person. However, if I show you a picture of a car and ask you “does this car show up in this movie,” the only way you can answer this question is by watching the entire thing from the beginning to the end, carefully scrutinizing every shot.

In just the same way, to check if a blob from the shared repository actually belongs in a fork, git has to look at all that repository's tips and work its way backwards, commit by commit, to see if any of the tree objects reference that particular blob. Needless to say, this is an extremely expensive operation, which, if enabled, would allow anyone to easily DoS a git server with only a handful of requests.

This may change in the future, though. For example, if you access a commit that is not part of a repository, GitHub will now show you a warning message:

Looking up “does this commit belong in this repository” used to be a very expensive operation, too, until git learned to generate commit graphs (see man git-commit-graph). It is possible that at some point in the future a similar feature will land that will make it easy to perform a similar check for the blob, which will allow GitHub to show a similar warning when someone accesses shared blobs by their hash from the wrong repo.

Why this isn't a security bug

The fact that an object is part of the shared storage doesn't really have any impact on the forks. When you perform a git-aware operation like “git clone” or “git pull,” git-daemon will only send the objects actually belonging to that repository. Furthermore, your git client deliberately doesn't trust the remote to send the right stuff, so it will perform its own connectivity checks before accepting anything from the server.

If you're extra paranoid, you're encouraged to set receive.fsckObjects for some additional protection against in-flight object corruption, and if you're really serious about securing your repositories, then you should set up and use git object signing:

This is, incidentally, also how you would be able to verify whether commits were made by the actual Linus Torvalds or merely by someone pretending to be him.

Parting words

This neither proves nor disproves the identity of “Satoshi.” However, given Linus's widely known negative opinions of C++, it's probably not very likely that it's the language he'd pick to write some proof of concept code.

 

from metan's blog

Unfortunately FOSDEM is going to be virtual again this year, but that does not stop us from organizing the testing and automation devroom. Have a look at our CfP, and if you have something interesting to present, go ahead and fill in a submission!

 

from nmenon

One of the cool things with kernel.org is the fact that we can rotate maintainership depending on workload. So, https://git.kernel.org/pub/scm/linux/kernel/git/nmenon/linux.git/ is now my personal tree, and we have picked up https://git.kernel.org/pub/scm/linux/kernel/git/ti/linux.git/ as a co-maintained TI tree on which Vignesh and I rotate responsibilities, with Tony Lindgren and Tero as backup.

Thanks to Konstantin and Stephen for making this happen!

NOTE: No change in Tony's tree @ https://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap.git/

 

from Konstantin Ryabitsev

This is the second installment in the series where we're looking at using the public-inbox lei tool for interacting with remote mailing list archives such as lore.kernel.org. In the previous article we looked at delivering your search results locally, and today let's look at doing the same, but with remote IMAP folders. For feedback, send a follow-up to this message on the workflows list:

For our example query today, we'll do some stargazing. The following will show us all mail sent by Linus Torvalds:

f:torvalds AND rt:1.month.ago..

I'm mostly using it because it's short, but you may want to use something similar if you have co-maintainer duties and want to automatically receive a copy of all mail sent by your fellow subsystem maintainers.

Note on saving credentials

When accessing IMAP folders, lei will require a username and password. Unless you really like typing them in manually every time you run lei up, you will probably want to have them cached on your local system. Lei will defer to git-credential-helper for this purpose, so if you haven't already set this up, you will want to do that now.

The two commonly used credential storage backends on Linux are “libsecret” and “store”:

  • libsecret is the preferred mechanism, as it will work with your Desktop Environment's keyring manager to store the credentials in a relatively safe fashion (encrypted at rest).

  • store should only be used if you don't have any other option, as it will record the credentials without any kind of encryption in the ~/.git-credentials file. However, if nothing else is working for you and you are fairly confident in the security of your system, it's something you can use.

Simply run the following command to configure the credential helper globally for your environment:

git config --global credential.helper libsecret

For more in-depth information about this topic, see man git-credential.

Getting your IMAP server ready

Before you start, you should get some information about your IMAP server, such as your login information. For my examples, I'm going to use Gmail, Migadu, and a generic Dovecot IMAP server installation, which should hopefully cover enough ground to be useful for the vast majority of cases.

What you will need beforehand:

  • the IMAP server hostname and port (if it's not 993)
  • the IMAP username
  • the IMAP password

It will also help to know the folder hierarchy. Some IMAP servers create all subfolders below INBOX, while others don't really care.

Generic Dovecot

We happen to be running Dovecot on mail.codeaurora.org, so I'm going to use it as my “generic Dovecot” system and run the following command:

lei q -I https://lore.kernel.org/all/ -d mid \
  -o imaps://mail.codeaurora.org/INBOX/torvalds \
  <<< 'f:torvalds AND rt:1.month.ago..'

The <<< bit above is a Bash-ism, so if you're using a different shell, you can use the POSIX-compliant heredoc format instead:

lei q -I https://lore.kernel.org/all/ -d mid \
  -o imaps://mail.codeaurora.org/INBOX/torvalds <<EOF
f:torvalds AND rt:1.month.ago..
EOF

The first time you run it, you should get a username: and password: prompt, but after that the credentials should be cached and no longer required on each repeated access to the same imaps server.

NOTE: some IMAP servers use the dot . instead of the slash / for indicating folder hierarchy, so if INBOX/torvalds is not working for you, try INBOX.torvalds instead.

Refreshing and subscribing to IMAP folders

If the above command succeeded, then you should be able to view the IMAP folder in your mail client. If you cannot see torvalds in your list of available folders, then you may need to refresh and/or subscribe to the newly created folder. The process will be different for every mail client, but it shouldn't be too hard to find.

The same with Migadu

If you have a linux.dev account (see https://korg.docs.kernel.org/linuxdev.html), then you probably already know that we ask you not to use your account for subscribing to busy mailing lists. This is due to Migadu imposing soft limits on how much incoming email is allowed for each hosted domain — so using lei + IMAP is an excellent alternative.

To set this up with your linux.dev account (or any other account hosted on Migadu), use the following command:

lei q -I https://lore.kernel.org/all/ -d mid \
  -o imaps://imap.migadu.com/lei/torvalds \
  <<< 'f:torvalds AND rt:1.month.ago..'

Again, you will need to subscribe to the new lei/torvalds folder to see it in your mail client.

The same with Gmail

If you are a Gmail user and aren't already using IMAP, then you will need to jump through a few additional hoops before you are able to get going. Google is attempting to enhance the security of your account by restricting how much can be done with just your Google username and password, so services like IMAP are not available without setting up a special “app password” that can only be used for mail access.

Enabling app passwords requires that you first enable 2-factor authentication, and then generate a random app password to use with IMAP. Please follow the process described in the following Google document: https://support.google.com/mail/answer/185833

Once you have the app password for use with IMAP, you can use lei and imaps just like with any other IMAP server:

lei q -I https://lore.kernel.org/all/ -d mid \
  -o imaps://imap.gmail.com/lei/torvalds \
  <<< 'f:torvalds AND rt:1.month.ago..'

It requires a browser page reload for the folder to show up in your Gmail web UI.

Automating lei up runs

If you're setting up IMAP access, then you probably want IMAP updates to happen behind the scenes without your direct involvement. All you need to do is periodically run lei up --all (plus -q if you don't want non-critical output).

If you're just getting started, then you can set up a simple screen session with a watch command at a 10-minute interval, like so:

watch -n 600 lei up --all

You can then detach from the screen terminal and let that command continue behind the scenes. The main problem with this approach is that it won't survive a system reboot, so if everything is working well and you want to make the command a bit more permanent, you can set up a systemd user timer.

Here's the service file to put in ~/.config/systemd/user/lei-up-all.service:

[Unit]
Description=lei up --all service
ConditionPathExists=%h/.local/share/lei

[Service]
Type=oneshot
ExecStart=/usr/bin/lei up --all -q

[Install]
WantedBy=mail.target

And the timer file to put in ~/.config/systemd/user/lei-up-all.timer:

[Unit]
Description=lei up --all timer
ConditionPathExists=%h/.local/share/lei

[Timer]
OnUnitInactiveSec=10m

[Install]
WantedBy=default.target

Enable the timer:

systemctl --user enable --now lei-up-all.timer

You can use journalctl -xn to view the latest journal messages and make sure that the timer is running well.

CAUTION: user timers only run when the user is logged in. This is not actually that bad, as your keyring is not going to be unlocked unless you are logged into the desktop session. If you want to run lei up as a background process on some server, you should set up a system-level timer and use a different git-credential mechanism (e.g. store) — and you probably shouldn't do this on a shared system where you have to worry about your account credentials being stolen.

Coming up next

In the next installment we'll look at some other part of lei and public-inbox... I haven't yet decided which. :)

 

from Konstantin Ryabitsev

I am going to post a series of articles about public inbox's new lei tool (stands for “local email interface”, but is clearly a “lorelei” joke :)). In addition to being posted on the blog, it is also available on the workflows mailing list, so if you want to reply with a follow up, see this link:

What's the problem?

One of kernel developers' perennial complaints is that they just get Too Much Damn Email. Nobody in their right mind subscribes to “the LKML” (linux-kernel@vger.kernel.org) because it acts as a dumping ground for all email and the resulting firehose of patches and rants is completely impossible for a sane human being to follow.

For this reason, actual Linux development tends to happen on separate mailing lists dedicated to each particular subsystem. In turn, this has several negative side-effects:

  1. Developers working across multiple subsystems end up needing to subscribe to many different mailing lists in order to stay aware of what is happening in each area of the kernel.

  2. Contributors submitting patches find it increasingly difficult to know where to send their work, especially if their patches touch many different subsystems.

The get_maintainer.pl script is an attempt to solve problem #2: it looks at the diff contents in order to suggest the list of recipients for each submitted patch. However, the submitter needs to be both aware of this script and know how to properly configure it in order to use it correctly with git-send-email.

Further complicating the matter is the fact that get_maintainer.pl relies on the entries in the MAINTAINERS file. Any edits to that file must go through the regular patch submission and review process and it may take days or weeks before the updates find their way to individual contributors.

Wouldn't it be nice if contributors could just send their patches to one place, and developers could just filter out the stuff that is relevant to their subsystem and ignore the rest?

lore meets lei

Public-inbox started out as a distributed mailing list archival framework with powerful search capabilities. We were happy to adopt it for our needs when we needed a proper home for kernel mailing list archives — thus, lore.kernel.org came online.

Even though it started out as merely a list archival service, it quickly became obvious that lore could be used for a lot more. Many developers ended up using its search features to quickly locate emails of interest, which in turn raised a simple question — what if there was a way to “save a search” and have it deliver all new incoming mail matching certain parameters straight to the developers' inbox?

You can now do this with lei.

lore's search syntax

Public-inbox uses Xapian behind the scenes, which makes it possible to tailor the keyword database narrowly to very specific needs.

For example, did you know that you can search lore.kernel.org for patches that touch specific files? Here's every patch that touched the MAINTAINERS file:

How about every patch that modifies a function that starts with floppy_:

Say you're the floppy driver maintainer and want to find all mail that touches drivers/block/floppy.c, modifies any function that starts with floppy_, or has “floppy” in the subject, plus any other mail that mentions “floppy” together with the words “bug” or “regression”. And maybe limit the results to just the past month.

Here's the query:

    (dfn:drivers/block/floppy.c OR dfhh:floppy_* OR s:floppy
     OR ((nq:bug OR nq:regression) AND nq:floppy))
    AND rt:1.month.ago..

And here are the results:

Now, how about getting that straight into your mailbox, so you don't have to subscribe to the (very busy) linux-block list, if you are the floppy maintainer?

Installing lei

Lei is very new and probably isn't yet available as part of your distribution, but I hope that will change quickly once everyone realizes how awesome it is.

I'm working on packaging lei for Fedora, so depending on when you're reading this, try dnf install lei — maybe it's already there. If it's not in Fedora proper yet, you can get it from my copr:

    dnf copr enable icon/b4
    dnf install lei

If you're not a Fedora user, just consult the INSTALL file:

Maildir or IMAP?

Lei can deliver search results either into a local maildir, or to a remote IMAP folder (or both). We'll do local maildir first and look at IMAP in a future follow-up, as it requires some preparatory work.

Getting going with lei-q

Let's take the exact query we used for the floppy drive above, and get lei to deliver entire matching threads into a local maildir folder that we can read with mutt:

    lei q -I https://lore.kernel.org/all/ -o ~/Mail/floppy \
      --threads --dedupe=mid \
      '(dfn:drivers/block/floppy.c OR dfhh:floppy_* OR s:floppy \
      OR ((nq:bug OR nq:regression) AND nq:floppy)) \
      AND rt:1.month.ago..'

Before you run it, let's understand what it's going to do:

  • -I https://lore.kernel.org/all/ will query the aggregated index that contains information about all mailing lists archived on lore.kernel.org. It doesn't matter to which list the patch was sent — if it's on lore, the query will find it.

  • -o ~/Mail/floppy will create a new Maildir folder and put the search results there. Make sure that this folder doesn't already exist, or lei will clobber anything already present there (unless you use --augment, but I haven't tested this very extensively yet, so best to start with a clean slate).

  • --threads will deliver entire threads even if the match is somewhere in the middle of the discussion. This is handy if, for example, someone says “this sounds like a bug in the floppy subsystem” somewhere in the middle of a conversation and --threads will automatically get you the entire conversation context.

  • --dedupe=mid will deduplicate results based on the message-id header. The default behaviour is to dedupe based on the body contents, but with so many lists still adding junky “sent to the foo list” footers, this tends to result in too many duplicated results. Passing --dedupe=mid is less safe (someone could sneak in a bogus message with an identical message-id and have it delivered to you instead), but more convenient. YMMV, BYOB.

  • Make sure you don't omit the final “..” in the rt: query parameter, or you will only get mail that was sent on that date, not since that date.

As always, backslashes and newlines are there just for readability — you don't need to use them.

After the command completes, you should get something similar to what is below:

    # /usr/bin/curl -Sf -s -d '' https://lore.kernel.org/all/?x=m&t=1&q=(omitted)
    # /home/user/.local/share/lei/store 0/0
    # https://lore.kernel.org/all/ 122/?
    # https://lore.kernel.org/all/ 227/227
    # 150 written to /home/user/Mail/floppy/ (227 matches)

A few things to notice here:

  1. The command actually executes a curl call and retrieves the results as an mbox file.
  2. Lei automatically converts 1.month.ago into a precise timestamp.
  3. The command wrote 150 messages into the maildir we specified.

We can now view these results with mutt (or neomutt):

    neomutt -f ~/Mail/floppy

It is safe to delete mail from this folder — it will not get re-added during lei up runs, as lei keeps track of seen messages on its own.

Updating with lei-up

By default, lei q will save your search and start keeping track of it. To see your saved searches, run:

    $ lei ls-search
    /home/user/Mail/floppy

To fetch the newest messages:

    lei up ~/Mail/floppy

You will notice that the first line of output will say that lei automatically limited the results to only those that arrived since the last time lei was invoked for this particular saved search, so you will most likely get no new messages.

As you add more queries in the future, you can update them all at once using:

    lei up --all

Editing and discarding saved searches

To edit your saved search, just run lei edit-search. This will bring up your $EDITOR with the configuration file lei uses internally:

    ; to refresh with new results, run: lei up /home/user/Mail/floppy
    ; `maxuid' and `lastresult' lines are maintained by "lei up" for optimization
    [lei]
        q = (dfn:drivers/block/floppy.c OR dfhh:floppy_* OR s:floppy OR \
            ((nq:bug OR nq:regression) AND nq:floppy)) AND rt:1.month.ago..
    [lei "q"]
        include = https://lore.kernel.org/all/
        external = 1
        local = 1
        remote = 1
        threads = 1
        dedupe = mid
        output = maildir:/home/user/Mail/floppy
    [external "/home/user/.local/share/lei/store"]
        maxuid = 4821
    [external "https://lore.kernel.org/all/"]
        lastresult = 1636129583

This lets you edit the query parameters if you want to add/remove specific keywords. I suggest you test them on lore.kernel.org first before putting them into the configuration file, just to make sure you don't end up retrieving tens of thousands of messages by mistake.

To delete a saved search, run:

    lei forget-search ~/Mail/floppy

This doesn't delete anything from ~/Mail/floppy, it just makes it impossible to run lei up to update it.

Subscribing to entire mailing lists

To subscribe to entire mailing lists, you can query based on the list-id header. For example, if you wanted to replace your individual subscriptions to linux-block and linux-scsi with a single lei command, do:

    lei q -I https://lore.kernel.org/all/ -o ~/Mail/lists --dedupe=mid \
      '(l:linux-block.vger.kernel.org OR l:linux-scsi.vger.kernel.org) AND rt:1.week.ago..'

You can always edit this to add more lists at any time.

Coming next

In the next series installment, I'll talk about how to deliver these results straight to a remote IMAP folder and how to set up a systemd timer to get newest mail automatically (if that's your thing — I prefer to run lei up manually and only when I'm ready for it).

 

from metan's blog

As usual, we had an LTP release at the end of September. What was unusual, though, is the number of patches that went in: we got 483 patches, which is about 150 more than in the last three releases. And the number of patches had been growing slowly even before that.

While it's great and I'm happy that the project is growing, there is a catch: growth like this puts additional strain on the maintainers, particularly on the patch reviewers. For me it was more than 120 patches reviewed during the four month period, and that only counts the final versions of patches that were accepted into the repository; it's not unusual for a patchset to go through three or more revisions before the work is ready to be merged.

While I managed to cope with it reasonably well, the work I had on my TODO list for the project stalled. One of the things I finally want to move forward with is making runltp-ng the official LTP test runner, but there is much more. So the obvious question is how to make things better, and one of the answers we came up with was automation.

What we implemented for LTP is a 'make check' target that runs different tools on the test source code and is supposed to be used before a patch is sent for review. For C code we use the well known checkpatch.pl and a custom sparse-based checker to identify the most common problems. The tooling is set up automatically the first time you call 'make check', and we tried to make it as effortless as possible, so that there is no reason not to use it during development. We also use checkbashisms for shell code, and hopefully the number of checks will grow over time. Hopefully this eliminates on average at least one revision per patchset, which would amount to hundreds of patches over our development cycle.

Ideally this will fix the problem for a while and we will make more effective use of our resources, but eventually we will get to a point where more maintainers and reviewers are needed, which is a problem that is hard to solve without your help.

 

from Konstantin Ryabitsev

Linux development depends on the ability to send and receive emails. Unfortunately, it is common for corporate gateways to post-process both outgoing and incoming messages for the purposes of adding lengthy legal disclaimers or performing anti-phishing link quarantines, both of which interfere with regular patch flow.

While it is possible to configure free options like Gmail to work well with sending and receiving patches, Google services may not be available in all geographical locales — or there may be other reasons why someone may prefer not to have a gmail.com address.

For this reason, we have partnered with Migadu to provide a mail hosting service under the linux.dev domain. If you're a Linux subsystem maintainer or reviewer and you need a mailbox to do your work, we should be able to help you out.

We hope to expand the service to include other kernel developers in the near future.

Please see our service documentation page for full details.

 

from Konstantin Ryabitsev

asciicast

One of the side-effects of the recent UMN Affair has been renewed scrutiny of the kernel development process that continues to rely on patches sent via email. This prompted me to revisit my end-to-end patch attestation work and get it to the point where I consider it to be both stable for day-to-day work and satisfactory from the point of view of underlying security and usability.

Goals of the project

These were the goals at the outset:

  • make it opt-in and don't interfere with existing tooling and workflows
  • be as behind-the-scenes and non-intrusive as possible
  • be simple and easy to understand, explain, and audit

I believe the proposed solution hits all of these points:

  • the implementation is very similar to DKIM and uses email headers for cryptographic attestation of all relevant content (“From:” and “Subject:” headers, plus the message body). Any existing tooling will simply ignore the unrecognized header.
  • cryptographic signing is done via a git hook invoked automatically by git-send-email (sendemail-validate), so it only needs to be set up once and doesn't require remembering to do any extra steps
  • the library doing the signing is only a few hundred lines of Python code and reuses the DKIM standard for most of its logic

Introducing patatt

The library is called “patatt” (for Patch Attestation, obviously), and can be installed from PyPI:

  • pip install --user patatt

It only requires PyNaCl (Python libsodium bindings), git, and gnupg (if signing with a PGP key). The detailed principles of operation are described on the PyPI project page, so I will not duplicate them here.

The screencast linked above shows patatt in action from the point of view of a regular patch contributor.

If you have an hour or so, you can also watch my presentation to the Digital Identity Attestation WG:

Youtube video

Support in b4

Patatt is fully supported starting with version 0.7.0 of b4 — here it is in action verifying a patch from Greg Kroah-Hartman:

$ b4 am 20210527101426.3283214-1-gregkh@linuxfoundation.org
[...]
---
  ✓ [PATCH] USB: gr_udc: remove dentry storage for debugfs file
  ---
  ✓ Signed: openpgp/gregkh@linuxfoundation.org
  ✓ Signed: DKIM/linuxfoundation.org
---
Total patches: 1
---
[...]

As you see above, b4 verified that the DKIM header was valid and that the PGP signature from Greg Kroah-Hartman passed as well, giving double assurance that the message was not modified between leaving Greg's computer and being checked on the end-system of the person retrieving the patch.

Keyring management

Patatt (and b4) also introduce the idea of tracking contributor public keys in the git repository itself. It may sound silly — how can the repository itself be a source of trusted keys? However, it actually makes a lot of sense and isn't any worse than any other currently used public key distribution mechanism:

  • git is already decentralized and can be mirrored to multiple locations, avoiding any single points of failure
  • all contents are already versioned and key additions/removals can be audited and “git blame’d”
  • git commits themselves can be cryptographically signed, which allows a small subset of developers to act as “trusted introducers” to many other contributors (mimicking the “keysigning” process)

Contributor public keys can be added either to the main branch itself, along with the project codebase (perhaps in the .keys toplevel subdirectory), or they can be managed in a dedicated ref, such as refs/meta/keyring. The latter can be especially useful for large projects where patches are collected by subsystem maintainers and then submitted as pull requests for inclusion into the mainline repository. Keeping the keyring in its own ref ensures that it stays out of the way of regular development work but is still centrally managed and tracked.

Further work

I am hoping that people will now start using cryptographic attestation for the patches they send; however, I certainly can't force anyone's hand. If you are a kernel subsystem maintainer or a core developer of some other project that relies on mailed-in patches for the submission and code review process, I hope that you will give this a try.

If you have any comments, concerns, or improvement suggestions, please reach out to the tools list.

 

from metan's blog

We have reached an important milestone with the latest LTP release – the number of testcases written with the new test library finally outnumbers the number of old library tests. Which is a nice opportunity for a small celebration, and also for looking back a bit into the history and trying to summarize what has happened in LTP over the last 10 years.

I joined LTP development a bit more than 10 years ago, in 2009. At that point we were really struggling with the basics. The build system was a collection of random Makefiles and the build often failed for very random reasons. There were pieces of shell code embedded in Makefiles, for instance to check for devel libraries, manually written shell loops over directories that prevented parallel builds, and all kinds of ugly mess like that. This changed at the end of 2009 when the build system was rewritten; with that, LTP supported proper parallel builds, started to use autoconf for feature checks, etc. We also switched from CVS to git at the end of 2009, which was a huge improvement as well.

However, that was only a start. LTP was easier to build and git was nicer to use, but we still had tests that were mostly failing, and a fair amount of the tests produced nothing but noise. There were also tests that didn't produce real results and always passed, but it's really hard to count these unless you review the code carefully, one testcase at a time, which is part of what we still do even after ten years of work.

From that point on it took us a few years to clear the worst parts and deal with most of the troublemakers, and the results from LTP gradually became greener and more stable as well. We are far from being bugless; there are still parts covered in dust that are waiting for attention, but we are getting there. For instance, in this release we finally got a nice cgroup test library that simplifies cgroup testcases, and we should fix the rest of the cgroup tests, ideally before the next release. Also, I'm quite happy that the manpower put into LTP development is slowly increasing; however, compared to the effort put into kernel development, the situation is still dire. Back then I used to tell people that the amount of work put into Linux automated testing was a bad joke. These days it's much better, but still hardly optimal, as we struggle to keep up with covering the newly introduced kernel features.

At the start I mentioned the new test library, so I should explain how we came to it and why it's superior to what we had previously. First of all, there was a test library in LTP that could be traced back to SGI and was released under the GPL more than 20 years ago; it's probably even older than that. The main problem with that library was that it was cumbersome to use. There were some API reporting functions, but they were neither thread safe nor usable in child processes; in these two cases you had to propagate test results manually, which was prone to errors. Even worse, since the test implemented the main() function, you had to return the overall result manually as well, and forgetting to do so was one of the common mistakes. At the point where most of the broken tests were finally fixed, I had a bit of time to invest into the future, and after seven years of dealing with common test mistakes I had a pretty good picture of what a test library should look like and what should be avoided. Hence I sat down and designed a library that is nice and fun to use and makes tests much easier to write. This library still evolves over time; the version introduced in 2016 wasn't as nice as it is now, but even when it was introduced it included the most important bits, for instance thread safe and automatic test result propagation, or synchronization primitives that can be used even to synchronize shell code against a C binary.
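
For a taste of what the new library looks like, here is a minimal sketch in its style (simplified; the names follow the current LTP API, but this is an illustration rather than a real testcase). Note that the result reported from the child propagates automatically:

    #include "tst_test.h"

    static void run(void)
    {
        if (!SAFE_FORK()) {
            /* The library collects this result from the child for us;
             * no manual propagation and no main() to get wrong. */
            tst_res(TPASS, "result reported safely from a child");
            return;
        }
    }

    static struct tst_test test = {
        .test_all = run,
        .forks_child = 1,    /* required for SAFE_FORK() */
    };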

The old library is still present in LTP, since we are only a bit more than halfway done with converting the tests, which is no easy task given that more than 600 tests are still to go. And as we convert the tests, we also review them to ensure that the assertions are correct and the coverage isn't lacking. We still find tests that fail to report results from time to time even now, which only shows how hard it is to eliminate mistakes like this and why preventing them in the first place is the right thing to do. If things go well, the rest of the tests should be converted in about 5 years, and LTP should finally be free of the historical baggage. At that point I guess I will throw a small celebration, since that would conclude a huge task I've been working on for a decade now.

 

from Jakub Kicinski

In Linux 5.13 ethtool gains an interface for querying IEEE and IETF statistics. This removes the need to parse vendor specific strings in ethtool -S.

Status quo

Linux has two sources of NIC statistics, the common interface stats (which show up in ifconfig, ip link, sysfs and a few other places) and ethtool -S. The former – the common interface stats – are a mix of basic info (packets, bytes, drops, errors in each direction) and a handful of lower level stats like CRC errors, framing errors, collisions or FIFO errors. Many of these statistics have become either irrelevant (collisions) or semantically unclear (FIFO errors) on modern NICs.

This is why deployments increasingly depend on ethtool -S statistics for error tracking. ethtool -S is a free form list of stats provided by the driver. It started out as a place for drivers to report custom, implementation specific stats, but ended up also serving as a reporting place for new statistics as the networking standards developed.

Sadly there is no commonality in how vendors name their ethtool statistics. The spelling and abbreviation of IEEE stats always differ, sometimes the names chosen do not resemble the standard names at all (reportedly because vendors consider those names “too confusing” for the users). This forces infrastructure teams to maintain translations and custom per-vendor logic to scrape ethtool -S output.

What changed

Starting with Linux 5.6, Michal Kubecek has been progressively porting ethtool from ioctls to a more structured and extensible netlink interface. Thanks to that we can now augment the old commands to carry statistics. When the user specifies -I | --include-statistics on the command line (or the appropriate flag in netlink), the kernel will include relevant statistics in its response, e.g. for flow control:

 # ethtool -I -a eth0
 Pause parameters for eth0:
 Autonegotiate:    off
 RX:        off
 TX:        on
 Statistics:
   tx_pause_frames: 25545561
   rx_pause_frames: 0

General statistics such as PHY and MAC counters are now available via ethtool -S under standards-based names through a new --groups switch, e.g.:

 # ethtool -S eth0 --groups eth-mac
 Standard stats for eth0:
 eth-mac-FramesTransmittedOK: 902623288966
 eth-mac-FramesReceivedOK: 28727667047
 eth-mac-FrameCheckSequenceErrors: 1
 eth-mac-AlignmentErrors: 0
 eth-mac-OutOfRangeLengthField: 0

Each of the commands supports JSON-formatted output for ease of parsing (--json).

So little, so late

Admittedly, the new interface is quite basic. It mostly includes statistics provided in IEEE or IETF standards, while NICs may report more interesting data. There is also no metadata about the “freshness” of the stats, and no filtering built into the interface.

The starting point is based on fulfilling immediate needs. We hope the interfaces will be extended as needed. Statistics can be made arbitrarily complex, so after a couple of false starts with complex interfaces we decided to let the use cases drive the interface.

It’s also very useful to lean on the standards for clear definition of the semantics. Going forward we can work with vendors on codifying the definitions of other counters they have.

List of currently supported stats

IEEE 802.3 attributes:

 30.3.2.1.5 aSymbolErrorDuringCarrier
 30.3.1.1.2 aFramesTransmittedOK
 30.3.1.1.3 aSingleCollisionFrames
 30.3.1.1.4 aMultipleCollisionFrames
 30.3.1.1.5 aFramesReceivedOK
 30.3.1.1.6 aFrameCheckSequenceErrors
 30.3.1.1.7 aAlignmentErrors
 30.3.1.1.8 aOctetsTransmittedOK
 30.3.1.1.9 aFramesWithDeferredXmissions
 30.3.1.1.10 aLateCollisions
 30.3.1.1.11 aFramesAbortedDueToXSColls
 30.3.1.1.12 aFramesLostDueToIntMACXmitError
 30.3.1.1.13 aCarrierSenseErrors
 30.3.1.1.14 aOctetsReceivedOK
 30.3.1.1.15 aFramesLostDueToIntMACRcvError
 
 30.3.1.1.18 aMulticastFramesXmittedOK
 30.3.1.1.19 aBroadcastFramesXmittedOK
 30.3.1.1.20 aFramesWithExcessiveDeferral
 30.3.1.1.21 aMulticastFramesReceivedOK
 30.3.1.1.22 aBroadcastFramesReceivedOK
 30.3.1.1.23 aInRangeLengthErrors
 30.3.1.1.24 aOutOfRangeLengthField
 30.3.1.1.25 aFrameTooLongErrors

 30.3.3.3 aMACControlFramesTransmitted
 30.3.3.4 aMACControlFramesReceived
 30.3.3.5 aUnsupportedOpcodesReceived
 
 30.3.4.2 aPAUSEMACCtrlFramesTransmitted
 30.3.4.3 aPAUSEMACCtrlFramesReceived

 30.5.1.1.17 aFECCorrectedBlocks
 30.5.1.1.18 aFECUncorrectableBlocks

IETF RMON (RFC 2819)

 etherStatsUndersizePkts
 etherStatsOversizePkts
 etherStatsFragments
 etherStatsJabbers

 etherStatsPkts64Octets
 etherStatsPkts65to127Octets
 etherStatsPkts128to255Octets
 etherStatsPkts256to511Octets
 etherStatsPkts512to1023Octets
 etherStatsPkts1024to1518Octets
 (incl. further stats for jumbo MTUs)

Kernel side changes: https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git/commit/?id=8203c7ce4ef2840929d38b447b4ccd384727f92b

 

from linusw

This is a retrospective of my work on KASan, the Kernel Address Sanitizer, for the ARM32 platform. The name is a pun on the diving decompression stop, which is something you perform after going down below the surface to avoid decompression sickness.

Where It All Began

The AddressSanitizer (ASan) is a really clever invention by Google, hats off. It is one of those development tools that, like git, just take on the world in a short time. It was invented by some smart Russians, especially Андрей Коновалов (Andrey Konovalov) and Дмитрий Вьюков (Dmitry Vyukov). It appears to be not just funded by Google but also part of PhD thesis work.

The idea with ASan is to help ensure memory safety by intercepting all memory accesses through compiler instrumentation, and consequently providing “ASan splats” (runtime problem detections) while stressing the code. Code instrumented with ASan gets significantly slower than normal and uses up a bunch of memory for “shadowing” (I will explain this), making it a pure development tool: it is not intended to be enabled on production systems.

The way that ASan instruments code is by turning every load and store into a call to symbols like these:

__asan_load1(unsigned long addr);
__asan_store1(unsigned long addr);
__asan_load2(unsigned long addr);
__asan_store2(unsigned long addr);
__asan_load4(unsigned long addr);
__asan_store4(unsigned long addr);
__asan_load8(unsigned long addr);
__asan_store8(unsigned long addr);
__asan_load16(unsigned long addr);
__asan_store16(unsigned long addr);

As you can guess, these calls load or store 1, 2, 4, 8 or 16 bytes of memory at the virtual address addr, and reflect how the compiler thinks the compiled code (usually C) accesses memory. ASan intercepts all reads and writes by placing itself between the executing program and any memory management. The above symbols can be implemented by any runtime environment, and the address reflects what that runtime environment assumes about the (usually virtual) memory where the program will execute.

You will instrument your code with ASan, toss heavy test workloads on the code, and see if something breaks. If something breaks, you go and investigate the breakage to find the problem. The problem will often be one or another instance of buffer overflow, out-of-bounds array or string access, or the use of a dangling pointer such as use-after-free. These problems are a side effect of using the C programming language.

When resolving the mentioned load/store symbols, ASan instrumentation is based on shadow memory, and this is in turn based on the idea that one byte “shadows” 8 bytes of memory, so you allocate 1/8 the amount of memory that your instrumented program will use and shadow it at some other memory location using an offset calculation like this:

shadow = (address >> 3) + offset

The shadow memory is located at offset, and if our instrumented memory is N bytes then we need to allocate N/8 = N >> 3 bytes to be used as shadow memory. Notice that I say instrumented memory, not code: ASan shadows not only the actual compiled code but mainly and most importantly any allocations and referenced pointers the code maintains. Also the DATA (constants) and BSS (global variables) parts of the executable image are shadowed. To achieve this, the userspace links to a special malloc() implementation that overrides the default and manages all of this behind the scenes. One aspect of it is that malloc() will of course return chunks of memory naturally aligned to 8, so that the shadow memory will be on an even byte boundary.

Figure: The ASan shadow memory shadows the memory you're interested in inspecting for memory safety.

The helper library will allocate shadow memory as part of the runtime, and use it to shadow the code, data and runtime allocations of the program. Some will be allocated up front as the program is started, some will be allocated to shadow allocations at runtime, such as dynamically allocated buffers or anything else you malloc().

The error detection was based on the observation that a shadowing byte with each bit representing an out-of-bounds access error can have a “no error” state (0x00) and 8 error states, in total 9 states. Later on a more elaborate scheme was adopted. Values 1..7 indicate how many of the bytes are valid for access (if you malloc() just 5 bytes then it will be 5) and then there are magic bytes for different conditions.

When a piece of memory is legally allocated and accessed, the corresponding bits are zeroed. Uninitialized memory is “poisoned”, i.e. set to a completely illegal value != 0. Further, SLAB allocations are padded with “red zones”, poisoning the memory in front of and behind every legal allocation. When accessing a byte in memory, it is easy to verify that the access is legal: is the shadow byte == 0? That means all 8 bytes can be freely accessed and we can quickly proceed. Else we need a closer look. Values 1 through 7 mean that only the first 1 through 7 bytes are valid for access (partly addressable), so we check that; any other value means uh oh.

  • 0xFA and 0xFB means we have hit a heap left/right redzone so an out-of-bounds access has happened
  • 0xFD means access to a free:ed heap region, so use-after-free
  • etc

Decoding the hex values gives a clear insight into what access violation we should be looking for.
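
Putting the scheme into code, the fast-path check for a one-byte access looks roughly like the sketch below. This is a simplified illustration of the 8-to-1 mapping described above, not the actual library or kernel implementation, and KASAN_SHADOW_OFFSET here merely stands in for the configured offset:

    #include <stdbool.h>

    /* Stand-in for the configured shadow offset (illustration only). */
    extern unsigned long KASAN_SHADOW_OFFSET;

    static bool byte_accessible(unsigned long addr)
    {
        signed char shadow =
            *(signed char *)((addr >> 3) + KASAN_SHADOW_OFFSET);

        if (shadow == 0)
            return true;   /* the whole 8-byte granule is addressable */
        if (shadow < 0)
            return false;  /* poison: redzone, use-after-free, ... */

        /* 1..7: only the first 'shadow' bytes of the granule are valid,
         * so the access is fine only if it falls below that mark. */
        return (addr & 7) < (unsigned long)shadow;
    }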

To be fair, the 8-to-1 mapping is not compulsory. This should be pretty obvious. Other schemes, such as mapping even 32 bytes to one shadow byte, have been discussed for memory-constrained systems.

All memory access calls (such as any instance of dereferencing a pointer) and all functions in the library, such as all the string functions, are patched to check for these conditions. It's easy when you have compiler instrumentation. We check it all. It is not very fast, but it's bearable for testing.

Researchers in one camp argue that we should all be writing software (including operating systems) in the programming language Rust in order to avoid the problems ASan is trying to solve altogether. This is a good point, but rewriting large existing software such as the Linux kernel in Rust is not seen as realistic. Thus we paper over the problem instead of using the silver bullet. Hybrid approaches to using Rust in kernel development are being discussed but so far not much has materialized.

KASan Arrives

The AddressSanitizer (ASan) was written with userspace in mind, and the userspace project is very much alive.

As the mechanism used by ASan was quite simple, and the compilers were already patched to handle shadow memory, the Kernel Address Sanitizer (KASan) was a natural step. At this point (2020) the original authors seem to spend a lot of time with the kernel, so the kernel hardening project has likely outgrown the userspace counterpart.

The magic values that KASan assigns to shadow memory are different:

  • 0xFA means the memory has been free:ed so accessing it means use-after-free.
  • 0xFB is a free:ed managed resources (devm_* accessors) in the Linux kernel.
  • 0xFC and 0xFE mean we accessed a kmalloc() redzone, indicating an out-of-bounds access.

This is why these values often occur in KASan splats. The full list of specials (not very many) can be found in mm/kasan/kasan.h.

The crucial piece to create KASan was a compiler flag to indicate where to shadow the kernel memory: when the kernel Image is linked, addresses are resolved to absolute virtual memory locations, and naturally all of these, plus the area where kernel allocates memory (SLABs) at runtime need to be shadowed. As can be seen in the kernel Makefile.kasan include, this boils down to passing the flags -fsanitize=kernel-address and -asan-mapping-offset=$(KASAN_SHADOW_OFFSET) when linking the kernel.

The kernel already had some related tools, notably kmemcheck, which could detect some cases of use-after-free and references to uninitialized memory. It was based on a slower mechanism, so KASan has effectively superseded it; kmemcheck has since been removed.

KASan was added to the kernel in a commit dated February 2015 along with support for the x86_64 architecture.

To exercise the kernel and find interesting bugs, the inventors were often using syzkaller, a tool similar to the older Trinity: it bombards the kernel with fuzzed system calls to try to provoke undefined and undesired behaviours, yielding KASan splats and revealing vulnerabilities.

Since we are the kernel, we need to explicitly assign memory for shadowing and perform some maneuvers that userspace either cannot do or does not need to do:

  • During early initialization of the kernel we point all shadow memory to a single page of just zeroes, making all accesses seem fine until we have proper memory management set up. Userspace programs do not need this phase, as “someone else” (the C standard library) handles all memory setup for them.
  • The shadow for memory areas which are just big chunks of code and data can all point to a single physical page of poison. In virtual memory it might look like kilobytes and megabytes of poison bytes, but it all points to the same 4 KB physical page.
  • We selectively de-instrument code as well: KASan itself, the memory manager per se, the code that patches the kernel for ftrace, or the code that unwinds the stack pointer for a kernel splat clearly cannot be instrumented with KASan. It is part of the design of these facilities to poke around at seemingly random locations in memory; that is not a bug. Since KASan was added, all of these sites in the generic kernel code have been de-instrumented, more or less (see the sketch just below).
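As an illustration of the per-function granularity: the kernel has the __no_sanitize_address annotation for exactly this (whole files can also be excluded from instrumentation in their Makefile). The function below is made up; only the attribute is real:

#include <linux/compiler.h>
#include <linux/types.h>

/* Hypothetical text-patching helper that must poke memory behind
 * KASan's back; the attribute removes all instrumentation from it. */
static void __no_sanitize_address patch_text(void *addr, u32 insn)
{
	*(u32 *)addr = insn;	/* deliberate raw access, invisible to KASan */
}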

Once these generic kernel instrumentations were in place, other architectures could be amended with KASan support, with ARM64 following x86 soon in the autumn of 2015.

Some per-architecture code, usually found in arch/xxxx/mm/kasan_init.c, is needed for KASan. What this code does is initialize the shadow memory during early setup of the virtual memory, first pointing it at a “zero page”, and later on populate all the shadow memory with poisoned shadow pages.

The shadow memory is special and needs to be populated by accessing the very lowest layer of the virtual memory abstraction: we manipulate the page tables from top to bottom (pgd, p4d, pud, pmd, pte) to make sure that the region at $(KASAN_SHADOW_OFFSET) points to memory that has valid page table entries.

We need to use the kernel's memblock early memory manager to set up memory to hold the page tables themselves in some cases. The memblock memory manager also provides us with a list of all the kernel RAM: we loop over it using for_each_mem_range() and populate the shadow memory for each range. As mentioned, we first point all shadows to a zero page and later on to proper KASan shadow memory, and then KASan kicks into action. A sketch of that loop follows.
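Roughly, the population loop has the following shape; for_each_mem_range() and kasan_mem_to_shadow() are the real kernel interfaces, while create_shadow_mapping() stands in for the arch-specific page table walk described above:

#include <linux/memblock.h>
#include <linux/kasan.h>

/* Hedged sketch of an arch kasan_init() population loop, not the real code. */
void __init kasan_populate_shadow(void)
{
	phys_addr_t pa_start, pa_end;
	u64 i;

	/* memblock hands us every bank of kernel RAM. */
	for_each_mem_range(i, &pa_start, &pa_end) {
		void *shadow_start = kasan_mem_to_shadow(__va(pa_start));
		void *shadow_end = kasan_mem_to_shadow(__va(pa_end));

		/* Hypothetical helper: walk pgd/p4d/pud/pmd/pte and back
		 * the range with memblock-allocated, poisoned pages. */
		create_shadow_mapping(shadow_start, shadow_end);
	}
}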

A special case happens when moving from the “zero page” KASan memory to proper shadow memory: we would risk running kernel threads into partially initialized shadow memory and pull the ground out from under ourselves. Not good. Therefore the global page table for the entire kernel (the one that has all shadow memory pointing to a zero page) is copied and used during this phase. It is then finally replaced with the proper KASan-instrumented page table, with pointers to the shadow memory, in a single atomic operation.

Further, all optimized memory manipulation functions from the standard library need to be patched, as these often have assembly-optimized versions in the architecture. This concerns especially memcpy(), memmove() and memset(). While the quick optimized versions are nice for production systems, we replace them with open-coded variants that do proper memory accesses in C and therefore will be intercepted by KASan.
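The interception pattern itself is pleasantly small; this is simplified from the kernel's mm/kasan code (helper names have shifted between kernel versions, so treat it as approximate). The assembly version stays reachable as __memset() while the plain symbol validates the range first:

void *memset(void *addr, int c, size_t len)
{
	/* Validate the whole destination range against shadow memory
	 * before handing off to the fast assembly implementation. */
	if (!check_memory_region((unsigned long)addr, len, true, _RET_IP_))
		return NULL;

	return __memset(addr, c, len);
}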

All architectures follow this pattern, more or less. ARM64 supports hardware tags, which essentially means that the architecture supports hardware acceleration of KASan. As this is pretty fast, there is a discussion about using KASan even on production systems to capture problems seen very seldom.

KASan on ARM32

Then there was the attempt to provide KASan for ARM32.

The very first posting of KASan in 2014 was actually targeting x86 and ARM32 and was already working-kind-of-prototype-ish on ARM32. This did not proceed. The main reason was that on ARM32 loadable modules are loaded into a designated virtual memory area rather than into the kernel “vmalloc area”, which is the main area used for memory allocations and what most architectures use for modules as well. So when trying to use loadable modules the code would crash, as this memory was not shadowed.

The developers tried to create the option to move modules into the vmalloc area and enable this by default when using KASan to work around this limitation.

The special module area however exists for a reason. Since it was placed in close proximity to the main kernel TEXT segment, the code could be accessed using short jumps rather than long jumps: no need to load the whole 32-bit program counter anew whenever a piece of code loaded from a module was accessed. It made code in modules as quick as normal compiled-in kernel code, +/– cache effects. This provided serious performance benefits.

As a result KASan support for ARM was dropped from the initial KASan proposal and the scope was limited to x86, then followed by ARM64. “We will look into this later”.

In the spring of 2015 I started looking into KASan and was testing the patches on ARM64 for Linaro. In June I tried to get KASan working on ARM32. Andrey Ryabinin pointed out that he actually had KASan running on ARM32. After some iterations we got it working on some ARM32 platforms and I was successfully stressing it a bit using the Trinity syscall fuzzer. This version solved the problem of shadowing the loadable modules by simply shadowing all that memory as well.

The central problem with running KASan on a 32-bit platform as opposed to a 64-bit platform was that the simplest approach used up 1/8 of the whole address space (on a 32-bit machine that is 4 GB / 8 = 512 MB of virtual addresses), which was not a problem for 64-bit platforms that have ample virtual address space available. (Notice that the amount of physical memory doesn't really matter, as KASan will use the trick to just point large chunks of virtual memory to a single physical poison page.) On 32-bit platforms this approach ate our limited address space for lunch.

We were setting aside several statically assigned allocations in the virtual address space, so we needed to make sure that we only shadow the addresses actually used by the kernel. We would not need to shadow the addresses used by userspace, and the shadow memory virtual range requirement could thus be shrunk from 512 MB to 130 MB for the traditional 3/1 GB userspace/kernel virtual address split used on ARM32: the 1 GB of kernel addresses plus the 16 MB module area shadow down to (1024 + 16) / 8 = 130 MB. (To understand this split, read my article How the ARM32 Kernel Starts which tries to tell the story.)

Sleeping Beauty

This more fine-grained approach to assigning shadow memory created some devil-in-the-details bugs that do not come out if you shadow the whole virtual address space, as the 64-bit platforms do.

A random access to some memory that should not be poked (and thus lacking shadow memory) will lead to a crash. While QEMU and some hardware were certainly working, there were some real hardware platforms that just would not boot. Things were getting tedious.

KASan development for ARM32 ground to a halt because we were unable to hash out the bugs. The initial patches from Andrey started changing hands, and these out-of-tree patches were used by some vendors to test code on some hardware.

Personally, I had other assignments and could not take over and develop KASan at this point. I'm not saying that I was necessarily a good candidate at the time either, I was just testing and tinkering with KASan because ARM32 vendors had a generic interest in the technology.

As a result, KASan for ARM32 was pending out-of-tree for almost 5 years. In 2017 Abbot Liu was working on it and fixed up the support for LPAE (large physical address extension) and in 2019 Florian Fainelli picked up where Abbot left off.

Some things were getting fixed along the road, but what was needed was some focused attention and these people had other things on their plate as well.

Finally Fixing the Bugs

In April 2020 I decided to pick up the patches and have a go at it. I sloppily named my first iteration “v2” while it was something like v7.

I quickly got support from two key people: Florian Fainelli and Ard Biesheuvel. Florian had some systems that showed the same odd behaviour of just not working as my pet Qualcomm APQ8060 DragonBoard, which I had been using all along for testing. Ard was using the patches for developing and debugging things like EFI and KASLR.

During successive iterations we managed to find and patch the remaining bugs, one by one:

  • A hard-coded bitmask assuming the thread size order to be 1 (i.e. 8192 bytes, two pages) on ARMv4 and ARMv5 silicon made the kernel crash when entering userspace. KASan increases the thread size order to make space for redzones before and after allocations, so it needed more space. After reading the assembly one line at a time I finally figured this out and patched it.
  • The code was switching the MMU table by simply altering the TTBR0 register. This worked on some machines, especially ARMv7 silicon, but the right way to do it is to use the per-CPU macro cpu_switch_mm(). The macro looks intuitive but is an ARM32-ism, which is why the original KASan authors didn't know about it; it accounts for tiny differences between different ARM cores, some even custom to certain vendors.
  • Something fishy was going on with the attached device tree. It turned out, after much debugging, that the attached devicetree could end up in memory outside of the kernel's 1:1 physical-to-virtual mapping. The page table entries that would have translated the physical memory area where the device tree was stored were wiped clean, yielding a page fault. The problem was not caused by KASan per se: it was a result of the kernel getting over a certain size, and all the instrumentation added by KASan makes it bigger to the point that it revealed the bug. We were en route to fixing a bug related to big compressed kernel images. I developed debugging code specifically to find this bug and then made a patch making sure not to wipe that part of the mapping. (This post gives a detailed explanation of the problem.) Ard quickly came up with a better fix: let's move the device tree to a determined place in the fixed mappings and handle it as if it were ROM.

These were the major roadblocks. Fixing these bugs created new bugs which we also fixed. Ard and Florian mopped up the fallout.

In the middle of development, five level page tables were introduced and Mike Rapoport made some generic improvements and cleanup to the core memory management code, so I had to account for these changes as well, effectively rewriting the KASan ARM32 shadow memory initialization code. At one point I also broke the LPAE support and had to repair it.

Eventually the v16 patch set was finalized in October 2020 and submitted to Russell King's patch tracker, and he merged it for Linux v5.11-rc1.

Retrospect

In retrospect, three things came out nicely in the design of KASan for ARM32:

  • We do not shadow or intercept highmem allocations, which is nice because we want to get rid of highmem altogether.
  • We do not shadow the userspace memory, which is nice because we want to move userspace to its own address space altogether.
  • Personally I finally got a detailed idea of how the ARM32 kernel decompresses and starts, and the abstract concepts of highmem, lowmem, and the rest of those wild animals. I have written three different articles on this blog as a result, with ideas for even more of them. By explaining how things work to others I realize what I can't explain and as a result I go and research it.

Andrey and Dmitry have since worked not just on ASan and KASan but also on what was initially called the KernelThreadSanitizer (KTSAN) but was eventually merged under the name KernelConcurrencySanitizer (KCSAN). The idea is again to use shadow memory, but now for concurrency debugging at runtime. I do not know more than this.

 
Read more...

from Jakub Kicinski

Recent months saw a lot of changes in the venerable NAPI polling mechanism, which warrants a write up.

NAPI primer

NAPI (New API) is so old it could buy alcohol in the US – which is why I will not attempt a full history lesson. It is, however, worth clearing up one major misconception. The basic flow of NAPI operation does not involve any explicit processing delays.

Before we even talk about how NAPI works, however, we need a sidebar on software interrupts.

Software interrupts (softirq) or bottom halves are a kernel concept which helps decrease interrupt service latency. Because normal interrupts don't nest in Linux, the system can't service any new interrupt while it's already processing one. Therefore doing a lot of work directly in an IRQ handler is a bad idea. softirqs are a form of processing which allows the IRQ handler to schedule a function to run as soon as the IRQ handler exits. This adds a tier of “low latency processing” which does not block hardware interrupts. If software interrupts start consuming a lot of cycles, however, the kernel will wake up the ksoftirqd thread to take over the I/O portion of the processing. This helps back-pressure the I/O, and makes sure random threads don't get their scheduler slice depleted by softirq work.
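As a concrete anchor, networking registers its handlers with the softirq machinery exactly this way in net/core/dev.c; net_rx_action() is the function that will end up running the NAPI polling described next:

/* From the kernel's net_dev_init(): hook networking into softirq. */
open_softirq(NET_TX_SOFTIRQ, net_tx_action);
open_softirq(NET_RX_SOFTIRQ, net_rx_action);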

Now that we understand softirqs, this is what NAPI does:

  1. Device interrupt fires
  2. Interrupt handler masks the individual NIC IRQ which has fired (modern NICs mask their IRQs automatically)
  3. Interrupt handler “schedules” NAPI in softirq
  4. Interrupt handler exits
  5. softirq runs NAPI callback immediately (or less often in ksoftirqd)
  6. At the end of processing NAPI re-enables the appropriate NIC IRQ again

As you can see there is no explicit delay from IRQ firing to NAPI, or extra batching, or re-polling built in.
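In driver terms, this flow maps onto two small callbacks. The sketch below uses the real NAPI API (napi_schedule(), napi_complete_done()); the mydrv_* names and the ring structure are made up for illustration:

/* Steps 1-4: the hard IRQ handler just kicks NAPI and returns. */
static irqreturn_t mydrv_irq(int irq, void *data)
{
	struct mydrv_ring *ring = data;

	/* The NIC has already masked its own IRQ at this point (step 2). */
	napi_schedule(&ring->napi);
	return IRQ_HANDLED;
}

/* Steps 5-6: the softirq (or ksoftirqd) runs the poll callback. */
static int mydrv_poll(struct napi_struct *napi, int budget)
{
	struct mydrv_ring *ring = container_of(napi, struct mydrv_ring, napi);
	int work_done = mydrv_rx_clean(ring, budget);	/* up to budget packets */

	if (work_done < budget) {
		/* All pending work done: leave polling and unmask the IRQ. */
		if (napi_complete_done(napi, work_done))
			mydrv_irq_enable(ring);
	}
	return work_done;
}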

Problem

NAPI was designed several years before Intel released its first multi-core CPU. Today systems have tens of CPUs and all of the cores can have dedicated networking queues. Experiments show that separating network processing from application processing yields better application performance. That said, manual tuning and CPU allocation for every workload is tedious and often not worth the effort.

In terms of raw compute throughput having many cores service interrupts means more interrupts (less batching) and more cache pollution. Interrupts are also bad for application latency. Application workers are periodically stopped to service networking traffic. It would be much better to let the application finish its calculations and then service I/O only once it needs more data.

Last but not least, NAPI semi-randomly gets kicked out into the ksoftirqd thread, which degrades network latency.

Busy polling

Busy polling is a kernel feature which was originally intended for low latency processing. Whenever an application ran out of work it could check if the NIC had any packets to service, thus circumventing the interrupt coalescing delay.

Recent work by Bjorn Topel reused the concept to avoid application interruptions altogether. An application can now make a “promise” to the kernel that it will periodically check for new packets itself (the kernel sets a timer/alarm to make sure the application doesn't break that promise). The application is expected to use busy polling to process packets, replacing the interrupt driven parts of NAPI.
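From userspace this is driven with socket options. Assuming a kernel that carries this work (SO_PREFER_BUSY_POLL and SO_BUSY_POLL_BUDGET were introduced alongside it; SO_BUSY_POLL is older), opting a socket in looks roughly like:

#include <sys/socket.h>

static int enable_busy_poll(int fd)
{
	int on = 1;
	int usecs = 100;	/* spin up to this long per busy poll */
	int budget = 64;	/* illustrative per-poll packet budget */

	if (setsockopt(fd, SOL_SOCKET, SO_PREFER_BUSY_POLL, &on, sizeof(on)))
		return -1;
	if (setsockopt(fd, SOL_SOCKET, SO_BUSY_POLL, &usecs, sizeof(usecs)))
		return -1;
	return setsockopt(fd, SOL_SOCKET, SO_BUSY_POLL_BUDGET, &budget,
			  sizeof(budget));
}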

For example the usual timeline of NAPI processing would look something like:

EVENTS            irq coalescing delay (e.g. 50us)
                  <--->
packet arrival | p  p   ppp   p   pp  p  p  p  pp   p  ppp  p
           IRQ |       X      X        X      X     X     X    
-----------------------------------------------------------------
CPU USE
          NAPI |       NN     NNN      N      N     NN    N
   application |AAAAAAA  AAAAA   AAA AA AAAAAA AAAAA   AA  AAA
                < process req 1    > <process req 2     > <proc..

[Forgive the rough diagram, one space is 10us, assume app needs 150us per req, A is time used by app, N by NAPI.]

With new busy polling we want to achieve this:

EVENTS
packet arrival | p  p   ppp   p   pp  p  p  p  pp   p  ppp  p
           IRQ |
-----------------------------------------------------------
CPU USE
          NAPI |               NNNN                  NNN
   application |AAAAAAAAAAAAAAA^     AAAAAAAAAAAAAAAA^    AA 
                < process req 1>     <process req 2 >     <proc..

Here the application does not get interrupted. Once it's done with a request it asks the kernel to process packets. This allows the app to improve the latency by the amount of time NAPI processing would steal from request processing.

The two caveats of this approach are:

  • application processing has to be of similar granularity to NAPI processing (the typical cycle shouldn't be longer than 200us)
  • the application itself needs to be designed with CPU mapping in mind, or, to put it simply, the app architecture needs to follow the thread-per-core design – since NAPI instances are mapped to cores, there needs to be a thread responsible for polling each NAPI

Threaded NAPI

For applications which don't want to take on the polling challenge a new “threaded” NAPI processing option was added (after years of poking from many teams).

Unlike normal NAPI, which relies on the built-in softirq processing, threaded NAPI queues have their own threads which always do the processing. Conceptually it's quite similar to the ksoftirqd thread, but:

  • it never does processing “in-line” right after hardware IRQ, it always wakes up the thread
  • it only processes NAPI, not work from other subsystems
  • there is a thread per NAPI (NIC queue pair), rather than thread per core

The main advantage of threaded NAPI is that the network processing load is visible to the CPU scheduler, allowing it to make better choices. In tests performed by Google, NAPI threads were explicitly pinned to cores, but the application threads were not.
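Assuming the interface as it was merged, threaded mode is a per-device runtime switch: in-kernel it is dev_set_threaded(), and userspace can flip the same knob through /sys/class/net/<dev>/threaded. A minimal sketch:

/* Sketch: a driver opting all of its NAPI instances into threaded mode;
 * dev_set_threaded() spawns one kernel thread per registered NAPI. */
static int mydrv_enable_threaded(struct net_device *netdev)
{
	return dev_set_threaded(netdev, true);
}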

TAPI (work in progress)

The main disadvantage of threaded NAPI is that, according to my tests, it in fact requires explicit CPU allocation unless the system is relatively idle; otherwise NAPI threads suffer latencies similar to ksoftirqd latencies.

The idea behind “TAPI” is to automatically pack and rebalance multiple instances of NAPI to each thread. The hope is that each thread reaches high enough CPU consumption to get a CPU core all to itself from the scheduler. Rather than having 3 threaded NAPI workers at 30% CPU each, TAPI would have one at 90% which services its 3 instances in a round robin fashion. The automatic packing and balancing should therefore remove the need to manually allocate CPU cores to networking. This mode of operation is inspired by Linux workqueues, but with higher locality and latency guarantees.

Unfortunately, due to upstream emergencies after initial promising results the TAPI work has been on hold for the last 2 months.

 
Read more...