people.kernel.org

from Benson Leung

tl;dr: There are 6, and it's unfortunately very confusing to the end user.

Classic USB, from the 1.1 and 2.0 generations through 3.0, used USB-A and USB-B connectors and had a really nice property: cables were directional, and plugs and receptacles were physically distinct for different capability levels. A USB 3.0-capable USB-B plug was physically larger than a 2.0 plug and would not fit into a USB 2.0-only receptacle. For the end user, this meant that as long as a cable physically connected to both the host and the device, the system would function properly, as there is only ever one kind of cable that goes from an A plug to a particular flavor of B plug.

Does the same hold for USB-C™?

Sadly, the answer is no. Cables with a USB-C plug on both ends (C-to-C), hereafter referred to as “USB-C cables”, come in several varieties. Here they are, current as of the USB Type-C™ Specification 1.4 (June 2019):

  1. USB 2.0 rated at 3A
  2. USB 2.0 rated at 5A
  3. USB 3.2 Gen 1 (5 Gbps) rated at 3A
  4. USB 3.2 Gen 1 (5 Gbps) rated at 5A
  5. USB 3.2 Gen 2 (10 Gbps) rated at 3A
  6. USB 3.2 Gen 2 (10 Gbps) rated at 5A

We have a 2 x 3 matrix, with 2 current rating levels (3A or 5A max current) and 3 data speeds (480 Mbps, 5 Gbps, 10 Gbps).

Adding a bit more detail, cables 3-6, in fact, have 10 more wires that connect end-to-end compared to the USB 2.0 ones in order to handle SuperSpeed data rates. Cables 3-6 are called “Full-Featured Type-C Cables” in the spec, and the extra wires are actually required for more than just faster data speeds.

“Full-Featured Type-C Cables” are required for the most common USB-C Alternate Mode used on PCs and many phones today, VESA DisplayPort Alternate Mode. VESA DP Alt mode requires most of the 10 extra wires present in a Full-Featured USB-C cable.

My new Pixelbook, for example, does not have a dedicated physical DP or HDMI port and relies on VESA DP Alt Mode in order to connect to any monitor. Brand new monitors and docking stations may have a USB-C receptacle in order to allow for a DisplayPort, power and USB connection to the laptop.

Suddenly, with a USB-C receptacle on both the host and the device (the monitor), and a range of 6 possible USB-C cables, the user may hit a pitfall: they may try to use the USB 2.0 cable that came with their laptop to connect the display, and the display doesn't work, despite the plugs fitting on both ends, because the 10 extra wires simply aren't there.

Why did it come to this? This problem was created because the USB-C connector was designed to replace all of the previous USB connectors while, at the same time, vastly increasing what the cable can do along the power, data, and display dimensions. The new connector is virtually impossible to plug in improperly (no USB superposition problem, no grabbing the wrong end of the cable), but what was sacrificed for that simplicity is the ability to intuitively know whether the system you've connected together has all of the functionality possible. The USB spec also cannot simply mandate that all USB-C cables have the maximum number of wires all the time, because that would vastly increase BOM cost for cables that are primarily used just for charging.

How can we fix this? Unfortunately, it's a tough problem that has to involve user education. USB-C cables are mandated by USB-IF to bear a particular logo in order to be certified:

[Image: USB-IF certification logos for the different USB-C cable types]

Collectively, we have to teach users that if they need DisplayPort to work, they need to find cables with the two logos on the right.

Technically, there is something that software can do to help with the education problem. Cables 2-6 are required by the USB specification to include an electronic marker (eMarker) chip which contains vital information about the cable. The host should be able to read that eMarker and identify the cable's data and power capabilities. If the host sees that the user is attempting to use DisplayPort Alternate Mode with the wrong cable, then rather than failing silently (i.e., the external display doesn't light up), the OS should tell the user via a notification that they may be using the wrong cable, and educate them about cables with the right logo.
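
As a rough illustration of what a host can already see today, here is a minimal sketch (plain bash, assuming a kernel and Type-C port controller that populate the typec class and the cable identity VDOs) that dumps what /sys/class/typec reports about an attached cable; the exact attributes vary by kernel version and hardware:

# Dump what the kernel's typec class reports about attached USB-C cables.
# Attribute names follow the sysfs-class-typec ABI; availability depends
# on the kernel version and on the port controller having read the eMarker.
for cable in /sys/class/typec/port*-cable; do
    [ -d "$cable" ] || continue
    echo "== $cable =="
    # "type" reports active/passive, "plug_type" describes the connector
    for attr in type plug_type; do
        [ -r "$cable/$attr" ] && printf '%s: %s\n' "$attr" "$(cat "$cable/$attr")"
    done
    # Raw identity VDOs from the eMarker, if they were discovered
    for vdo in "$cable"/identity/*; do
        [ -r "$vdo" ] && printf '%s: %s\n' "$(basename "$vdo")" "$(cat "$vdo")"
    done
done

The current rating and SuperSpeed capability discussed above are encoded in those identity VDOs; decoding them and turning a mismatch into a user-facing notification is the part that still needs OS work.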

This is something that my team is actively working on, and I hope to be able to show the kernel pieces necessary soon.

 

from Mauro Carvalho Chehab

I have a number of machines here running Fedora, and on April 30 I started migrating them to Fedora's latest version: Fedora 30.

Note: this is a re-post of a blog entry I wrote back on May 1st: https://linuxkernel.home.blog/2019/05/01/fedora-30-installation/, with one update at the end made on June 26.

First machine: a multi-monitor desktop

I started the migration on a machine with multiple monitors connected to it. Originally, when Fedora was installed on it, the GPU Kernel driver for its chipset (a DRM KMS – Kernel ModeSetting – driver) was not yet available in Fedora's Kernel. So, the Fedora installer (Anaconda) added the nomodeset option to the Kernel parameters.

As KMS support was just arriving upstream at that time, I built my own Kernel and removed the nomodeset option.

By the time I did the upgrade, all Kernels (except maybe the rescue one) were using KMS.
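
If you want to check this beforehand, a quick way (a sketch assuming a typical Fedora EFI install, with BLS entries under /boot/loader/entries) is to look for nomodeset both on the running Kernel's command line and in the boot loader configuration:

# Is the running Kernel using nomodeset?
grep -o nomodeset /proc/cmdline
# Do any boot entries still carry it (BLS entries and/or the classic grub.cfg)?
grep -rn nomodeset /boot/loader/entries/ /boot/efi/EFI/fedora/grub.cfg 2>/dev/null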

I did the upgrade the same way I did in the past (as described here), i.e. by calling:

dnf system-upgrade --release 30 --allowerasing download
dnf system-upgrade reboot

The system-upgrade had to remove pgp-tools, which currently has a broken dependency, and eclipse. The latter was because, on Fedora 29, I had modular support enabled, which made eclipse depend on a modular set of Java packages.

After booting the new Kernel, I hit the first problem with the upgrade: Fedora now uses BootLoaderSpec (BLS) by default, converting the old grub.cfg file to the new BLS mode. Well, the conversion simply re-added the nomodeset option to all Kernels, which disabled the extra monitors, as X11/Wayland would have to set up the video mode the old way. At that time, I wasn't aware of BLS, so I just ran this command:

cd /boot/efi/EFI/fedora/ && cp grub.cfg.rpmsave grub.cfg

In order to restore the working grub.cfg file.

Later, in order to avoid further problems with Kernel upgrades, I installed grubby-deprecated, as recommended at https://fedoraproject.org/wiki/Changes/BootLoaderSpecByDefault#Upgrade.2Fcompatibility_impact, and manually edited /etc/default/grub to comment out the line with GRUB_ENABLE_BLSCFG. I probably could just have fixed the BLS setup instead, but I opted to be conservative here.
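
For reference, “just fixing the BLS setup” would probably look something like the sketch below; I did not take this path, so treat it as an untested assumption rather than a recipe:

# Drop nomodeset from every installed Kernel's boot entry, keeping BLS enabled
grubby --update-kernel=ALL --remove-args="nomodeset"
# Or, with BLS disabled in /etc/default/grub, regenerate the classic grub.cfg
grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg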

After that, I worked to re-install eclipse. For that, I had to disable modular support, as eclipse depends on an ant package version that was not yet available in the Fedora modular repositories at the time I did the upgrade.

In summary, my first upgrade didn't go smoothly.

Second machine: a laptop

On the second machine, I ran the same dnf system-upgrade commands as on the first one. As this laptop had had Fedora 29 installed from scratch just a month earlier, I was expecting better luck.

Guess what…

… it ended up being an even worse upgrade… the machine crashed after boot!

Basically, systemd refuses to mount a rootfs image if it doesn't contain a valid file at /usr/lib/os-release. On Fedora 29, this is a soft link to another file inside /usr/lib/os.release.d. The specific file name depends on whether you installed Fedora Workstation, Fedora Server, …

During the upgrade, the directory /usr/lib/os.release.d got removed, causing the soft link to point to nowhere. Due to that, after boot, systemd crashes the machine with a “brilliant” message saying that it was generating a rdsosreport.txt, full of information that one would need to copy somewhere else in order to analyze. Well, as it didn't mount the rootfs, copying it would be tricky, with no network nor the usual commands found in the /bin and /sbin directories.

So, instead, I just looked at the journal file, which said that the failure was at /lib/systemd/system/initrd-switch-root.service. That basically calls systemctl, asking it to switch the rootfs to /sysroot (which is the root filesystem as listed in /etc/fstab). Well, systemctl checks if it recognizes os-release. If not, instead of mounting it, producing a warning and hoping for the best, it simply crashes the system!

In order to fix it, I had to use vi to manually create a Fedora 30 os-release file. Thankfully, I already had a valid os-release on my first upgraded machine, so I just typed its contents in by hand.
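
For the record, a minimal sketch of such a file looks roughly like the following (the real Fedora 30 os-release carries a few more fields, and from the emergency shell the root filesystem sits under /sysroot, so the path needs adjusting):

# Recreate /usr/lib/os-release with just enough fields to identify the release
cat > /usr/lib/os-release <<'EOF'
NAME=Fedora
VERSION="30 (Workstation Edition)"
ID=fedora
VERSION_ID=30
PRETTY_NAME="Fedora 30 (Workstation Edition)"
EOF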

After that, the system booted smoothly.

Other machines

Knowing that the Fedora 30 upgrade was not trivial, I decided to take one step back, learning from my past mistakes.

So, I decided to write a small “script” with the steps to be done for the upgrade. Instead of running it as a script, you may run it line by line (after the set -e line). Here it is:

#!/bin/bash

# Should be run as root

# If one runs it as a script, this makes it abort on errors
set -e

dnf config-manager --set-disabled fedora-modular
dnf config-manager --set-disabled updates-modular
dnf config-manager --set-disabled updates-testing-modular
dnf distro-sync
dnf upgrade --refresh
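# Replace the /usr/lib/os-release symlink with a plain copy of its target,
# so the upgrade cannot leave it pointing to nowhere (see the laptop story above)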
(cd /usr/lib/ && cp $(readlink -f os-release) /tmp/os-release && rm os-release && cp /tmp/os-release os-release)
dnf system-upgrade --release 30 --allowerasing download
dnf system-upgrade reboot

Please notice that the script removes os-release and replaces it with a copy of the file it linked to. Please check that this went well: if that logic fails, you may end up crashing your machine at the next boot.

Also, please notice that it will disable Fedora modular support. Well, I don't need anything from there, so that works just fine for me.

Post-install steps

Please notice that, after an upgrade, Fedora may re-enable Fedora modular. That happened to me on one machine which had Fedora 26. If you don't want to keep it enabled, you should do:

dnf config-manager --set-disabled fedora-modular
dnf config-manager --set-disabled updates-modular
dnf config-manager --set-disabled updates-testing-modular
dnf distro-sync

Results

I repeated the same procedure, using the above script, on several other machines, one of them being a Fedora Server. On all of them, it went smoothly.
