These are the ramblings of Matthijs Kooijman, concerning the software he hacks on, hobbies he has and occasionally his personal life.
Most content on this site is licensed under the WTFPL, version 2 (details).
Questions? Praise? Blame? Feel free to contact me.
My old blog (pre-2006) is also still available.
See also my Mastodon page.
For a project (building a low-power LoRaWAN gateway to be solar powered) I am looking at simple and low-power linux boards. One board that I came across is the Milk-V Duo, which looks very promising. I have been playing around with it for just a few hours, and I like the board (and its SoC) very much already - for its size, price and open approach to documentation.
The board itself is a small (21x51mm) board in Raspberry Pi Pico form factor. It is very simple - essentially only a SoC and regulators. The SoC is the CV1800B by Sophgo (a vendor I had never heard of until now; it seems they were called CVITEK before). It is based on the RISC-V architecture, which is nice. It contains two RISC-V cores (one at 1GHz and one at 700MHz), as well as a small 8051 core for low-level tasks. The SoC has 64MB of integrated RAM.
The SoC supports the usual things - SPI, I²C, UART. There is also a CSI (camera) connector and some AI acceleration block; it seems the chip is targeted at the AI computer vision market (but I am ignoring that for my use case). The SoC also has an ethernet controller and PHY integrated (but no ethernet connector, so you still need an external magjack to use it). My board has an SD card slot for booting; the specs suggest that there might also be a version with on-board NAND flash instead of SD (the two cannot be combined, since they use the same pins).
There are two other variants - the Duo 256M with more RAM (the board is identical except for one extra power supply; it just uses a different SoC with more RAM) and the Duo S (in what looks like Rock Pi S form factor), which adds an ethernet and a USB host port. I have not tested either of these, and they use a different SoC series (SG200x) from the same chip vendor, so things I write might or might not apply to them (but the chips might actually be very similar internally; the CVx-to-SGx change seems to be related to the company merger, not necessarily to technical differences).
The biggest (or at least most distinguishing) selling point, to me, is that both the chip and board manufacturers seem to be going for a very open approach. In particular:
Full datasheets for the SoC are available (the datasheets could be a bit more detailed, but I am under the impression that this is still a bit of a work-in-progress, not that there is a more detailed datasheet under NDA).
The tables (e.g. pinout tables) are not in the datasheet PDF, but separately distributed as a spreadsheet, which is super convenient.
For the SG200x chips, the datasheet is created using reStructuredText (a text format a bit like markdown, but more expressive), and the rst source files are available on github under a BSD license. This is really awesome: it makes contributions to the documentation very easy, and (if they structure things properly when more chips are added) could make it very easy to see which peripherals are the same or different between chips.
Sophgo seems to be actively trying to get their chips supported by mainline linux (maybe by contributing code directly, or at least by supporting the community in doing so), which should make it a lot easier to work with their chips in the future.
Other vendors often just drop a heavily customized and usually older kernel or BSP out there, sometimes updating it to newer versions but not always, and relying on other people to do the work of cleaning up the code and submitting it to linux.
The second core can be used as a microcontroller: Milk-V supports running FreeRTOS on it and provides an Arduino core for it (I have not looked at whether it is any good yet). It seems the first core then remains active running Linux, providing a way to flash the secondary core through the primary core.
All this is based on fairly quick observations, so maybe things are not as open and nice as they seem at first glance, but it looks like something cool is going on here at least.
Other things I like about the board:
There are also some (potential) downsides that might complicate matters:
Only 64MB RAM is very limited. In practice, some RAM is used for peripherals (I think) too; the default buildroot image has something like half of the RAM available to Linux. Other images configure this differently so the full RAM is available to the kernel (leaving 55M for userspace). See this forum topic for more details.
Low memory limits options - people have reported that apt needs around 50M to work, which means it ends up using swap and is super slow.
The official Linux distribution from milk-v is a buildroot-built image, which means all software is built into the image directly, no package manager to install extra software afterwards.
The buildroot files are available, so it should be easy to build your own image with extra software, though I think this does mean compiling everything from source.
There does seem to be a lively community of people that are working on making other distributions work on these boards. In most cases this means building a custom kernel for this board (with milk-v/sophgo patches, often using buildroot) and combining it with existing RISC-V packages or a rootfs from these distributions. Sometimes with instructions or a script, sometimes someone just hand-edited an image to make it work.
Hopefully proper support can be added into the actual distributions as well, though a lot of distributions do not really have the machinery to create bootable images for specific boards (i.e. they only support building images for generic BIOS or EFI booting). One distribution that does have this is Armbian, but that is Debian/apt-based so probably needs more than 64MB RAM.
I have briefly tried the Alpine and Arch Linux images that are available. Alpine is really tiny, but like the official buildroot image it uses musl libc. This is nice and tiny, but not all software plays well with it (and in any case, I think software must be compiled specifically against musl). The main application I needed (the basicstation LoRaWAN forwarder) did not like being compiled against musl (and I did not feel like fixing that, especially since I am doubtful such changes would be merged upstream).
So I am hoping I can use the Arch image, which does use glibc and seems to run basicstation (at least it starts; I have not had the time to really set it up yet). Or maybe a Debian/Ubuntu/Armbian image after all - I have also ordered the 256M version (which was not in stock initially).
For an overview of various images created by the community, see this excellent overview page.
It is not entirely clear to me what bootloader is used and how the devicetree is managed. On most single-board linux devices I know, there is u-boot with a boot script, which can be configured to load different devicetrees and overlays to allow configuring the hardware (e.g. remapping pins as SPI or I²C pins). On the buildroot image for the Duo, I could find no evidence of any of this in /boot, but I did see u-boot being mentioned in some places, so maybe it is just configured differently.
Even though the documentation is very open, some of it is a bit hard to find and spread around. Here are some places I found:
The USB data pins are available externally, but only on two pads that need pogo pins or something similar to connect to them. It would have been more convenient if these had a proper pin header.
Some of the hardware setup is done with shell scripts that run on startup (for example the USB networking), some of which actually do raw register writes. This is probably something that will be fixed when kernel support improves, but can be fragile until then.
Sales are still quite limited - most of the suppliers linked from the manufacturer pages seem to be China-only, and I have not found the boards at any European webshop yet. I have now ordered from Arace Tech, a Chinese webshop that ships internationally and worked well for me (except for the to-be-expected long shipping times of a couple of weeks).
So, that was my first impression and thoughts. If I manage to get things running and use this board as part of my LoRaWAN gateway design, I'll probably post a followup with some more experiences. If you have used this board and have things to share, I'm happy to hear them!
On my server, I use LVM for managing partitions. I have one big "data" partition that is stored on an HDD, but for a bit more speed, I have an LVM cache volume linked to it, so commonly used data is cached on an SSD for faster read access.
Today, I wanted to resize the data volume:
# lvresize -L 300G tika/data
Unable to resize logical volumes of cache type.
Bummer. Googling for the error message showed me some helpful posts here and here that told me you have to remove the cache from the data volume, resize the data volume and then set up the cache again.
For this, they used lvconvert --uncache, which detaches and deletes the cache volume or cache pool completely, so you then have to recreate the entire cache (and thus figure out how you created it in the first place).
Trying to understand my own work from long ago, I looked through documentation and found lvconvert --splitcache in lvmcache(7), which detaches a cache volume or cache pool but does not delete it. This means you can resize and just reattach the cache again, which is a lot less work (and less error-prone).
For an example, here is how the relevant volumes look:
# lvs -a
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data tika Cwi-aoC--- 300.00g [data-cache_cvol] [data_corig] 2.77 13.11 0.00
[data-cache_cvol] tika Cwi-aoC--- 20.00g
[data_corig] tika owi-aoC--- 300.00g
Here, data is a "cache" type LV that ties together the big data_corig LV that contains the bulk data and the small data-cache_cvol that contains the cached data.
After detaching the cache with --splitcache, this changes to:
# lvs -a
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data tika -wi-ao---- 300.00g
data-cache tika -wi------- 20.00g
I think the previous data cache LV was removed, data_corig was renamed to data and data-cache_cvol was renamed to data-cache again.
Armed with this knowledge, here's how the full resize works:
lvconvert --splitcache tika/data
lvresize -L 300G tika/data @hdd
lvconvert --type cache --cachevol tika/data-cache tika/data --cachemode writethrough
The last command might need some additional parameters depending on how you set up the cache in the first place. You can view the current cache parameters with e.g. lvs -a -o +cache_mode,cache_settings,cache_policy.
Note that all of this assumes using a cache volume and not a cache pool. I was originally using a cache pool setup, but it seems that a cache pool (which splits cache data and cache metadata into different volumes) is mostly useful if you want to split data and metadata over different PVs, which is not at all useful for me. So I switched to the cache volume approach, which needs fewer commands and volumes to set up.
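For reference, setting up such a cache volume from scratch takes just two commands (following lvmcache(7); the VG/LV names match my setup, but the SSD device path here is just an example):

```shell
# Create the LV that will hold the cache data, on the SSD PV (example path)
lvcreate -n data-cache -L 20G tika /dev/nvme0n1p2

# Attach it to the data LV as a cache volume (not a cache pool)
lvconvert --type cache --cachevol tika/data-cache tika/data --cachemode writethrough
```

This is essentially the same lvconvert invocation as in the resize procedure above, which is why --splitcache plus reattach is so convenient.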
I killed my cache pool setup with --uncache before I found out about --splitcache, so I did not actually try --splitcache with a cache pool, but I think the procedure is pretty much identical to the one described above, except that you need to replace --cachevol with --cachepool in the last command.
For reference, here's what my volumes looked like when I was still using a cache pool:
# lvs -a
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data tika Cwi-aoC--- 260.00g [data-cache] [data_corig] 99.99 19.39 0.00
[data-cache] tika Cwi---C--- 20.00g 99.99 19.39 0.00
[data-cache_cdata] tika Cwi-ao---- 20.00g
[data-cache_cmeta] tika ewi-ao---- 20.00m
[data_corig] tika owi-aoC--- 260.00g
This is a data volume of type cache, that ties together the big data_corig LV that contains the bulk data and a data-cache LV of type cache-pool, which in turn ties together the data-cache_cdata LV with the actual cache data and data-cache_cmeta with the cache metadata.
Interesting, thanks for your post. --splitcache sounds very neat, but as far as I can tell the main advantage is speed (vs --uncache). When you run lvconvert to restore the existing cache, you are only allowed to proceed if you accept that the entire existing cache contents are wiped.
Yeah, I think the end result is the same, it's just easier to use --splitcache indeed.
A few months ago, I put up an old Atom-powered Supermicro server (SYS-5015A-PHF) again, to serve at De War to collect and display various sensor and energy data about our building.
The server turned out to have an annoying habit: every now and then it would start beeping (one continuous annoying beep) that would continue until the machine was rebooted. It happened sporadically, but kept coming back. When I used this machine before, it was located in a datacenter where nobody would care about a beep more or less (so maybe it had been beeping for years on end before I replaced the server), but now it was in a server cabinet inside our local Fablab, where there are plenty of people to be annoyed by a beeping server...
I eventually traced this back to faulty sensor readings and fixed this by disabling the faulty sensors completely in the server's IPMI unit, which will hopefully prevent the annoying beep. In this post, I'll share my steps, in case anyone needs to do the same.
At first, I noticed that there was an alarm displayed in the IPMI web interface for one of the fans. Of course it makes sense to be notified of a faulty fan, except that the system did not have any fans connected... It did show the fan speed as 0RPM (or -2560RPM, depending on where you looked) as expected, so I suspect the system started up correctly detecting that there was no fan, but then sporadically saw a bit of electrical noise on the fan speed pin, causing it to mark the fan as present and immediately as not running, triggering the alarm. I tried to fix this by shorting the fan speed detection pins to the GND pins to make them more noise-resilient.
However, a couple of weeks later, the server started beeping again. This time I looked a bit more closely, and found that the problem was caused by a too-high temperature. The IPMI system event log (queried using ipmi-sel) showed:
43 | Feb-17-2023 | 09:18:58 | CPU Temp | Temperature | Upper Non-critical - going high ; Sensor Reading = 125.00 C ; Threshold = 85.00 C
44 | Feb-17-2023 | 09:18:58 | CPU Temp | Temperature | Upper Critical - going high ; Sensor Reading = 125.00 C ; Threshold = 90.00 C
45 | Feb-17-2023 | 09:18:58 | CPU Temp | Temperature | Upper Non-recoverable - going high ; Sensor Reading = 125.00 C ; Threshold = 95.00 C
46 | Feb-17-2023 | 16:26:16 | CPU Temp | Temperature | Upper Non-recoverable - going high ; Sensor Reading = 41.00 C ; Threshold = 95.00 C
47 | Feb-17-2023 | 16:26:16 | CPU Temp | Temperature | Upper Critical - going high ; Sensor Reading = 41.00 C ; Threshold = 90.00 C
48 | Feb-17-2023 | 16:26:16 | CPU Temp | Temperature | Upper Non-critical - going high ; Sensor Reading = 41.00 C ; Threshold = 85.00 C
This is a bit opaque, but the events at 9:18 show the temperature was read as 125°C - clearly indicating a faulty sensor. These are (I presume) the "asserted" events for each of the thresholds that this sensor has. Then at 16:26, the server was rebooted and the sensor read 41°C again (which I believe is still higher than realistic), and each of the thresholds emits a "deasserted" event.
Looking back, I noticed that the log showed events for both fans and both temperature sensors, so it seemed all of these sensors were really wonky. I could also see the incorrect temperatures clearly in the sensor data I had been collecting from the server (using telegraf, collected using lm-sensors from within the linux system itself, but clearly reading from the same sensor as IPMI):
Note that the graph above shows three sensors, while IPMI only reads two, so I am not sure what the third one is. The alarm from the IPMI log shows up clearly as a sudden jump of the temp2 purple line (jumping back down when the server was rebooted). But also note an unexplained second jump down a few hours later, and note that the next day temp1 dives down to -53°C for some reason, which also matches what IPMI reads:
$ sudo ipmitool sensor
System Temp | -53.000 | degrees C | nr | -9.000 | -7.000 | -5.000 | 75.000 | 77.000 | 79.000
CPU Temp | 27.000 | degrees C | ok | -11.000 | -8.000 | -5.000 | 85.000 | 90.000 | 95.000
CPU FAN | -2560.000 | RPM | nr | 400.000 | 585.000 | 770.000 | 29260.000 | 29815.000 | 30370.000
SYS FAN | -2560.000 | RPM | nr | 400.000 | 585.000 | 770.000 | 29260.000 | 29815.000 | 30370.000
CPU Vcore | 1.160 | Volts | ok | 0.640 | 0.664 | 0.688 | 1.344 | 1.408 | 1.472
Vichcore | 1.040 | Volts | ok | 0.808 | 0.824 | 0.840 | 1.160 | 1.176 | 1.192
+3.3VCC | 3.280 | Volts | ok | 2.816 | 2.880 | 2.944 | 3.584 | 3.648 | 3.712
VDIMM | 1.824 | Volts | ok | 1.448 | 1.480 | 1.512 | 1.960 | 1.992 | 2.024
+5 V | 5.056 | Volts | ok | 4.096 | 4.320 | 4.576 | 5.344 | 5.600 | 5.632
+12 V | 11.904 | Volts | ok | 10.368 | 10.496 | 10.752 | 12.928 | 13.056 | 13.312
+3.3VSB | 3.296 | Volts | ok | 2.816 | 2.880 | 2.944 | 3.584 | 3.648 | 3.712
VBAT | 2.912 | Volts | ok | 2.560 | 2.624 | 2.688 | 3.328 | 3.392 | 3.456
Chassis Intru | 0x0 | discrete | 0x0000| na | na | na | na | na | na
PS Status | 0x1 | discrete | 0x01ff| na | na | na | na | na | na
Note that the voltage sensors show readings that do make sense, and looking at the history, they show no sudden jumps, so those are probably still reliable (even though they are read from the same sensor chip, according to lm-sensors).
It seems you can disable generation of events when a threshold is crossed, and even disable reading the sensor entirely. Hopefully this will also prevent the BMC from beeping on weird sensor values.
To disable things, I used ipmi-sensors-config (from the freeipmi-tools Debian package):
First I queried the current sensor configuration:
sudo ipmi-sensors-config --checkout > ipmi-sensors-config.txt
Then I edited the generated file, setting Enable_All_Event_Messages and Enable_Scanning_On_This_Sensor to No. I also had to set the hysteresis values for the fans to None, since the -2375 value generated by --checkout was refused when writing back the values in the next step.
Then I committed the changes with:
sudo ipmi-sensors-config --commit --filename ipmi-sensors-config.txt
I suspect that modifying Enable_All_Event_Messages allows the sensor to be read, but prevents the thresholds from being checked and generating events (especially since this setting seems to just clear the corresponding setting for each available threshold, so it seems you can also use this to disable some of the thresholds and keep others). However, it is not entirely clear to me whether this would just prevent these events from showing up in the event log, or whether it would actually prevent the system from beeping (when does the system beep? On any event? Specific events? This is not clear to me).
For good measure, I decided to also modify Enable_Scanning_On_This_Sensor, which I believe prevents the sensor from being read at all by the BMC, so that should really prevent alarms. This also causes ipmitool sensor to display the value and status as na for these sensors. The sensors command (from the lm-sensors package) can still read the sensor without issues, though the values are not very useful anyway...
Note that apparently these settings are not always persistent across reboots and power cycles, so make sure you test that. For this particular server, the settings survive a reboot; I have not tested a hard power cycle yet.
I cannot yet tell for sure if this has fixed the problem (I only applied the changes today), but I'm pretty confident that this will indeed keep the people in our Fablab happy (and if not - I'll just desolder the beeper from the motherboard, but let's hope I will not have to resort to such extreme measures...).
When sorting out some stuff today I came across an "Ecobutton". When you attach it through USB to your computer and press the button, your computer goes to sleep (at least that is the intention).
The idea is that it makes things more sustainable because you can more easily put your computer to sleep when you walk away from it. As this tweakers poster (Dutch) eloquently argues, having more plastic and electronics produced in China, shipped to Europe and sold here for €18 or so probably does not have a net positive effect on the environment or your wallet, but well, given this button found its way to me, I might as well see if I can make it do something useful.
I had previously started a project to make a "Next" button for spotify that you could carry around and would (wirelessly - with an ESP8266 inside) skip to the next song using the Spotify API whenever you pressed it. I had a basic prototype working, but then the project got stalled on figuring out an enclosure and finding sufficiently low-power addressable RGB LEDs (documentation about this is lacking, so I resorted to testing two dozen different types of LEDs and creating a website to collect specs and test results for addressable LEDs, which then ended up in the big collection of other Yak-shaving projects waiting for this magical moment where I suddenly have a lot of free time).
In any case, it seemed interesting to see if this Ecobutton could be used as a poor man's spotify next button. Not super useful, but at least now I can keep the button around knowing I can actually use it for something in the future. I also produced some useful (and not readily available) documentation about remapping keys with hwdb in the process, so it was at least not a complete waste of time... Anyway, on to the technical details...
I expected this to be a USB device that advertises itself as a keyboard and then, whenever you press the button, sends the "sleep" key to put the computer to sleep.
Turns out the first part was correct, but instead of sending a sleep keypress, it sends Meta-R, ecobutton (it types each of these letters after each other), enter. It seems you need to install a tool on your pc that is then executed through the windows-key+R shortcut. Pragmatic, but quite ugly, especially given a sleep key exists... But maybe Windows does not implement that key (or maybe this tool also changes some settings for deeper sleep; at least that's what was suggested in the tweakers post linked above).
I considered replacing the firmware to make the device send whatever keystroke I wanted, but writing firmware from scratch for existing hardware is not the easiest project (even for a simple device like this). After opening the device, I decided this was not a feasible route.
The (I presume) microcontroller in there is hidden under a blob, so there is no indication as to its type, pin connections, or programming ports (if it actually has flash and is not ROM-only).
I did notice some solder jumpers that I figured could influence behavior (maybe the same PCB is used for differently branded buttons), but shorting S1 or S5 did not seem to change behavior (maybe I should have unsoldered S3, but well...).
The next alternative is to remap keys on the computer. Running Linux, this should certainly be possible in a couple of dozen ways. This does need to be device-specific remapping, so my normal keyboard still works as normal, but if I can unmap all keys except for the meta key that it presses first, and map that to something like KEY_NEXTSONG (which is handled by spotify and/or Gnome already), that might work.
I first saw some remapping solutions for X, but those probably will not work - I'm running wayland and I prefer something more low-level. I also found cool remapping daemons (like keyd) that grab events from a keyboard and then synthesise new events on a virtual keyboard, allowing cool things like multiple layers or tapping shift and then another key to get uppercase instead of having to hold shift and the key together, but that is way more complicated than what I need here.
Then I found that udev has a thing called "hwdb", which allows putting files in /etc/udev/hwdb.d that match specific input devices and can specify arbitrary scancode-to-keycode mappings for them. Exactly what I needed - works out of the box, just drop a file into /etc.
The challenge turned out to be figuring out how to match against my specific keyboard identifier, what scancodes and keycodes to use, and in general how the ecosystem around this works. (In short: when plugging in a device, udev rules consult the hwdb for extra device properties, which a udev builtin keyboard command then uses to apply key remappings in the kernel using an ioctl on the /dev/input/eventxx device.) In case you're wondering - this means you do not need to use hwdb; you can also apply this from udev rules directly, but then you need a bit more care.
I've written down everything I figured out about hwdb in a post on Unix Stack Exchange, so I'll not repeat everything here.
Using what I had learnt, getting the button to play nice was a matter of creating /etc/udev/hwdb.d/99-ecobutton.hwdb containing:
evdev:input:b????v3412p7856e*
 KEYBOARD_KEY_700e3=nextsong # LEFTMETA
 KEYBOARD_KEY_70015=reserved # R
 KEYBOARD_KEY_70008=reserved # E
 KEYBOARD_KEY_70006=reserved # C
 KEYBOARD_KEY_70012=reserved # O
 KEYBOARD_KEY_70005=reserved # B
 KEYBOARD_KEY_70018=reserved # U
 KEYBOARD_KEY_70017=reserved # T
 KEYBOARD_KEY_70011=reserved # N
 KEYBOARD_KEY_70028=reserved # ENTER
This matches the keyboard based on its usb vendor and product id (3412:7856) and then disables all keys that are used by the button, except for the first, and remaps that to KEY_NEXTSONG.
To apply this new file, run sudo systemd-hwdb update to recompile the database, and then replug the button to apply it (you can also re-apply with udevadm trigger, but it seems Gnome does not pick up the change then; I suspect because the gnome-settings media-keys module checks only once whether a keyboard supports media keys at all and ignores it otherwise).
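To double-check that udev actually attached the remapping, the device properties can be inspected after replugging (a sketch; the by-id path matches my button, adjust to yours):

```shell
# The hwdb properties should show up on the input device after a replug:
udevadm info /dev/input/by-id/usb-3412_7856-event-if00 | grep KEYBOARD_KEY
```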
With that done, it now produces KEY_NEXTSONG events as expected:
$ sudo evtest --grab /dev/input/by-id/usb-3412_7856-event-if00
Input driver version is 1.0.1
Input device ID: bus 0x3 vendor 0x3412 product 0x7856 version 0x100
Input device name: "HID 3412:7856"
[ ... Snip more output ...]
Event: time 1675514344.256255, type 4 (EV_MSC), code 4 (MSC_SCAN), value 700e3
Event: time 1675514344.256255, type 1 (EV_KEY), code 163 (KEY_NEXTSONG), value 1
Event: time 1675514344.256255, -------------- SYN_REPORT ------------
Event: time 1675514344.264251, type 4 (EV_MSC), code 4 (MSC_SCAN), value 700e3
Event: time 1675514344.264251, type 1 (EV_KEY), code 163 (KEY_NEXTSONG), value 0
Event: time 1675514344.264251, -------------- SYN_REPORT ------------
More importantly, I can now skip annoying songs (or duplicate songs - spotify really messes this up) with a quick button press!
Maybe you missed the fact that it is also possible to keep the button pressed in order to open a website. This could be used for another function by mapping a key that is only included in that link like /.
To map out the other url keys, my file now looks like this:
evdev:input:b????v3412p7856e*
 KEYBOARD_KEY_700e3=nextsong # LEFTMETA
 KEYBOARD_KEY_70015=reserved # R
 KEYBOARD_KEY_70008=reserved # E
 KEYBOARD_KEY_70006=reserved # C
 KEYBOARD_KEY_70012=reserved # O
 KEYBOARD_KEY_70005=reserved # B
 KEYBOARD_KEY_70018=reserved # U
 KEYBOARD_KEY_70017=reserved # T
 KEYBOARD_KEY_70011=reserved # N
 KEYBOARD_KEY_7000b=reserved # H
 KEYBOARD_KEY_70013=reserved # P
 KEYBOARD_KEY_700e1=reserved # LEFTSHIFT
 KEYBOARD_KEY_70054=reserved # /
 KEYBOARD_KEY_7001a=reserved # W
 KEYBOARD_KEY_70037=reserved # .
 KEYBOARD_KEY_7002d=reserved # -
 KEYBOARD_KEY_70010=reserved # M
 KEYBOARD_KEY_70016=reserved # S
 KEYBOARD_KEY_70033=reserved # ;
 KEYBOARD_KEY_70028=reserved # ENTER
When the button is pressed for 3 seconds, the same happens as when pressed shortly.
> Maybe you missed the fact that it is also possible to keep the button pressed in order to open a website. This could be used for another function by mapping a key that is only included in that link like /.
Ah, I indeed missed that. Thanks for pointing that out and the updated file :-)
After I recently ordered a new laptop, I have been looking for a USB-C-connected dock to be used with my new laptop. This turned out to be quite complex, given there are really a lot of different bits of technology that can be involved, with various (continuously changing, yes I'm looking at you, USB!) marketing names to further confuse things.
As I'm prone to do, rather than just picking something and seeing if it works, I dug in to figure out how things really work and interact. I learned a ton of stuff in a short time, so I really needed to write this stuff down, both for my own sanity and future self, as well as for others to benefit.
I originally posted my notes on the Framework community forum, but it seemed more appropriate to publish them on my own blog eventually (also because there's no 32,000 character limit here :-p).
There are still quite a few assumptions or unknowns below, so if you have any confirmations, corrections or additions, please let me know in a reply (either here, or in the Framework community forum topic).
Parts of this post are based on info and suggestions provided by others on the Framework community forum, so many thanks to them!
First off, I can recommend this article with a bit of overview and history of the involved USB and Thunderbolt technologies.
Then, if you're looking for a dock, like I was, the Framework community forum has a good list of docks (focused on Framework operability), and Dan S. Charlton published an overview of Thunderbolt 4 docks and an overview of USB-C DP-altmode docks (both posts with important specs summarized, and occasional updates too).
Then, into the details...
Because in DP alt mode all four lines can be used unidirectionally (unlike USB, which is always full-duplex), this means the effective bandwidth for driving a display can be twice as much in alt mode than when tunneling DP over USB4 or using DisplayLink over USB3.
In practice, though, DP-altmode on devices usually supports only DP1.4 (HBR3 = 8.1Gbps-per-line) for 4x8.1 = 32.4Gbps of total unidirectional bandwidth, which is less than TB3/4 with its 20Gbps-per-line for 2x20 = 40Gbps of full-duplex bandwidth. This will change once devices start supporting DP-altmode 2.0 (UHBR20 = 20Gbps-per-line) for 4x20 = 80Gbps of unidirectional bandwidth.
You can get information about connected Thunderbolt/USB4 devices with boltctl (e.g. boltctl list -a). For more low-level debugging, enable kernel debug output with echo "module thunderbolt +p" > /sys/kernel/debug/dynamic_debug/control, plug in your dock and check dmesg (which will call the USB4 controller/router in the dock a "USB4 switch" and its interfaces "Ports").
For DP:
Bandwidth allocation for DP links happens based on the maximum bandwidth for the negotiated link rate (e.g. HBR3) and seems to happen on a first-come first-served basis. For example, if the first display negotiates 4xHBR3, this takes up 25.92Gbs (after encoding) of bandwidth, leaving only 2xHBR3 or 4xHBR1 for a second display connected.
This means that the order of connecting displays can be relevant to the supported resolutions on each (on multi-output hubs with displays already connected, it seems the hub typically initializes displays in a fixed order).
If the actual bandwidth for the resolution used is less than the allocated bandwidth, the extra bandwidth is not lost, but can still be used for other traffic (like bulk USB traffic, which does not need allocated / reserved bandwidth). [source]
[source] and [source] and [source]
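To put concrete numbers on this allocation behavior, here is a small sketch based on the figures above (assuming 8b/10b encoding for DP rates up to HBR3 and a 40Gbps TB3/TB4 link):

```python
# Effective (post-encoding) DP link bandwidth: lanes x rate x 8/10,
# since DP up to HBR3 uses 8b/10b encoding.
def dp_link_gbps(lanes, gbps_per_lane):
    return round(lanes * gbps_per_lane * 8 / 10, 2)

HBR1, HBR3 = 2.7, 8.1   # Gbps per lane
TOTAL = 40.0            # Gbps on a TB3/TB4 link

first = dp_link_gbps(4, HBR3)        # first display claims 4xHBR3
remaining = round(TOTAL - first, 2)
assert first == 25.92 and remaining == 14.08
# Another 4xHBR3 link no longer fits, but 2xHBR3 or 4xHBR1 still does:
assert dp_link_gbps(4, HBR3) > remaining
assert dp_link_gbps(2, HBR3) <= remaining   # 12.96
assert dp_link_gbps(4, HBR1) <= remaining   # 8.64
```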
As mentioned, different protocols use different bitrates and different encodings. Here's an overview of these, with the effective transfer rate (additional protocol overhead like packet headers, error correction, etc. still needs to be accounted for, so e.g. the effective transfer rate for a mass-storage device will be lower still). Again, these are single-line bandwidths, so total bandwidth is x4 (unidirectional) for DP and x2 (full-duplex) for TB/USB3.2/USB4.
Note that for TB3, the rounded 10/20Gbps speeds are after encoding, so the raw bitrate is actually slightly higher. Thunderbolt 4 is not listed as (AFAICT) this just uses USB4 transfer modes.
[source]
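For illustration, here is how the per-line effective rates follow from the raw bitrate and the encoding. The encodings listed are my understanding of the respective specs, so treat them as assumptions; note the TB3 raw rate of 20.625Gbps, which is what makes the post-encoding rate come out at exactly 20Gbps:

```python
# (raw Gbps per line, payload bits, total bits) per link type.
links = {
    'USB3.2 Gen1': (5.0,    8,   10),   # 8b/10b
    'USB3.2 Gen2': (10.0,   128, 132),  # 128b/132b
    'TB3':         (20.625, 64,  66),   # 64b/66b
    'USB4 Gen3':   (20.0,   128, 132),  # 128b/132b
    'DP HBR3':     (8.1,    8,   10),   # 8b/10b
}

def effective_gbps(raw, payload, total):
    return raw * payload / total

assert effective_gbps(*links['TB3']) == 20.0          # raw rate is higher
assert effective_gbps(*links['USB3.2 Gen1']) == 4.0   # 20% 8b/10b overhead
for name, spec in links.items():
    print(f'{name}: {effective_gbps(*spec):.2f} Gbps per line')
```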
That's it for now. Kudos if you made it this far! As always, feel free to leave a comment if this has been helpful to you, or if you have any suggestions or questions!
Update 2022-03-29: Added "Understanding tunneling and bandwidth limitations" section and some other small updates.
Update 2022-06-10: Mention `boltctl` as a tool for getting device info.
This has dramatically improved my understanding of USB, which is even more complicated than I thought. Excellent research!
> This has dramatically improved my understanding of USB, which is even more complicated than I thought. Excellent research!
Good to hear! And then I did not even dive into the details of how USB itself works, with all its descriptors, packet types, transfer modes, and all other complex stuff that is USB (at least 1/2, I haven't looked at USB3 in that much detail yet...).
Hi, there are 4 corrupted links with a slipped ")". Just search for ")]".
Ah, I copied the markdown from my original post on Discourse, and it seems my own markdown renderer is slightly different when there are closing parenthesis inside a URL. Urlencoding them fixed it. Thanks!
Thank you for sharing! Which dock (hub?) did you end up buying?
Hey Jens, good question!
I ended up getting the Caldigit TS4, which is probably one of the most expensive ones, but I wanted to get at least TB4 (since USB4 has public specifications, and supports wake-from-sleep, unlike TB3) and this one has a good selection of ports, the host port on the back side (which I think ruled out 70% of the available products or so - I used this great list to make the initial selection).
See also this post for my initial experiences with that dock.
> It is a bit unclear to me how these signals are multiplexed. Apparently TB2 combines all four lines into a single full-duplex channel, suggesting that on TB1 there are two separate channels, but does that mean that one channel carries PCIe and one carries DP on TB1? Or each device connected is assigned to (a part of) either channel?
No. Each Thunderbolt controller contains a switch. Protocol adapters for PCIe and DP attach to the switch and translate between that protocol and the low-level Thunderbolt transport protocol. The protocols are tunneled transparently over the switching fabric and packets for different protocols may be transmitted on the same lane in parallel.
> TB1/2 are backwards compatible with DP, so you can plug in a non-TB DP display into a TB1/2 port. I suspect these ports start out as DP ports and there is some negotiation over the aux channel to switch into TB mode, but I could not find any documentation about this.
When Apple integrated the very first Copper-based Thunderbolt controller, Light Ridge, into their products, they put a mux and redrivers on the mainboard which switches the plug's signals between the Thunderbolt controller and the DP-port on the GPU. They also added a custom microcontroller which snoops on the signal lines, autodetects whether the signals are DP or Thunderbolt, and drives the mux accordingly.
The next-generation chip Cactus Ridge (still Thunderbolt 1) integrated this functionality into the Thunderbolt controller. So the signals go from the DP plug to the Thunderbolt controller, and if it detects DP signals, it routes them through to the GPU via its DP-in pins.
(Source: I've worked on Linux Thunderbolt bringup on Macs and studied their schematics extensively.)
Amazing write up. I spent so long trying to patch together an understanding of TB docks for displays and this has helped TREMENDOUSLY. Thanks again
I just "Waybaked*" your page. It's such a gem. :) Thank you for your writeup Matthijs!
*(archive.org)
Interesting read. After reading it, though, I am still none the wiser on a situation I experienced recently. I have a Moto G6 phone. I bought a USB C dock for it, a "Kapok 11-in-1-USB-C Laptop Dockingstation Dual HDMI Hub" on Amazon. It would charge, but the video out didn't work, and the Ethernet didn't work either. It did work for my Steam Deck, though. How come? I understand some USB C hubs work for android phones, some don't, and I don't know how that works. How does one find a dock that will work with Android?
@Lukas, you replied:
> No. Each Thunderbolt controller contains a switch. Protocol adapters for PCIe and DP attach to the switch and translate between that protocol and the low-level Thunderbolt transport protocol. The protocols are tunneled transparently over the switching fabric and packets for different protocols may be transmitted on the same lane in parallel.
I understand this, but the questions in my post that you are responding to were prompted because the wikipedia page on TB2 ( en.wikipedia.org/wiki/Thunderbolt(interface)#Thunderbolt2 ) says:
> The data-rate of 20 Gbit/s is made possible by joining the two existing 10 Gbit/s-channels, which does not change the maximum bandwidth, but makes using it more flexible.
Which is confusing to me. I could imagine this refers to the TB controller having one adapter that supports 20Gbps DP or PCIe instead of 2 adapters that support 10Gbps each, but TB2 only supports DP1.2 (so only 4.32Gbps) and PCIe bandwidths seem to be multiples of 8Gbps/GTps (but I do see PCIe 2.0 has 5GTps, so maybe TB1 supported two PCIe 2.0 x2 ports for 10Gbps each and TB2 (also?) supports one PCIe 2.0 x4 port for 20Gbps?).
@Someone, thanks for clarifying how DP vs TB1/2 negotiation worked, I'll edit my post accordingly.
@MikeNap, I feel your pain, great to hear my post has been helpful :-D
@Omar, cool!
@cheater, my first thought about video is that your dock might be of the DisplayLink (or some other video-over-USB) kind and your Android phone might not have the needed support/drivers. A quick google suggests this might need an app from the Play Store.
Or maybe the reverse is true and the dock only supports DP alt mode (over USB C) to get the display signal and your Android does not support this (but given it is a dual HDMI dock without thunderbolt, this seems unlikely, since the only way to get multiple display outputs over DP alt mode is to use MST and I think not many docks support that - though if the dock specs state that dual display output is not supported on Mac, that might be a hint you have an MST dock after all, since MacOS does not support MST).
As for ethernet, that might also be a driver issue?
Hello,
Thanks for this instructive post :)
I spotted some typos if it may help:
"can be combied" -> "can be combined"
"USB3.2 allows use of a all" -> "USB3.2 allows use of all"
"but a also" -> " but also"
"up to 20Gps-per-line" -> "up to 20Gbps-per-line"
"an USB3 10Gpbs" -> "an USB3 10Gbps"
"leaving ony" -> "leaving only"
"reverse-engineerd" -> "reverse-engineered"
"a single hub or dock with and a thunderbolt or USB4" -> "a single hub or dock with a thunderbolt or USB4"
"certifified" -> "certified"
Best regards, Laurent Lyaudet
@Laurent, thanks for your comments, I've fixed all of the typos :-D Especially "certifified" is nice, that should have been the correct word, sounds much nicer ;-p
Hello :)
Google does understand (incomplete URL because of SPAM protection):
/search?q=certifified+meme&source=lnms&tbm=isch&sa=X
But there is not yet an exact match for "certifified meme" ;P
Thanks you very much for your article!
I am interested in buying the Caldigit TS4. But when I kindly emailed caldigit to find out ways to work around these following issues:
By taking care to inform them of the expectations regarding the docks of the users of the laptop framework. And giving them your blog link and others.
I got this response from caldigit: " Currently our products are not supported in a Linux environment and so we do not run any tests or able to offer any troubleshooting for Linux ."
Finally, Matthijs could you tell us, please, why you didn't choose the lenovo dock? It is MST compatible, and supports vlan ID at rj45 level and seems officially supported under linux?
Thanks in advance !
@Al, thanks for your response, and showing us Caldigits responses to your question.
As for why I chose the Caldigit instead of the Lenovo - I think it was mostly a slightly better selection of ports (and a card reader). As for Linux support - usually nobody officially supports linux, but I've read some successful reports of people using the TS4 under Linux (and I have been using it without problems under linux as well - everything just works).
As for MST and VLANs - I do not think I really realized that these were not supported, but I won't really miss them anyway (though VLAN support might have come in handy - good to know it is not supported. Is this a hardware limitation or driver? If the latter, is it really not supported under Linux, then? It's just an intel network card IIRC?).
As for black-box firmware - that's something I do care about, but I do not expect there will be any TB dock available that has open firmware, or does Lenovo publish sources?
Thank you for your reply ! For the vlan, it seems to be tested, therefore! because the site indicated in 2019: www.caldigit.com/do-caldigit-docks-support-vlan-feature
Indeed, I don't know any free firmware for lenovo, I was trying to find out their feeling to libre software things... Because it's always annoying to be secure everywhere except at one place in the chain: 4G stack, intel wifi firmware , and dock..
Hi Matthijs;
Thanks for the amazing writeup, very useful. I just got my framework laptop and I bought the TS4 largely because of your recommendation (no pressure! :D ) but also because all the work you put in, it seemed the 'most documented'.
I have a question regarding usage of USB ports. I've got my framework laptop plugged into it with the thunderbolt cable. I have a 10-port StarTech USB 3 hub and I plug most everything into that, then that hub is plugged into one of the ports on the TS4 (the lower unpowered one, specifically -- The 10 port hub has its own power).
The 10 port hub is about half full. Keyboard (USB3), mouse (USB 1? 2? no idea, its reasonably new but cheap), a scanner which is usually turned off (USB1), a blueray player that takes two ports (USB2), and a printer that is usually turned off (USB2).
Most of the time, everything seems to work okay, but now and then I get a weirdness. Like, I left my computer sitting for awhile (30 min ~ 1 hour) and came back to it. The ethernet had stopped working out of the blue -- my laptop didn't sleep/suspend or anything, I have basically all the power saving stuff off because in my experience that just causes weird chaos in Linux.
I unplugged/replugged the TS4, and ethernet came back, but now my mouse stopped working (everything else seemed okay). I unplugged the mouse from my 10 port hub and put it in the second TS4 unpowered port and now it's back.
Is there a power saving thing on the TS4 that I can turn off? Or is this something else? Or have you not encountered this one in your usage? So far, nothing has happened like this while I'm at the computer actively using it.
If you have any ideas, I'd really appreciate hearing them (though I understand totally if all this is just weird and you have no idea ...!)
Thanks again!!
@Steven, thanks for your comment!
As for your issues with ethernet or other USB devices that stop working after some time - I have no such experience - so far the dock has been rather flawless for me. I have seen some issues with USB devices, but I think those were only triggered by suspend/resume and/or plugging/unplugging the TS4, and those occurred very rarely (and I think haven't happened in the last few months at all).
For reference, I'm running Ubuntu 22.10 on my laptop. I also have a number of USB hubs behind the TS4 (one in my keyboard, one in my monitor and two more 7-port self-powered multi-TT hubs), so that is similar to your setup. I have not modified any (power save or other) settings on the TS4 - I'm not even sure if there are settings to tweak other than the Linux powersaving settings (which I haven't specifically looked at or changed).
So I don't think I have anything for you, except wishing you good luck with your issues. If you ever figure them out, I'd be curious to hear what you've found :-)
For a while, I've been considering replacing Grubby, my trusty workhorse laptop, a Thinkpad X201 that I've been using for the past 11 years. These thinkpads are known to last, and indeed mine still worked nicely, but over the years lost Bluetooth functionality, speaker output, one of its USB ports (I literally lost part of the connector), some small bits of the casing (dropped something heavy on it), the fan sometimes made a grinding noise, and it was getting a little slow at times (but still fine for work). I had been postponing getting a replacement, though, since having to figure out what to get, comparing models, reading reviews is always a hassle (especially for me...).
Then, when I first saw the Framework laptop last year, I was immediately sold. It's a laptop that aims to be modular, in the sense that it can be easily repaired and upgraded. To be honest, this did not seem all that special to me at first, but apparently in the 11 years since I last bought a laptop, manufacturers have been using more glue rather than screws, and more solder rather than sockets, which is a trend that Framework hopes to turn.
In addition to the modularity, I like the fact they make repairability and upgradability an explicit goal, in an attempt to make the electronics ecosystem more sustainable (they remind me of Fairphone in that sense). On top of that, it seems that this is also a really well made laptop, with a lot of attention to detail, explicit support for Linux, open-source where possible (e.g. code for the embedded controller is open), flexible expansion ports using replaceable modules, encouraging third parties to build and market their own expansion cards and addons (with open-source reference designs available), a mainboard that can be used standalone too (makes for a nice SBC after a mainboard upgrade), a decent keyboard, etc.
The only things that I'm less enthusiastic about are the reflective screen (I had that on my previous laptop and I remember liking the switch to a matte screen, but I guess I'll get used to that), having just four expansion ports (the only fixed port is an audio jack, everything else - USB, displays, card reader - has to go through expansion modules, so we'll see if I can get by with four ports) and the lack of an ethernet port (apparently there is an ethernet expansion module in the works, but I'll probably have to get a USB-to-ethernet module in the meanwhile).
Unfortunately, when I found the Framework laptop a few months ago, they were not actually being sold yet, though they expected to open up pre-orders in December. I really hoped Grubby would last long enough so I could get a Framework laptop. Then pre-orders opened only for US and Canada, with shipping to the EU announced for Q1 this year. Then they opened up orders for Germany, France and the UK, and I still had to wait...
So when they opened up pre-orders in the Netherlands last month, I immediately placed my order. They are using a batched shipping system and my batch is supposed to ship "in March" (part of the batch has already been shipped), so I'm hoping to get the new laptop somewhere in the coming weeks.
I suspect that Grubby took notice, because last friday, with a small sputter, he powered off unexpectedly and has refused to power back on. I've tried some CPR, but no luck so far, so I'm afraid it's the end for Grubby. I'm happy that I already got my Framework order in, since now I just borrowed another laptop as a temporary solution rather than having to panic and buy something else instead.
So, I'm eager for my Framework laptop to be delivered. Now, I just need to pick a new name, and figure out which Thunderbolt dock I want... (I had an old-skool docking station for my Thinkpad, which worked great, but with USB-C and Thunderbolt's single cable for power, display, usb and ethernet, there is now a lot more choice in docks, but more on that in my next post...).
Hey Matthijs! I'm currently looking into replacing my work machine and my eye also caught the Framework.. I didn't see any further posts on your Framework, is that a good thing or a bad thing? :-D
Are you happy with it/would you recommend it? One of the concerns I have is dust piling up in the gaps between the components (for instance between the screen and the bezel). Is that something you're experiencing? Looking forward to hearing your thoughts!
W00ps, your comment got stuck in my moderation queue, only approved it just now.
Making another post about my experiences has been on my list for a while, but busy, busy, busy... The TL;DR is that I'm quite happy with the laptop, it works really well. I had an issue with the webcam, but that was replaced under warranty (and a breeze to replace).
I do have a few gripes with it: Limited battery time (especially high drain in suspend mode), the glossy screen and somewhat limited brightness, but AFAIU all of these have improved in the newer revisions of the laptop.
> Are you happy with it/would you recommend it
Yes, definitely. In addition to being happy with the hardware, I'm also happy with the way Framework has been following up with new designs that are compatible with the older generations (allowing upgrades), as well as a lot of openness about their designs (within the limits of their own NDAs) and willingness to collaborate with third parties (e.g. they recently announced a RISC-V motherboard produced by someone else).
> One of the concerns I have is dust piling up in the gaps between the components (for instance between the screen and the bezel). Is that something you're experiencing? Looking forward to hearing your thoughts!
Nope, not at all (it did not create a problem, and I just looked behind the bezel and there is no dust there). It closes quite neatly.
Recently, I've been working with STM32 chips for a few different projects and customers. These chips are quite flexible in their pin assignments, usually most peripherals (i.e. an SPI or UART block) can be mapped onto two or often even more pins. This gives great flexibility (both during board design for single-purpose boards and later for a more general purpose board), but also makes it harder to decide and document the pinout of a design.
ST offers STM32CubeMX, a software tool that helps designing around an STM32 MCU, including deciding on pinouts, and generating relevant code for the system as well. It is probably a powerful tool, but it is a bit heavy to install and AFAICS does not really support general purpose boards (where you would choose between different supported pinouts at runtime or compiletime) well.
So in the past, I've used a trusted tool to support this process: A spreadsheet that lists all pins and all their supported functions, where you can easily annotate each pin with all the data you want and use colors and formatting to mark functions as needed to create some structure in the complexity.
However, generating such a pinout spreadsheet wasn't particularly easy. The tables from the datasheet cannot be easily copy-pasted (and the datasheet has the alternate and additional functions in two separate tables), and the STM32CubeMX software can only seem to export a pinout table with alternate functions, not additional functions. So we previously ended up using the CubeMX-generated table and then adding the additional functions manually, which is annoying and error-prone.
So I dug around in the CubeMX data files a bit, and found that it has an XML file for each STM32 chip that lists all pins with all their functions (both alternate and additional). So I wrote a quick Python script that parses such an XML file and generates a CSV file. The script just needs Python3 and has no additional dependencies.
To run this script, you will need the XML file for the MCU you are interested in from inside the CubeMX installation. Currently, these only seem to be distributed by ST as part of CubeMX (I did find one third-party github repo with the same data, but that hadn't been updated in nearly two years). However, once you generate the pin listing and publish it (e.g. in a spreadsheet), others can of course work with it without needing CubeMX or this script anymore.
For example, you can run this script as follows:
$ ./stm32pinout.py /usr/local/cubemx/db/mcu/STM32F103CBUx.xml
name,pin,type
VBAT,1,Power
PC13-TAMPER-RTC,2,I/O,GPIO,EXTI,EVENTOUT,RTC_OUT,RTC_TAMPER
PC14-OSC32_IN,3,I/O,GPIO,EXTI,EVENTOUT,RCC_OSC32_IN
PC15-OSC32_OUT,4,I/O,GPIO,EXTI,ADC1_EXTI15,ADC2_EXTI15,EVENTOUT,RCC_OSC32_OUT
PD0-OSC_IN,5,I/O,GPIO,EXTI,RCC_OSC_IN
(... more output truncated ...)
The script is not perfect yet (it does not tell you which functions correspond to which AF numbers and the ordering of functions could be improved, see TODO comments in the code), but it gets the basic job done well.
You can find the script in my "scripts" repository on github.
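For anyone curious about the approach, the core of the parsing can be sketched as follows. Note that the element names, attributes, namespace and the simplified example XML below are assumptions based on my reading of the CubeMX data files; the real script on github handles more details:

```python
import xml.etree.ElementTree as ET

# XML namespace used by the CubeMX per-MCU files (an assumption here).
NS = {'mcu': 'http://mcd.rou.st.com/modules.php?name=mcu'}

def pin_rows(xml_text):
    """Yield one CSV-style row (name, position, type, functions...) per pin."""
    root = ET.fromstring(xml_text)
    for pin in root.findall('mcu:Pin', NS):
        signals = [s.get('Name') for s in pin.findall('mcu:Signal', NS)]
        yield [pin.get('Name'), pin.get('Position'), pin.get('Type')] + signals

# Hypothetical, simplified fragment to show the shape of the data:
example = '''<Mcu xmlns="http://mcd.rou.st.com/modules.php?name=mcu">
  <Pin Name="VBAT" Position="1" Type="Power"/>
  <Pin Name="PD0-OSC_IN" Position="5" Type="I/O">
    <Signal Name="GPIO"/><Signal Name="RCC_OSC_IN"/>
  </Pin>
</Mcu>'''

for row in pin_rows(example):
    print(','.join(row))
```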
Update: It seems the XML files are now also available separately on github: https://github.com/STMicroelectronics/STM32_open_pin_data, and some of the TODOs in my script might be solvable.
For this blog, I wanted to include some nicely-formatted formulas. An easy way to do so, is to use MathJax, a javascript-based math processor where you can write formulas using (among others) the often-used Tex math syntax.
However, I use Markdown to write my blogposts and including formulas directly in the text can be problematic, because Markdown might interpret parts of my math expressions as Markdown and transform them before MathJax has had a chance to look at them. In this post, I present a customized MathJax configuration that solves this problem in a reasonably elegant way.
An obvious solution is to put the math expression in Markdown code blocks (or inline code using backticks), but by default MathJax does not process these. MathJax can be reconfigured to also typeset the contents of `<code>` and/or `<pre>` elements, but since actual code will likely contain parts that look like math expressions, this will likely cause your code to be messed up.
This problem was described in more detail by Yihui Xie in a blogpost, along with a solution that preprocesses the DOM to look for `<code>` tags that start and end with a math expression start and end marker, and if so strips away the `<code>` tag so that MathJax will process the expression later. Additionally, he translates any expression contained in single dollar signs (which is the traditional Tex way to specify inline math) to an expression wrapped in `\(` and `\)`, which is the only way to specify inline math in MathJax (single dollars are disabled since they would be too likely to cause false positives).
I considered using his solution, but it explicitly excludes code blocks (which are rendered as a `<pre>` tag containing a `<code>` tag in Markdown), and I wanted to use code blocks for centered math expressions (since that looks better without the backticks in my Markdown source). Also, I did not really like that the script modifies the DOM and has a bunch of regexes that hardcode what a math formula looks like.
So I made an alternative implementation that configures MathJax to behave as intended. This is done by overriding the normal automatic typesetting in the `pageReady` function and instead explicitly typesetting all code tags that contain exactly one math expression.
Unlike the solution by Yihui Xie, this:

 - Only processes `<code>` elements (e.g. no formulas in normal text), because the default typesetting is replaced.
 - Also processes `<code>` elements inside `<pre>` elements (but this can be easily changed using the parent tag check from Yihui Xie's code).
 - Hooks into the `pageReady` event, so the script does not have to be at the end of the HTML page.

You can find the MathJax configuration for this inline at the end of this post. To use it, just put the script tag in your HTML before the MathJax script tag (or see the MathJax docs for other ways).
To use it, just use the normal tex math syntax (using single or double `$` signs) inside a code block (using backticks or an indented block) in any combination. Typically, you would use single `$` delimiters together with backticks for inline math. You'll have to make sure that the code block contains exactly a single MathJax expression (and maybe some whitespace), but nothing else. E.g. this Markdown:
Formulas *can* be inline: `$z = x + y$`.
Renders as: Formulas can be inline: $z = x + y$.
The double `$$` delimiter produces a centered math expression. This works within backticks (like Yihui shows) but I think it looks better in the Markdown if you use an indented block (which Yihui's code does not support). So for example this Markdown (note the indent):
$$a^2 + b^2 = c^2$$
Renders as:
$$a^2 + b^2 = c^2$$
Then you can also use more complex, multiline expressions. This indented block of Markdown:
$$
\begin{vmatrix}
a & b\\
c & d
\end{vmatrix}
=ad-bc
$$
Renders as:
$$
\begin{vmatrix}
a & b\\
c & d
\end{vmatrix}
=ad-bc
$$
Note that to get Markdown to display the above example blocks, i.e. code blocks that start and end with `$$`, without having MathJax process them, I used some literal HTML in my Markdown source. For example, in my blog's markdown source, the first block above literally looks like this:
<pre><code><span></span> $$a^2 + b^2 = c^2$$</code></pre>
Markdown leaves the HTML tags alone, and the empty span ensures that the script below does not process the contents of the code block (since it only processes code blocks where the full contents of the block are valid MathJax code).
So, here is the script that I am now using on this blog:
<script type="text/javascript">
MathJax = {
  options: {
    // Remove <code> tags from the blacklist. Even though we pass an
    // explicit list of elements to process, this blacklist is still
    // applied.
    skipHtmlTags: { '[-]': ['code'] },
  },
  tex: {
    // By default, only \( is enabled for inline math, to prevent false
    // positives. Since we already only process code blocks that contain
    // exactly one math expression and nothing else, it is also fine to
    // use the nicer $...$ construct for inline math.
    inlineMath: { '[+]': [['$', '$']] },
  },
  startup: {
    // This is called on page ready and replaces the default MathJax
    // "typeset entire document" code.
    pageReady: function() {
      var codes = document.getElementsByTagName('code');
      var to_typeset = [];
      for (var i = 0; i < codes.length; i++) {
        var code = codes[i];
        // Only allow code elements that just contain text, no subelements
        if (code.childElementCount === 0) {
          var text = code.textContent.trim();
          var inputs = MathJax.startup.getInputJax();
          // For each of the configured input processors, see if the
          // text contains a single math expression that encompasses the
          // entire text. If so, typeset it.
          for (var j = 0; j < inputs.length; j++) {
            // Only use string input processors (e.g. tex, as opposed to
            // node processors e.g. mml that are more tricky to use).
            if (inputs[j].processStrings) {
              var matches = inputs[j].findMath([text]);
              if (matches.length == 1 && matches[0].start.n == 0 && matches[0].end.n == text.length) {
                // Trim off any trailing newline, which otherwise stays
                // around, adding empty visual space below
                code.textContent = text;
                to_typeset.push(code);
                code.classList.add("math");
                if (code.parentNode.tagName == "PRE")
                  code.parentNode.classList.add("math");
                break;
              }
            }
          }
        }
      }
      // Code blocks to replace are collected and then typeset in one go,
      // asynchronously in the background
      MathJax.typesetPromise(to_typeset);
    },
  },
};
</script>
Update 2020-08-05: Script updated to run typesetting only once, and use `typesetPromise` to run it asynchronously, as suggested by Raymond Zhao in the comments below.
Update 2020-08-20: Added some Markdown examples (the same ones Yihui Xie used), as suggested by Troy.
Update 2021-09-03: Clarified how the script decides which code blocks to process and which to leave alone.
Hey, this script works great! Just one thing: performance isn't the greatest. I noticed that upon every call to MathJax.typeset, MathJax renders the whole document. It's meant to be passed an array of all the elements, not called individually.
So what I did was I put all of the code elements into an array, and then called MathJax.typesetPromise (better than just typeset) on that array at the end. This runs much faster, especially with lots of LaTeX expressions on one page.
Hey Raymond, excellent suggestion. I've updated the script to make these changes, works perfect. Thanks!
What a great article! Congratulations :)
Can you please add a typical math snippet from one of your .md files? (Maybe the same as the one Yihui Xie uses in his post.)
I would like to see how you handle inline/display math in your markdown.
Hey Troy, good point, examples would really clarify the post. I've added some (the ones from Yihui Xie indeed) that show how to use this from Markdown. Hope this helps!
Hi, this code looks pretty great! One thing I'm not sure about is how do you differentiate latex code block and normal code block so that they won't be rendered to the same style?
Hi Xiao, thanks for your comment. I'm not sure I understand your question completely, but what happens is that both the math/latex block and a regular code block are processed by markdown into a `<pre><code>...</code></pre>` block. Then the script shown above picks out all `<code>` blocks, and passes the content of each to MathJax for processing.

Normally MathJax finds any valid math expression (delimited by e.g. `$$` or `$`) and processes it, but my script has some extra checks to only apply MathJax processing if the entire `<code>` block is a single MathJax block (in other words, if it starts and ends with `$$` or `$`).

This means that regular code blocks will not be MathJax processed and stay regular code blocks. One exception is when a code block starts and ends with e.g. `$$` but you still do not want it processed (like the Markdown-version of the examples I show above), but I applied a little hack with literal HTML tags and an empty `<span>` for that (see above, I've updated the post to show how I did this).

Or maybe your question is more about actually styling regular code blocks vs math blocks? For that, the script adds a `math` class to the `<code>` and `<pre>` tags, which I then use in my CSS to slightly modify the styling (just removing the grey background for math blocks; all other styling is handled by MathJax already, it seems).
Does that answer your question?
Or: Forcing Linux to use the USB HID driver for a non-standards-compliant USB keyboard.
For an interactive art installation by the Spullenmannen, a friend asked me to have a look at an old paint mixing terminal that he wanted to use. The terminal is essentially a small computer, in a nice industrial-looking sealed casing, with a (touch?) screen, keyboard and touchpad. It was by "Lacour" and I think has been used to control paint mixing machines.
They had already gotten Linux running on the system, but could not get the keyboard to work and asked me if I could have a look.
The keyboard did work in the BIOS and in grub (which also uses the BIOS), so we knew the hardware worked. Also, the BIOS seemed pretty standard, so it was unlikely that the keyboard used some very non-standard protocol, and I guessed that this was a matter of telling Linux which driver to use and/or where to find the device.
Inside the machine, it seemed the keyboard and touchpad were separate devices, controlled by some off-the-shelf microcontroller chip (probably with some custom software inside). These devices were connected to the main motherboard using a standard 10-pin expansion header intended for external USB ports, so it seemed likely that they were USB devices.
And indeed, looking through the lsusb output I noticed two unknown devices in the list:
# lsusb
Bus 002 Device 003: ID ffff:0001
Bus 002 Device 002: ID 0000:0003
(...)
These have USB vendor ids of 0x0000 and 0xffff, which I'm pretty sure are not official USB-consortium-assigned identifiers (probably invalid or reserved even), so perhaps that's why Linux is not using these properly?
Running lsusb with the --tree option allows seeing the physical port structure, but also shows which drivers are bound to which interfaces:
# lsusb --tree
/: Bus 02.Port 1: Dev 1, Class=root_hub, Driver=uhci_hcd/2p, 12M
|__ Port 1: Dev 2, If 0, Class=Human Interface Device, Driver=usbhid, 12M
|__ Port 2: Dev 3, If 0, Class=Human Interface Device, Driver=, 12M
(...)
This shows that the keyboard (Dev 3) indeed has no driver, but the touchpad (Dev 2) is already bound to usbhid. And indeed, running cat /dev/input/mice and then moving over the touchpad shows that some output is being generated, so the touchpad was already working.
Looking at the detailed USB descriptors for these devices, shows that they are both advertised as supporting the HID interface (Human Interface Device), which is the default protocol for keyboards and mice nowadays:
# lsusb -d ffff:0001 -v
Bus 002 Device 003: ID ffff:0001
Device Descriptor:
bLength 18
bDescriptorType 1
bcdUSB 2.00
bDeviceClass 255 Vendor Specific Class
bDeviceSubClass 0
bDeviceProtocol 0
bMaxPacketSize0 8
idVendor 0xffff
idProduct 0x0001
bcdDevice 0.01
iManufacturer 1 Lacour Electronique
iProduct 2 ColorKeyboard
iSerial 3 SE.010.H
(...)
Interface Descriptor:
bLength 9
bDescriptorType 4
bInterfaceNumber 0
bAlternateSetting 0
bNumEndpoints 1
bInterfaceClass 3 Human Interface Device
bInterfaceSubClass 1 Boot Interface Subclass
bInterfaceProtocol 1 Keyboard
(...)
# lsusb -d 0000:0003 -v
Bus 002 Device 002: ID 0000:0003
Device Descriptor:
bLength 18
bDescriptorType 1
bcdUSB 2.00
bDeviceClass 0
bDeviceSubClass 0
bDeviceProtocol 0
bMaxPacketSize0 8
idVendor 0x0000
idProduct 0x0003
bcdDevice 0.00
iManufacturer 1 Lacour Electronique
iProduct 2 Touchpad
iSerial 3 V2.0
(...)
Interface Descriptor:
bLength 9
bDescriptorType 4
bInterfaceNumber 0
bAlternateSetting 0
bNumEndpoints 1
bInterfaceClass 3 Human Interface Device
bInterfaceSubClass 1 Boot Interface Subclass
bInterfaceProtocol 2 Mouse
(...)
So, that should make it easy to get the keyboard working: Just make sure the usbhid driver is bound to it and that driver will be able to figure out what to do based on these descriptors. However, apparently something is preventing this binding from happening by default.
Looking back at the USB descriptors above, one interesting difference is that the keyboard has bDeviceClass set to "Vendor Specific", whereas the touchpad has it set to 0, which means "Look at interface descriptors". So that seems the most likely reason why the keyboard is not working, since "Vendor Specific" essentially means that the device might not adhere to any of the standard USB protocols and the kernel will probably not start using this device unless it knows what kind of device it is based on the USB vendor and product id (but since those are invalid, these are unlikely to be listed in the kernel).
usbhid
So, we need to bind the keyboard to the usbhid driver. I know of two ways to do so, both through sysfs.
You can assign extra USB vid/pid pairs to a driver through the new_id sysfs file. In this case, this did not work somehow:
# echo ffff:0001 > /sys/bus/usb/drivers/usbhid/new_id
bash: echo: write error: Invalid argument
At this point, I should have stopped and looked up the right syntax used for new_id, since this was actually the right approach, but I was using the wrong syntax (see below). Instead, I tried some other stuff first.
The second way to bind a driver is to specify a specific device, identified by its sysfs identifier:
# echo 2-2:1.0 > /sys/bus/usb/drivers/usbhid/bind
bash: echo: write error: No such device
The device identifier used here (2-2:1.0) is a directory name below /sys/bus/usb/devices and is, I think, built like <bus>-<port>:1.<interface> (where the 1 might be the configuration?). You can find this info in the lsusb --tree output:
/: Bus 02.Port 1: Dev 1, Class=root_hub, Driver=uhci_hcd/2p, 12M
|__ Port 2: Dev 3, If 0, Class=Human Interface Device, Driver=, 12M
I knew that the syntax I used for the device id was correct, since I could use it to unbind and rebind the usbhid module from the touchpad. I suspect that there is some probe mechanism in the usbhid driver that runs after you bind the driver which tests the device to see if it is compatible, and that mechanism rejects it.
As I usually do when I cannot get something to work, I dive into the source code. I knew that Linux device/driver association usually works with a driver-specific matching table (that tells the underlying subsystem, such as the usb subsystem in this case, which devices can be handled by a driver) or probe function (which is a bit of driver-specific code that can be called by the kernel to probe whether a device is compatible with a driver). There is also configuration based on Device Tree, but AFAIK this is only used in embedded platforms, not on x86.
Looking at the usbhid_probe() and usb_kbd_probe() functions, I did not see any conditions that would not be fulfilled by this particular USB device. The match table for usbhid also only matches the interface class and not the device class. The same goes for the modules.alias file, which I read might also be involved (though I am not sure how):
# cat /lib/modules/*/modules.alias|grep usbhid
alias usb:v*p*d*dc*dsc*dp*ic03isc*ip*in* usbhid
So, the failing check must be at a lower level, probably in the usb subsystem.
Digging a bit further, I found the usb_match_one_id_intf() function, which is the core of matching USB drivers to USB device interfaces. And indeed, it says:
/* The interface class, subclass, protocol and number should never be
* checked for a match if the device class is Vendor Specific,
* unless the match record specifies the Vendor ID. */
So, the entry in the usbhid table is being ignored since it matches only the interface, while the device class is "Vendor Specific". But how to fix this?
A little bit upwards in the call stack is a bit of code that matches a driver to a USB device or interface. This has two sources: the static table from the driver source code, and a dynamic table that can be filled with (hey, we know this part!) the new_id file in sysfs. So that suggests that if we can get an entry into this dynamic table that matches the vendor id, it should work even with a "Vendor Specific" device class.
new_id
Looking further at how this dynamic table is filled, I found the code that handles writes to new_id, and it parses its input like this:
fields = sscanf(buf, "%x %x %x %x %x", &idVendor, &idProduct, &bInterfaceClass, &refVendor, &refProduct);
In other words, it expects space-separated values, rather than just a colon-separated vid:pid pair. Reading on in the code shows that only the first two (vid/pid) are required; the rest is optional. Trying that actually works right away:
# echo ffff 0001 > /sys/bus/usb/drivers/usbhid/new_id
# dmesg
(...)
[ 5011.088134] input: Lacour Electronique ColorKeyboard as /devices/pci0000:00/0000:00:1d.1/usb2/2-2/2-2:1.0/0003:FFFF:0001.0006/input/input16
[ 5011.150265] hid-generic 0003:FFFF:0001.0006: input,hidraw3: USB HID v1.11 Keyboard [Lacour Electronique ColorKeyboard] on usb-0000:00:1d.1-2/input0
After this, I found I can now use the unbind file to unbind the usbhid driver again, and the bind file to rebind it. So it seems that using bind indeed still goes through the probe/match code, which previously failed but now, with the entry in the dynamic table, works.
So nice that it works, but this dynamic table will be lost on a reboot. How to make it persistent? I could just drop this particular line into the /etc/rc.local startup script, but that does not feel so elegant (it would probably work, since it only needs the usbhid module to be loaded and should work even when the USB device is not known/enumerated yet). However, as suggested by this post, you can also use udev to run this command the moment the USB device is "added" (i.e. enumerated by the kernel). To do so, simply drop a file in /etc/udev/rules.d:
$ cat /etc/udev/rules.d/99-keyboard.rules
# Integrated USB keyboard has invalid USB VIDPID and also has bDeviceClass=255,
# causing the hid driver to ignore it. This writes to sysfs to let the usbhid
# driver match the device on USB VIDPID, which overrides the bDeviceClass ignore.
# See also:
# https://unix.stackexchange.com/a/165845
# https://github.com/torvalds/linux/blob/bf3bd966dfd7d9582f50e9bd08b15922197cd277/drivers/usb/core/driver.c#L647-L656
# https://github.com/torvalds/linux/blob/3039fadf2bfdc104dc963820c305778c7c1a6229/drivers/hid/usbhid/hid-core.c#L1619-L1623
ACTION=="add", ATTRS{idVendor}=="ffff", ATTRS{idProduct}=="0001", RUN+="/bin/sh -c 'echo ffff 0001 > /sys/bus/usb/drivers/usbhid/new_id'"
And with that, the keyboard works automatically at startup. Nice :-)
For a customer, I've been looking at RS-485 and MODBUS, two related protocols for transmitting data over longer distances, and the various Arduino libraries that exist to work with them.
They have been working on a project consisting of multiple Arduino boards that have to talk to each other to synchronize their state. Until now, they have been using I²C, but found that this protocol is quite susceptible to noise when used over longer distances (1-2m here). Combined with some limitations in the AVR hardware and a lack of error handling in the Arduino library that can cause the software to lock up in the face of noise (see also this issue report), this makes I²C a bad choice in such environments.
So, I needed something more reliable. This should be a solved problem, right?
A commonly used alternative, also in many industrial settings, are RS-485 connections. This is essentially an asynchronous serial connection (e.g. like an UART or RS-232 serial port), except that it uses differential signalling and is a multipoint bus. Differential signalling means two inverted copies of the same signal are sent over two impedance-balanced wires, which allows the receiver to cleverly subtract both signals to cancel out noise (this is also what ethernet and professional audio signals do). Multipoint means that there can be more than two devices on the same pair of wires, provided that they do not transmit at the same time. When combined with shielded and twisted wire, this should produce a very reliable connection over long lengths (up to 1000m should be possible).
However, RS-485 by itself is not everything: It just specifies the physical layer (the electrical connections, or how to send data), but does not specify any format for the data, nor any way to prevent multiple devices from talking at the same time. For this, you need a data link or arbitration protocol running on top of RS-485.
A quick look around shows that MODBUS is a very commonly used protocol on top of RS-485 (but also TCP/IP or other links) that handles the data link layer (how to send data and when to send it). This part is simple: there is a single master that initiates all communication, and multiple slaves that only reply when asked something. Each slave has an address (that must be configured manually beforehand); the master needs no address.
MODBUS also specifies a simple protocol that can be used to read and write addressed bits ("Coils" and "Inputs") and addressed registers, which would be pretty perfect for the usecase I'm looking at now.
So, I have some RS-485 transceivers (which translate regular UART to RS-485) and just need some Arduino library to handle the MODBUS protocol for me. A quick Google search shows there are quite a few of them (never a good sign). A closer look shows that none of them are really good...
There are some more detailed notes per library below, but overall I see the following problems:
- Most libraries are hardcoded to use HardwareSerial instances (and sometimes also SoftwareSerial instances), but only two libraries actually support running on arbitrary Stream instances (while this is pretty much the usecase that Stream was introduced for).

Ideally, I would like to see a library:
- That can be configured using a Stream instance and an (optional) tx enable pin.
- That has a separation between the MODBUS application protocol and the RS-485-specific datalink protocol, so it can be extended to other transports (e.g. TCP/IP) as well.
- Where the master has both synchronous (blocking) and asynchronous request methods. The xbee-arduino library, which also does serial request-response handling, would probably serve as a good example of how to combine these in a powerful API.
- Where the slave can have multiple areas defined (e.g. a block of 16 registers starting at address 0x10). Each area can have some memory allocated that will be read or written directly, or a callback function to do the reading or writing. In both cases, a callback that can be called after something was read or written (passing the area pointer and address or something) can be configured too. Areas should probably be allowed to overlap, which also allows having a "fallback" (virtual) area that covers all other addresses. These areas should be modeled as objects that are directly accessible to the sketch, so the sketch can read and write the data without having to do linked-list lookups and without needing to know the area-to-address mapping.
- That supports sending and receiving raw messages as well (to support custom function codes).
- That does not do any heap allocation (or at least allows running with static allocations only). This can typically be done using static (global) variables allocated by the sketch that are connected as a linked list in the library.
I suspect that given my requirements, this would mean starting a new library from scratch (using an existing library as a starting point would always mean significant redesigning, which is probably more work than it's worth). Maybe some parts (e.g. specific things like packet formatting and parsing) can be reused, though.
Of course, I do not really have time for such an endeavor and the customer for which I started looking at this certainly has no budget in this project for such an investment. This means I will probably end up improvising with the MCCI library, or use some completely different or custom protocol instead of MODBUS (though the Arduino library offerings in this area also seem limited...). Maybe CANBus?
However, if you also find yourself in the same situation, maybe my above suggestions can serve as inspiration (and if you need this library and have some budget to get it written, feel free to contact me).
So, here's the list of libraries I found.
.Update 2020-06-26: Added smarmengol/Modbus-Master-Slave-for-Arduino to the list
Update 2020-10-07: Added asukiaaa/arduino-rs485 to the list
Hi, did you see this one? github.com/smarmengol/Modbus-Master-Slave-for-Arduino
Took me a while to recognize it, but I had indeed seen it before: the MCCI library is a fork of that Smarmengol library.
However, it seems that since I wrote this post, a little new development happened on the Smarmengol version, which now allows using arbitrary Streams to be used, which is useful. It still suffers from some of the other problems, but I have now included it separately in the list.
Thanks for pointing it out!
Check out the omzlo NoCAN plattform. It uses CAN and has already implemented a pub/sub functionality that is great for simple communication between nodes. It requires the NoCAN boards and a Raspberry Pi as a master.
Written for the ESP, asynchronous, and able to handle multiple sockets as a modbus TCP server: github.com/eModbus/eModbus. I use this together with your library for reading a dsmr, and serving the data as a modbus server.
Hi, Good overview!
Hereby some more suggestions:
Slave library:
github.com/epsilonrt/modbus-arduino -- This library is a further development of the andresarmento slave library.
Master libraries:
github.com/Hantuch/ModbusMaster -- fork of the master library of 4-20ma, adding the possibility to address multiple slaves
github.com/EnviroDIY/SensorModbusMaster -- this library has very convenient functions, but is limited to 1 slave; works with softwareserial
github.com/CMB27/ModbusRTUMaster -- a very new library, has easy to use functions; supports multiple slaves and works with softwareserial
(I've only tested the slave library so far)
Comments are closed for this story.
Thanks for the overview, it is also a very helpful collection of links. I also ordered the Milk-V Duo 256, I will see what can be done with it at this stage. Have a nice day!
Would be cool to do a custom pcb using the same design