"Past performance is no guarantee of future results"

These are the ramblings of Matthijs Kooijman, concerning the software he hacks on, hobbies he has and occasionally his personal life.

Script to generate pinout listings for STM32 MCUs

Recently, I've been working with STM32 chips for a few different projects and customers. These chips are quite flexible in their pin assignments: most peripherals (e.g. an SPI or UART block) can usually be mapped onto two or often even more different pins. This gives great flexibility (both during board design for single-purpose boards and later for more general purpose boards), but it also makes it harder to decide on and document the pinout of a design.

ST offers STM32CubeMX, a software tool that helps with designing around an STM32 MCU, including deciding on pinouts and generating relevant code for the system as well. It is probably a powerful tool, but it is a bit heavy to install, and AFAICS it does not really support general purpose boards (where you would choose between different supported pinouts at runtime or compile time) well.

So in the past, I've used a trusted tool to support this process: A spreadsheet that lists all pins and all their supported functions, where you can easily annotate each pin with all the data you want and use colors and formatting to mark functions as needed to create some structure in the complexity.

However, generating such a pinout spreadsheet wasn't particularly easy. The tables from the datasheet cannot be easily copy-pasted (and the datasheet has the alternate and additional functions in two separate tables), and the STM32CubeMX software only seems to be able to export a pinout table with alternate functions, not additional functions. So we previously ended up using the CubeMX-generated table and then adding the additional functions manually, which is annoying and error-prone.

So I dug around in the CubeMX data files a bit and found that it has an XML file for each STM32 chip that lists all pins with all their functions (both alternate and additional). So I wrote a quick Python script that parses such an XML file and generates a CSV file. The script just needs Python 3 and has no additional dependencies.

To run this script, you will need the XML file for the MCU you are interested in from inside the CubeMX installation. Currently, these files only seem to be distributed by ST as part of CubeMX. I did find one third-party github repo with the same data, but that hadn't been updated in nearly two years. However, once you generate the pin listing and publish it (e.g. in a spreadsheet), others can of course work with it without needing CubeMX or this script anymore.
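For the curious, the core of the script's approach can be sketched in a few lines of Python. This is a simplified sketch, not the actual script: the element and attribute names used here (Pin, Name, Position, Type, Signal) are assumptions about the CubeMX XML schema and may not match the real files exactly.

```python
# Simplified sketch of the XML-to-CSV conversion. The tag and attribute
# names (Pin, Name, Position, Type, Signal) are assumptions about the
# CubeMX schema; the real script in my "scripts" repository is leading.
import csv
import sys
import xml.etree.ElementTree as ET

def pin_rows(source):
    """Yield one CSV row per pin: name, position, type, then all signals."""
    root = ET.parse(source).getroot()
    # The CubeMX files use an XML namespace, so match on the tag suffix
    for pin in root.iter():
        if pin.tag == 'Pin' or pin.tag.endswith('}Pin'):
            signals = [s.get('Name') for s in pin if s.tag.endswith('Signal')]
            yield [pin.get('Name'), pin.get('Position'), pin.get('Type')] + signals

if __name__ == '__main__' and len(sys.argv) > 1:
    writer = csv.writer(sys.stdout)
    writer.writerow(['name', 'pin', 'type'])
    for row in pin_rows(sys.argv[1]):
        writer.writerow(row)
```

The real script handles more details (ordering, additional function quirks), but the shape of the conversion is this simple.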

For example, you can run this script as follows:

$ ./stm32pinout.py /usr/local/cubemx/db/mcu/STM32F103CBUx.xml
name,pin,type
VBAT,1,Power
PC13-TAMPER-RTC,2,I/O,GPIO,EXTI,EVENTOUT,RTC_OUT,RTC_TAMPER
PC14-OSC32_IN,3,I/O,GPIO,EXTI,EVENTOUT,RCC_OSC32_IN
PC15-OSC32_OUT,4,I/O,GPIO,EXTI,ADC1_EXTI15,ADC2_EXTI15,EVENTOUT,RCC_OSC32_OUT
PD0-OSC_IN,5,I/O,GPIO,EXTI,RCC_OSC_IN
(... more output truncated ...)

The script is not perfect yet (it does not tell you which functions correspond to which AF numbers, and the ordering of functions could be improved; see the TODO comments in the code), but it gets the basic job done well. You can find the script in my "scripts" repository on github.

Using MathJax math expressions in Markdown

For this blog, I wanted to include some nicely-formatted formulas. An easy way to do so is to use MathJax, a javascript-based math processor where you can write formulas using (among others) the often-used TeX math syntax.

However, I use Markdown to write my blogposts, and including formulas directly in the text can be problematic, because Markdown might interpret part of my math expressions as Markdown and transform them before MathJax has had a chance to look at them. In this post, I present a customized MathJax configuration that solves this problem in a reasonably elegant way.

An obvious solution is to put the math expression in Markdown code blocks (or inline code using backticks), but by default MathJax does not process these. MathJax can be reconfigured to also typeset the contents of <code> and/or <pre> elements, but since actual code will likely contain parts that look like math expressions, this would likely cause your code to be messed up. This problem was described in more detail by Yihui Xie in a blogpost, along with a solution that preprocesses the DOM to look for <code> tags that start and end with a math expression start and end marker, and if so strips away the <code> tag so that MathJax will process the expression later.
Additionally, he translates any expression contained in single dollar signs (which is the traditional TeX way to specify inline math) to an expression wrapped in \( and \), which is the only way to specify inline math in MathJax by default (single dollars are disabled since they would be too likely to cause false positives).

# Improved solution

I considered using his solution, but it explicitly excludes code blocks (which are rendered as a <pre> tag containing a <code> tag in Markdown), and I wanted to use code blocks for centered math expressions (since that looks better without the backticks in my Markdown source). Also, I did not really like that the script modifies the DOM and has a bunch of regexes that hardcode what a math formula looks like.

So I made an alternative implementation that configures MathJax to behave as intended. This is done by overriding the normal automatic typesetting in the pageReady function and instead explicitly typesetting all code tags that contain exactly one math expression. Unlike the solution by Yihui Xie, this:

• Lets MathJax decide what is and is not a math expression. This means that it will also work for other MathJax input plugins, or with a non-standard tex input configuration.
• Only typesets string-based input types (e.g. TeX but not MathML), since I did not try to figure out how the node-based inputs work.
• Does not typeset anything except for these selected <code> elements (e.g. no formulas in normal text), because the default typesetting is replaced.
• Also typesets formulas in <code> elements inside <pre> elements (but this can be easily changed using the parent tag check from Yihui Xie's code).
• Enables typesetting of single-dollar inline math expressions by changing the MathJax config instead of modifying the delimiters in the DOM. This will not produce false positive matches in regular text, since typesetting is only done on selected code tags anyway.
• Runs from the MathJax pageReady event, so the script does not have to be at the end of the HTML page.

You can find the MathJax configuration for this inline at the end of this post. To use it, just put the script tag in your HTML before the MathJax script tag (or see the MathJax docs for other ways).

# Examples

To use it, just use the normal TeX math syntax (using single or double $ signs) inside a code block (using backticks or an indented block) in any combination. Typically, you would use single $ delimiters together with backticks for inline math. You'll have to make sure that the code block contains exactly a single MathJax expression (and maybe some whitespace), but nothing else.

E.g. this Markdown:

    Formulas *can* be inline: `$z = x + y$`.

Renders as: Formulas can be inline: $z = x + y$.

The double $$ delimiter produces a centered math expression. This works within backticks (like Yihui shows), but I think it looks better in the Markdown if you use an indented block (which Yihui's code does not support). So for example this Markdown (note the indent):

    $$a^2 + b^2 = c^2$$

Renders as:

$$a^2 + b^2 = c^2$$

Then you can also use more complex, multiline expressions. This indented block of Markdown:

    $$
    \begin{vmatrix}
      a & b\\
      c & d
    \end{vmatrix}
    =ad-bc
    $$

Renders as:

$$
\begin{vmatrix}
a & b\\
c & d
\end{vmatrix}
=ad-bc
$$

Note that to get Markdown to display the above example blocks (i.e. code blocks that start and end with $$) without having MathJax process them, I used some literal HTML in my Markdown source. For example, in my blog's markdown source, the first block above literally looks like this:

    <pre><code><span></span>
    $$a^2 + b^2 = c^2$$</code></pre>

Markdown leaves the HTML tags alone, and the empty span ensures that the script below does not process the contents of the code block (since it only processes code blocks where the full contents of the block are valid MathJax code).
# The code

So, here is the script that I am now using on this blog:

<script type="text/javascript">
MathJax = {
  options: {
    // Remove <code> tags from the blacklist. Even though we pass an
    // explicit list of elements to process, this blacklist is still
    // applied.
    skipHtmlTags: { '[-]': ['code'] },
  },
  tex: {
    // By default, only \( is enabled for inline math, to prevent false
    // positives. Since we already only process code blocks that contain
    // exactly one math expression and nothing else, it is also fine to
    // use the nicer $...$ construct for inline math.
    inlineMath: { '[+]': [['$', '$']] },
  },
  startup: {
    // This is called on page ready and replaces the default MathJax
    // "typeset entire document" code.
    pageReady: function() {
      var codes = document.getElementsByTagName('code');
      var to_typeset = [];
      for (var i = 0; i < codes.length; i++) {
        var code = codes[i];
        // Only allow code elements that just contain text, no subelements
        if (code.childElementCount === 0) {
          var text = code.textContent.trim();
          inputs = MathJax.startup.getInputJax();
          // For each of the configured input processors, see if the
          // text contains a single math expression that encompasses the
          // entire text. If so, typeset it.
          for (var j = 0; j < inputs.length; j++) {
            // Only use string input processors (e.g. tex, as opposed to
            // node processors e.g. mml that are more tricky to use).
            if (inputs[j].processStrings) {
              matches = inputs[j].findMath([text]);
              if (matches.length == 1 && matches[0].start.n == 0 && matches[0].end.n == text.length) {
                // Trim off any trailing newline, which otherwise stays
                // around, adding empty visual space below
                code.textContent = text;
                to_typeset.push(code);
                code.classList.add("math");
                if (code.parentNode.tagName == "PRE")
                  code.parentNode.classList.add("math");
                break;
              }
            }
          }
        }
      }
      // Code blocks to replace are collected and then typeset in one go,
      // asynchronously in the background
      MathJax.typesetPromise(to_typeset);
    },
  },
};
</script>

Update 2020-08-05: Script updated to run typesetting only once, and to use typesetPromise to run it asynchronously, as suggested by Raymond Zhao in the comments below.

Update 2020-08-20: Added some Markdown examples (the same ones Yihui Xie used), as suggested by Troy.

Update 2021-09-03: Clarified how the script decides which code blocks to process and which to leave alone.

Comments

Raymond Zhao wrote at 2020-07-29 22:37

Hey, this script works great! Just one thing: performance isn't the greatest. I noticed that upon every call to MathJax.typeset, MathJax renders the whole document. It's meant to be passed an array of all the elements, not called individually.

So what I did was I put all of the code elements into an array, and then called MathJax.typesetPromise (better than just typeset) on that array at the end. This runs much faster, especially with lots of LaTeX expressions on one page.

Matthijs Kooijman wrote at 2020-08-05 08:28

Hey Raymond, excellent suggestion. I've updated the script to make these changes, works perfect. Thanks!

Troy wrote at 2020-08-19 20:53

What a great article! Congratulations :)

Can you please add a typical math snippet from one of your .md files? (Maybe the same as the one Yihui Xie uses in his post.) I would like to see how you handle inline/display math in your markdown.
Matthijs Kooijman wrote at 2020-08-20 16:47

Hey Troy, good point, examples would really clarify the post. I've added some (the ones from Yihui Xie indeed) that show how to use this from Markdown. Hope this helps!

Xiao wrote at 2021-09-03 04:09

Hi, this code looks pretty great! One thing I'm not sure about is how do you differentiate a latex code block from a normal code block, so that they won't be rendered in the same style?

Matthijs Kooijman wrote at 2021-09-03 13:09

Hi Xiao, thanks for your comment. I'm not sure I understand your question completely, but what happens is that both the math/latex block and a regular code block are processed by markdown into a <pre><code>...</code></pre> block. Then the script shown above picks out all <code> blocks and passes the content of each to MathJax for processing. Normally MathJax finds any valid math expression (delimited by e.g. $$ or \() and processes it, but my script has some extra checks to only apply MathJax processing if the entire <code> block is a single MathJax block (in other words, if it starts and ends with $$ or $).

This means that regular code blocks will not be MathJax-processed and stay regular code blocks. One exception is when a code block starts and ends with e.g. $$ but you still do not want it processed (like the Markdown version of the examples I show above); for that I applied a little hack with literal HTML tags and an empty <span> (see above, I've updated the post to show how I did this).

Or maybe your question is more about actually styling regular code blocks vs math blocks? For that, the script adds a math class to the <code> and <pre> tags, which I then use in my CSS to slightly modify the styling (just removing the grey background for math blocks; all other styling is handled by MathJax already, it seems).


Making an old paint-mixing terminal keyboard work with Linux

Or: Forcing Linux to use the USB HID driver for a non-standards-compliant USB keyboard.

For an interactive art installation by the Spullenmannen, a friend asked me to have a look at an old paint mixing terminal that he wanted to use. The terminal is essentially a small computer, in a nice industrial-looking sealed casing, with a (touch?) screen, keyboard and touchpad. It was made by "Lacour" and I think it has been used to control paint mixing machines.

They had already gotten Linux running on the system, but could not get the keyboard to work and asked me if I could have a look.

The keyboard did work in the BIOS and in grub (which also uses the BIOS), so we knew the hardware worked. Also, the BIOS seemed pretty standard, so it was unlikely that the keyboard used some very non-standard protocol, and I guessed that this was a matter of telling Linux which driver to use and/or where to find the device.

Inside the machine, it seemed the keyboard and touchpad were separate devices, each controlled by some off-the-shelf microcontroller chip (probably with some custom software inside). These devices were connected to the main motherboard using a standard 10-pin expansion header intended for external USB ports, so it seemed likely that they were USB devices.

## Closer look at the USB devices

And indeed, looking through the lsusb output, I noticed two unknown devices in the list:

# lsusb
Bus 002 Device 003: ID ffff:0001
Bus 002 Device 002: ID 0000:0003
(...)


These have USB vendor ids of 0x0000 and 0xffff, which I'm pretty sure are not official USB-consortium-assigned identifiers (they are probably even invalid or reserved), so perhaps that's why Linux was not using these devices properly?

Running lsusb with the --tree option allows seeing the physical port structure, but also shows which drivers are bound to which interfaces:

# lsusb --tree
/:  Bus 02.Port 1: Dev 1, Class=root_hub, Driver=uhci_hcd/2p, 12M
|__ Port 1: Dev 2, If 0, Class=Human Interface Device, Driver=usbhid, 12M
|__ Port 2: Dev 3, If 0, Class=Human Interface Device, Driver=, 12M
(...)


This shows that the keyboard (Dev 3) indeed has no driver, but the touchpad (Dev 2) is already bound to usbhid. And indeed, running cat /dev/input/mice and then moving over the touchpad shows that some output is being generated, so the touchpad was already working.

Looking at the detailed USB descriptors for these devices shows that they both advertise support for the HID (Human Interface Device) interface, which is the default protocol for keyboards and mice nowadays:

# lsusb -d ffff:0001 -v

Bus 002 Device 003: ID ffff:0001
Device Descriptor:
bLength                18
bDescriptorType         1
bcdUSB               2.00
bDeviceClass          255 Vendor Specific Class
bDeviceSubClass         0
bDeviceProtocol         0
bMaxPacketSize0         8
idVendor           0xffff
idProduct          0x0001
bcdDevice            0.01
iManufacturer           1 Lacour Electronique
iProduct                2 ColorKeyboard
iSerial                 3 SE.010.H
(...)
Interface Descriptor:
bLength                 9
bDescriptorType         4
bInterfaceNumber        0
bAlternateSetting       0
bNumEndpoints           1
bInterfaceClass         3 Human Interface Device
bInterfaceSubClass      1 Boot Interface Subclass
bInterfaceProtocol      1 Keyboard
(...)

# lsusb -d 0000:0003 -v
Bus 002 Device 002: ID 0000:0003
Device Descriptor:
bLength                18
bDescriptorType         1
bcdUSB               2.00
bDeviceClass            0
bDeviceSubClass         0
bDeviceProtocol         0
bMaxPacketSize0         8
idVendor           0x0000
idProduct          0x0003
bcdDevice            0.00
iManufacturer           1 Lacour Electronique
iSerial                 3 V2.0
(...)
Interface Descriptor:
bLength                 9
bDescriptorType         4
bInterfaceNumber        0
bAlternateSetting       0
bNumEndpoints           1
bInterfaceClass         3 Human Interface Device
bInterfaceSubClass      1 Boot Interface Subclass
bInterfaceProtocol      2 Mouse
(...)


So, that should make it easy to get the keyboard working: Just make sure the usbhid driver is bound to it and that driver will be able to figure out what to do based on these descriptors. However, apparently something is preventing this binding from happening by default.

Looking back at the USB descriptors above, one interesting difference is that the keyboard has bDeviceClass set to "Vendor Specific", whereas the touchpad has it set to 0, which means "look at the interface descriptors". That seems the most likely reason why the keyboard is not working: "Vendor Specific" essentially means that the device might not adhere to any of the standard USB protocols, so the kernel will probably not start using this device unless it knows what kind of device it is based on the USB vendor and product id (but since those are invalid here, they are unlikely to be listed in the kernel).

## Binding to usbhid

So, we need to bind the keyboard to the usbhid driver. I know of two ways to do so, both through sysfs.

You can assign extra USB vid/pid pairs to a driver through the new_id sysfs file. In this case, this did not work somehow:

# echo ffff:0001 > /sys/bus/usb/drivers/usbhid/new_id
bash: echo: write error: Invalid argument


At this point, I should have stopped and looked up the right syntax used for new_id, since this was actually the right approach, but I was using the wrong syntax (see below). Instead, I tried some other stuff first.

The second way to bind a driver is to specify a specific device, identified by its sysfs identifier:

# echo 2-2:1.0 > /sys/bus/usb/drivers/usbhid/bind
bash: echo: write error: No such device


The device identifier used here (2-2:1.0) is the directory name below /sys/bus/usb/devices and is, I think, built like <bus>-<port>:1.<interface> (where the 1 might be the configuration number?). You can find this info in the lsusb --tree output:

/:  Bus 02.Port 1: Dev 1, Class=root_hub, Driver=uhci_hcd/2p, 12M
|__ Port 2: Dev 3, If 0, Class=Human Interface Device, Driver=, 12M


I knew that the syntax I used for the device id was correct, since I could use it to unbind and rebind the usbhid module from the touchpad. I suspect that there is some probe mechanism in the usbhid driver that runs after you bind the driver which tests the device to see if it is compatible, and that mechanism rejects it.

## How does the kernel handle this?

As I usually do when I cannot get something to work, I dive into the source code. I knew that Linux device/driver association usually works with a driver-specific matching table (that tells the underlying subsystem, such as the usb subsystem in this case, which devices can be handled by a driver) or probe function (which is a bit of driver-specific code that can be called by the kernel to probe whether a device is compatible with a driver). There is also configuration based on Device Tree, but AFAIK this is only used in embedded platforms, not on x86.

Looking at the usbhid_probe() and usb_kbd_probe() functions, I did not see any conditions that would not be fulfilled by this particular USB device.

The match table for usbhid also only matches on the interface class, not the device class. The same goes for the modules.alias file, which I read might also be involved (though I am not sure how):

# cat /lib/modules/*/modules.alias|grep usbhid
alias usb:v*p*d*dc*dsc*dp*ic03isc*ip*in* usbhid


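As an aside, these aliases are essentially glob patterns matched against a device's modalias string, so you can replay the match by hand. Note that the modalias value below is reconstructed by hand from the lsusb output above (vendor ffff, product 0001, bcdDevice 0.01, device class ff, interface class/subclass/protocol 03/01/01), so treat it as illustrative:

```python
# Replay the modules.alias glob match by hand, like module loading does.
import fnmatch

# The usbhid line from modules.alias: only ic03 (interface class HID) is
# required, everything else is wildcarded.
usbhid_alias = 'usb:v*p*d*dc*dsc*dp*ic03isc*ip*in*'

# Modalias reconstructed from the descriptors shown earlier (illustrative).
keyboard = 'usb:vFFFFp0001d0001dcFFdsc00dp00ic03isc01ip01in00'

print(fnmatch.fnmatchcase(keyboard, usbhid_alias))  # True
```

So the alias does match our keyboard on interface class alone, which confirms that the refusal must happen at a lower level.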
So, the failing check must be at a lower level, probably in the usb subsystem.

Digging a bit further, I found the usb_match_one_id_intf() function, which is the core of matching USB drivers to USB device interfaces. And indeed, it says:

/* The interface class, subclass, protocol and number should never be
* checked for a match if the device class is Vendor Specific,
* unless the match record specifies the Vendor ID. */


So, the entry in the usbhid table is being ignored since it matches only the interface, while the device class is "Vendor Specific". But how to fix this?

A little bit upwards in the call stack is a bit of code that matches a driver to a USB device or interface. This match has two sources: the static table from the driver source code, and a dynamic table that can be filled through (hey, we know this part!) the new_id file in sysfs. So that suggests that if we can get an entry into this dynamic table that matches the vendor id, it should work even with a "Vendor Specific" device class.

## Back to new_id

Looking further at how this dynamic table is filled, I found the code that handles writes to new_id, and it parses its input like this:

fields = sscanf(buf, "%x %x %x %x %x", &idVendor, &idProduct, &bInterfaceClass, &refVendor, &refProduct);


In other words, it expects space-separated values rather than just a colon-separated vid:pid pair. Reading on in the code shows that only the first two values (vid/pid) are required; the rest are optional. Trying that actually works right away:

# echo ffff 0001 > /sys/bus/usb/drivers/usbhid/new_id
# dmesg
(...)
[ 5011.088134] input: Lacour Electronique ColorKeyboard as /devices/pci0000:00/0000:00:1d.1/usb2/2-2/2-2:1.0/0003:FFFF:0001.0006/input/input16
[ 5011.150265] hid-generic 0003:FFFF:0001.0006: input,hidraw3: USB HID v1.11 Keyboard [Lacour Electronique ColorKeyboard] on usb-0000:00:1d.1-2/input0


After this, I found I could now use the unbind file to unbind the usbhid driver again, and the bind file to rebind it. So it seems that using bind indeed still goes through the probe/match code, which previously failed, but now succeeds thanks to the entry in the dynamic table.
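In hindsight, the parsing behaviour that tripped me up is easy to mimic. This is a rough Python analogue of the kernel's sscanf-based parsing, not the actual kernel code, but it shows why the colon-separated form was rejected with "Invalid argument" while the space-separated form works:

```python
# Rough analogue of the kernel's new_id parsing (the real code uses
# sscanf(buf, "%x %x %x %x %x", ...)): space-separated hex fields, of
# which only the first two (vid and pid) are required.
def parse_new_id(buf):
    fields = []
    for part in buf.split()[:5]:
        try:
            fields.append(int(part, 16))
        except ValueError:
            break  # roughly like sscanf: stop at the first non-hex field
    if len(fields) < 2:
        raise ValueError('Invalid argument')  # the kernel returns -EINVAL
    return fields

print(parse_new_id('ffff 0001'))  # [65535, 1]  (vid 0xffff, pid 0x0001)
```

Feeding it 'ffff:0001' fails the same way the sysfs write did, because the ':' is not valid inside a hex field.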

## Making this persistent

So, nice that it works, but this dynamic table will be lost on a reboot. How to make it persistent? I could just drop this particular command into the /etc/rc.local startup script, but that does not feel so elegant (though it would probably work, since it only needs the usbhid module to be loaded and should work even when the USB device is not known/enumerated yet).

However, as suggested by this post, you can also use udev to run this command at the moment the USB device is "added" (i.e. enumerated by the kernel). To do so, simply drop a file in /etc/udev/rules.d:

$ cat /etc/udev/rules.d/99-keyboard.rules
# Integrated USB keyboard has invalid USB VIDPID and also has bDeviceClass=255,
# causing the hid driver to ignore it. This writes to sysfs to let the usbhid
# driver match the device on USB VIDPID, which overrides the bDeviceClass ignore.
# See also:
# https://unix.stackexchange.com/a/165845
# https://github.com/torvalds/linux/blob/bf3bd966dfd7d9582f50e9bd08b15922197cd277/drivers/usb/core/driver.c#L647-L656
# https://github.com/torvalds/linux/blob/3039fadf2bfdc104dc963820c305778c7c1a6229/drivers/hid/usbhid/hid-core.c#L1619-L1623
ACTION=="add", ATTRS{idVendor}=="ffff", ATTRS{idProduct}=="0001", RUN+="/bin/sh -c 'echo ffff 0001 > /sys/bus/usb/drivers/usbhid/new_id'"

And with that, the keyboard works automatically at startup. Nice :-)

Reliable long-distance Arduino communication: RS485 & MODBUS?

For a customer, I've been looking at RS-485 and MODBUS, two related protocols for transmitting data over longer distances, and at the various Arduino libraries that exist to work with them.

They have been working on a project consisting of multiple Arduino boards that have to talk to each other to synchronize their state. Until now, they have been using I²C, but found that this protocol is quite susceptible to noise when used over longer distances (1-2m here). Combined with some limitations in the AVR hardware and a lack of error handling in the Arduino library that can cause the software to lock up in the face of noise (see also this issue report), this makes I²C a bad choice in such environments.

So, I needed something more reliable. This should be a solved problem, right?

# RS-485

A commonly used alternative, also in many industrial settings, is an RS-485 connection. This is essentially an asynchronous serial connection (e.g. like a UART or RS-232 serial port), except that it uses differential signalling and is a multipoint bus.
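The effect of differential signalling is easy to illustrate numerically: noise that couples equally into both wires (common-mode noise) disappears when the receiver takes the difference of the two copies. A toy model:

```python
# Toy model of differential signalling: both wires pick up the same
# noise, but subtracting them at the receiver cancels it out exactly.
signal = [1, 0, 1, 1, 0]      # idealized bit levels to transmit
noise = [3, -2, 1, 0, 4]      # common-mode noise picked up en route (arbitrary units)

wire_a = [s + n for s, n in zip(signal, noise)]   # non-inverted copy
wire_b = [-s + n for s, n in zip(signal, noise)]  # inverted copy

received = [(a - b) / 2 for a, b in zip(wire_a, wire_b)]
print(received)  # [1.0, 0.0, 1.0, 1.0, 0.0]
```

In practice the cancellation is only as good as the impedance balance of the pair, but this is the core idea.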
Differential signalling means two inverted copies of the same signal are sent over two impedance-balanced wires, which allows the receiver to cleverly subtract both signals to cancel out noise (this is also what ethernet and professional audio equipment do). Multipoint means that there can be more than two devices on the same pair of wires, provided that they do not transmit at the same time. When combined with shielded and twisted wire, this should produce a very reliable connection over long lengths (up to 1000m should be possible).

However, RS-485 by itself is not everything: it just specifies the physical layer (the electrical connections, or how to send data), but does not specify any format for the data, nor any way to prevent multiple devices from talking at the same time. For this, you need a data link or arbitration protocol running on top of RS-485.

# MODBUS

A quick look around shows that MODBUS is a very commonly used protocol on top of RS-485 (but also TCP/IP or other links) that handles the data link layer (how to send data and when to send). This part is simple: there is a single master that initiates all communication, and multiple slaves that only reply when asked something. Each slave has an address (that must be configured manually beforehand); the master needs no address.

MODBUS also specifies a simple protocol that can be used to read and write addressed bits ("Coils" and "Inputs") and addressed registers, which would be pretty perfect for the usecase I'm looking at now.

# Finding an Arduino library

So, I have some RS-485 transceivers (which translate regular UART to RS-485) and just need some Arduino library to handle the MODBUS protocol for me. A quick Google search shows there are quite a few of them (never a good sign). A closer look shows that none of them are really good... There are some more detailed notes per library below, but overall I see the following problems:

• Most libraries are very limited in what serial ports they can use.
Some are hardcoded to a single serial port, some support running on arbitrary HardwareSerial instances (and sometimes also SoftwareSerial instances), but only two libraries actually support running on arbitrary Stream instances (while this is pretty much the usecase that Stream was introduced for).
• All libraries handle writes and reads of coils and registers automatically by updating the relevant memory locations, which is nice. However, none of them actually support notifying the sketch of such reads and writes (one has a return value that indicates that something was read or written, but no details), which means that the sketch must continuously check all register values and update them in the library. It also means that the number of registers/coils is limited by the available RAM; you cannot have virtual registers (e.g. writes and reads that are handled by a function rather than a bit of RAM).
• A lot of them are either always blocking in the master, or require manually parsing replies (or both).

# Writing an Arduino library?

Ideally, I would like to see a library:

- That can be configured using a Stream instance and an (optional) tx enable pin.
- That has a separation between the MODBUS application protocol and the RS-485-specific datalink protocol, so it can be extended to other transports (e.g. TCP/IP) as well.
- Where the master has both synchronous (blocking) and asynchronous request methods. The xbee-arduino library, which also does serial request-response handling, would probably serve as a good example of how to combine these in a powerful API.
- Where the slave can have multiple areas defined (e.g. a block of 16 registers starting at address 0x10). Each area can have some memory allocated that will be read or written directly, or a callback function to do the reading or writing. In both cases, a callback that can be called after something was read or written (passing the area pointer and address or something) can be configured too.
Areas should probably be allowed to overlap, which also allows having a "fallback" (virtual) area that covers all other addresses. These areas should be modeled as objects that are directly accessible to the sketch, so the sketch can read and write the data without having to do linked-list lookups and without needing to know the area-to-address mapping.
- That supports sending and receiving raw messages as well (to support custom function codes).
- That does not do any heap allocation (or at least allows running with static allocations only). This can typically be done using static (global) variables allocated by the sketch that are connected as a linked list in the library.

I suspect that given my requirements, this would mean starting a new library from scratch (using an existing library as a starting point would always mean significant redesigning, which is probably more work than it's worth). Maybe some parts (e.g. specific things like packet formatting and parsing) can be reused, though.

Of course, I do not really have time for such an endeavor, and the customer for which I started looking at this certainly has no budget in this project for such an investment. This means I will probably end up improvising with the MCCI library, or using some completely different or custom protocol instead of MODBUS (though the Arduino library offerings in this area also seem limited...). Maybe CANBus? However, if you also find yourself in the same situation, maybe my above suggestions can serve as inspiration (and if you need this library and have some budget to get it written, feel free to contact me).

# Existing libraries

So, here's the list of libraries I found.

## https://github.com/arduino-libraries/ArduinoModbus

• Official library from Arduino.
• Master and slave.
• Uses the RS485 library to communicate, but does not offer any way to pass a custom RS485 instance, so it is effectively hardcoded to a specific serial port.
• Offers only single value reads and writes.
• Slave stores values internally and reads/writes directly from those, without any callback or way to detect that communication has happened.

## https://github.com/4-20ma/ModbusMaster

• Master-only library.
• Latest commit in 2016.
• Supports any serial port through Stream objects.
• Supports idle/pre/post-transmission callbacks (no parameters), used to enable/disable the transceiver.
• Supports single and multiple reads/writes.
• Replies are returned in a (somewhat preprocessed) buffer, to be further processed by the caller.

## https://github.com/andresarmento/modbus-arduino

• Slave-only library.
• Last commit in 2015.
• Supports single and multiple reads/writes.
• Split into a generic MODBUS library along with extra transport-specific libraries (TCP, serial, etc.).
• Supports passing HardwareSerial pointers and (with a macro modification to the library) SoftwareSerial pointers (but uses a Stream pointer internally already).
• Slave stores values in a linked list (heap-allocated); values are written through write methods (linked list elements are not exposed directly, which is a pity).
• Slave reads/writes directly from the internal linked list, without any callback or way to detect that communication has happened.
• https://github.com/vermut/arduino-ModbusSerial is a fork that has some Due-specific fixes.

## https://github.com/lucasso/ModbusRTUSlaveArduino

• Fork of https://github.com/Geabong/ModbusRTUSlaveArduino (6 additional commits).
• Slave-only library.
• Last commit in 2018.
• Supports passing HardwareSerial pointers.
• Slave stores values external to the library in user-allocated arrays. These arrays are passed to the library as "areas" with arbitrary starting addresses, which are kept in the library in a linked list (heap-allocated).
• Slave reads/writes directly from the internal linked list, without any callback or way to detect that communication has happened.

## https://github.com/mcci-catena/Modbus-for-Arduino

• Master and slave.
• Last commit in 2019.
• Fork of an old (2016) version of https://github.com/smarmengol/Modbus-Master-Slave-for-Arduino with significant additional development.
• Supports passing arbitrary serial (or similar) objects using a templated class.
• Slave stores values external to the library in a single array (so all requests index the same data, either word or bit-indexed), which is passed to the poll() function.
• On the master, the sketch must create requests somewhat manually (into a struct, which is encoded to a byte buffer automatically), and replies return a raw data buffer. Requests and replies are non-blocking, so polling for replies is somewhat manual.

## https://github.com/angeloc/simplemodbusng

• Master and slave.
• Last commit in 2019.
• Hardcodes the Serial object, supports SoftwareSerial in the slave through a duplicated library.
• Supports single and multiple reads/writes of holding registers only (no coils or input registers).
• Slave stores values external to the library in a single array, which is passed to the update function.

## https://github.com/smarmengol/Modbus-Master-Slave-for-Arduino

• Master and slave.
• Last commit in 2020.
• Supports any serial port through Stream objects.
• Slave stores values external to the library in a single array (so all requests index the same data, either word or bit-indexed), which is passed to the poll() function.
• On the master, the sketch must create requests somewhat manually (into a struct, which is encoded to a byte buffer automatically), and replies return a raw data buffer. Requests and replies are non-blocking, so polling for replies is somewhat manual.

## https://github.com/asukiaaa/arduino-rs485

• Unlike what the name suggests, this actually implements MODBUS.
• Master and slave.
• Started very recently (October 2020), so by the time you read this, maybe things have already improved.
• Very simple library, just handles MODBUS framing; the contents of the MODBUS packets must be generated and parsed manually.
• Slave only works if you know the type and length of queries that will be received.
• Supports working on HardwareSerial objects.

## https://gitlab.com/creator-makerspace/rs485-nodeproto

• This is not a MODBUS library, but a very thin layer on top of RS485 that does collision avoidance and detection, which can be used to implement a multi-master system.
• Last commit in 2016, repository archived.
• This one is notable because it gets the Stream-based configuration right and seems well-written. It does not implement MODBUS or a similarly high-level protocol, though.

## https://github.com/MichaelJonker/HardwareSerialRS485

• Also not MODBUS, but also a collision avoidance/detection scheme on top of RS485 for a multi-master bus.
• Last commit in 2015.
• Replaces HardwareSerial rather than working on top of it, requiring a customized boards.txt.

## https://www.airspayce.com/mikem/arduino/RadioHead/

• This is not a MODBUS library, but a communication library for data communication over radio. It also supports serial connections (and is thus an easy way to get framing, checksumming, retransmissions and routing over serial).
• Seems to only support point-to-point connections, lacking an internal way to disable the RS485 driver when not transmitting (but maybe it can be hacked in).

Update 2020-06-26: Added smarmengol/Modbus-Master-Slave-for-Arduino to the list
Update 2020-10-07: Added asukiaaa/arduino-rs485 to the list

Recovering data from a failing hard disk with HFS+

Recently, a customer asked me to have a look at an external hard disk he was using with his Macbook. It would show a file listing just fine, but when trying to open actual files, it would start failing. Of course there was no backup, but the files were very precious...

This started out as a small question, but ended up in an adventure that spanned a few days and took me deep into the ddrescue recovery tool, through the HFS+ filesystem and past USB power port control. I learned a lot, discovered some interesting things and produced a pile of scripts that might be helpful to others. Since the journey seems interesting as well as the end result, I will describe the steps I took here, "ter leering ende vermaeck".

I started out confirming the original problem. Plugging the disk into my Linux laptop, it showed up as expected in dmesg. I could mount the disk without problems, see the directory listing and even open up an image file stored on the disk. Opening other files didn't seem to work.

## SMART

As you do with bad disks, you try to get their SMART data. Since smartctl did not support this particular USB bridge (and I wasn't game to try random settings to see if they worked on a failing disk), I gave up on SMART initially. I later opened up the case to bypass the USB-to-SATA controller (in case the problem was there, and to make SMART work), but found that this particular hard drive had the converter built into the drive itself (so the USB part was directly attached to the drive). Even later, I found a page online (I have not saved the link) that showed the disk was indeed supported by smartctl and listed the option to pass to smartctl -d to make it work. SMART confirmed that the disk was indeed failing, based on the number of reallocated sectors (2805).

## Fast-then-slow copying

Since opening up files didn't work so well, I prepared to make a sector-by-sector copy of the partition on the disk, using ddrescue. This tool has a good approach to salvaging data: it tries to copy off as much data as possible quickly, skipping data when it comes to a bad area on disk. Since reading a bad sector on a disk often takes a lot of time (before returning failure), ddrescue tries to steer clear of these bad areas and focus on the good parts first. Later, it returns to these bad areas and, in a few passes, tries to get out as much data as possible.
At first, copying data seemed to work well, giving a decent read speed of some 70MB/s as well. But very quickly the speed dropped terribly, and I suspected the disk ran into some bad sector and kept struggling with it. I reset the disk (by unplugging it), did a few more attempts and quickly discovered something weird: the disk would work just fine after plugging it in, but after a while the speed would plummet to a whopping 64Kbyte/s or less. This happened every time. Even more, it happened pretty much exactly 30 seconds after I started copying data, regardless of what part of the disk I copied data from.

So I quickly wrote a one-liner script that would start ddrescue, kill it after 45 seconds, wait for the USB device to disappear and reappear, and then start over again. I then spent some time replugging the USB cable about once every minute, so I could at least back up some data while I was investigating other things. Since the speed was originally 70MB/s, I could pull a few GB worth of data every time. Since it was a 2000GB disk, I "only" had to plug the USB connector around a thousand times. Not entirely infeasible, but not quite comfortable or efficient either.

So I investigated ways to further automate this process: using hdparm to spin down or shut down the disk, using USB powersaving to let the disk reset itself, disabling the USB subsystem completely; but nothing restored the speed other than completely powering down the disk by removing the USB plug.

While I was trying these things, the speed during those first 30 seconds dropped, even below 10MB/s at some point. At that point, I could salvage around 200MB with each power cycle and was looking at pulling the USB plug around 10,000 times: no way that would be happening manually.
# Automatically pulling the plug

I resolved to further automate this unplugging, and planned to use an Arduino (or perhaps the GPIO of a Raspberry Pi) and something like a relay or transistor to interrupt the power line to the hard disk and "unplug" it. For that, I needed my Current measuring board to easily interrupt the USB power lines, which I had to bring from home.

In the meanwhile, I found uhubctl, a small tool that uses low-level USB commands to individually control the port power on some hubs. Most hubs don't support this (or advertise support, but simply don't have the electronics to actually switch power, apparently), but I noticed that the newer Raspberry Pis support it (for port 2 only, but that would be enough).

Coming to the office the next day, I set up a Raspberry Pi and tried uhubctl. It did indeed toggle USB power, but the toggle would affect all USB ports at the same time, rather than just port 2. So I could switch power to the faulty drive, but that would also cut power to the good drive that I was storing the recovered data on, and I was not quite prepared to give the good drive 10,000 powercycles.

The next plan was to connect the recovery drive through the network, rather than directly to the Raspberry Pi. On Linux, setting up a network drive using SSHFS is easy, so that worked in a few minutes. However, somehow ddrescue insisted it could not write to the destination file and logfile, citing permission errors (but the permissions seemed just fine). I suspect it might be trying to mmap or something else that does not work across SSHFS...

The next plan was to find a powered hub, so the recovery drive could stay powered while the failing drive was powercycled. I rummaged around the office looking for USB hubs, and eventually came up with a USB-based docking station that was externally powered. When connecting it, I tried the uhubctl tool on it, and found that one of its six ports actually supported power toggling.
So I connected the failing drive to that port, and prepared to start the backup. When trying to mount the recovery drive, I discovered that a Raspberry Pi only supports filesystems up to 2TB (probably because it uses a 32-bit architecture). My recovery drive was 3TB, so that would not work on the Pi.

Time for a new plan: do the recovery from a regular PC. I already had one ready that I used the previous day, but now I needed to boot a proper Linux on it (previously I used a minimal Linux image from UBCD, but that didn't have a compiler installed to allow building uhubctl). So I downloaded a Debian live image (over a mobile connection - we were still waiting for fiber to be connected) and 1.8GB and 40 minutes later, I finally had a working setup.

The run.sh script I used to run the backup basically does this:

1. Run ddrescue to pull off data.
2. After 35 seconds, kill ddrescue.
3. Tell the disk to sleep, so it can spin down gracefully before cutting the power.
4. Tell the disk to sleep again, since sometimes it doesn't work the first time.
5. Cycle the USB power on the port.
6. Wait for the disk to reappear.
7. Repeat from 1.

By now, the speed of recovery had been fluctuating a bit, but was between 10MB/s and 30MB/s. That meant I was looking at some thousands up to ten thousand powercycles and a few days up to a week to back up the complete disk (and more if the speed dropped further).

# Selectively backing up

Realizing that there was a fair chance that the disk would indeed get slower, or even die completely due to all these powercycles, I had to assume I could not back up the complete disk. Since I was making the backup sector by sector using ddrescue, this meant a risk of not getting any meaningful data at all. Files are typically fragmented, so they can be stored anywhere on the disk, possibly spread over multiple areas as well.
If you just start copying at the start of the disk but do not make it to the end, you will have backed up some data, but that data could belong to all kinds of different files. That means that you might have some files in a directory, but not others. Also, a lot of files might only be partially recovered, the missing parts being read as zeroes. Finally, you would also end up backing up all unused space on the disk, which is rather pointless. To prevent this, I had to figure out where all kinds of stuff was stored on the disk.

## The catalog file

The first step was to make sure the backup file could be mounted (using a loopback device). On my first attempt, I got an error about an invalid catalog. I looked around for some documentation about the HFS+ filesystem, and found a nice introduction by infosecaddicts.com and a more detailed description at dubeiko.com. The catalog is apparently where the directory structure, filenames and other metadata are stored in a single place. This catalog is not in a fixed location (since its size can vary), but its location is noted in the so-called volume header, a fixed-size data structure located 1024 bytes from the start of the partition. More details (including easier to read offsets within the volume header) are provided in this example.

Looking at the volume header inside the backup gives me:

root@debian:/mnt/recover/WD backup# dd if=backup.img bs=1024 skip=1 count=1 2> /dev/null | hd
00000000  48 2b 00 04 80 00 20 00  48 46 53 4a 00 00 3a 37  |H+.... .HFSJ..:7|
00000010  d4 49 7e 38 d8 05 f9 64  00 00 00 00 d4 49 1b c8  |.I~8...d.....I..|
00000020  00 01 24 7c 00 00 4a 36  00 00 10 00 1d 1a a8 f6  |..$|..J6........|
                                   ^^^^^^^^^^^ Block size: 4096 bytes
00000030  0e c6 f7 99 14 cd 63 da  00 01 00 00 00 01 00 00  |......c.........|
00000040  00 02 ed 79 00 6e 11 d4  00 00 00 00 00 00 00 01  |...y.n..........|
00000050  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
00000060  00 00 00 00 00 00 00 00  a7 f6 0c 33 80 0e fa 67  |...........3...g|
00000070  00 00 00 00 03 a3 60 00  03 a3 60 00 00 00 3a 36  |......`...`...:6|
00000080  00 00 00 01 00 00 3a 36  00 00 00 00 00 00 00 00  |......:6........|
00000090  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
000000c0  00 00 00 00 00 e0 00 00  00 e0 00 00 00 00 0e 00  |................|
000000d0  00 00 d2 38 00 00 0e 00  00 00 00 00 00 00 00 00  |...8............|
000000e0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00000110  00 00 00 00 12 60 00 00  12 60 00 00 00 01 26 00  |.....`...`....&.|
00000120  00 0d 82 38 00 01 26 00  00 00 00 00 00 00 00 00  |...8..&.........|
00000130  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00000160  00 00 00 00 12 60 00 00  12 60 00 00 00 01 26 00  |.....`...`....&.|
00000170  00 00 e0 38 00 01 26 00  00 00 00 00 00 00 00 00  |...8..&.........|
00000180  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00000400

00000110  00 00 00 00 12 60 00 00  12 60 00 00 00 01 26 00  |.....`...`....&.|
          ^^^^^^^^^^^^^^^^^^^^^^^ Catalog size, in bytes: 0x12600000

00000120  00 0d 82 38 00 01 26 00  00 00 00 00 00 00 00 00  |...8..&.........|
                      ^^^^^^^^^^^ First extent size, in 4k blocks: 0x12600
          ^^^^^^^^^^^ First extent offset, in 4k blocks: 0xd8238
00000130  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|


I have annotated the parts that refer to the catalog. The content of the catalog (just like that of all other files) is stored in "extents". An extent is a single, contiguous block of storage that contains (a part of) the content of a file. Each file can consist of multiple extents, to prevent having to move file content around each time things change (i.e. to allow fragmentation).

In this case, the catalog is stored in only a single extent (since the subsequent extent descriptors contain only zeroes). All extent offsets and sizes are in blocks of 4 KiB, so this extent lives at 0xd8238 * 4k = byte 3626205184 (~3.4G) and is 0x12600 * 4k = 294MiB long. So I backed up the catalog by adding -i 3626205184 to ddrescue, making it skip ahead to the location of the catalog (and then power cycled a few times until it copied the needed 294MiB).
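This extent-to-byte arithmetic is simple enough to script. As a small illustration (not part of the original toolchain):

```python
# Convert an HFS+ extent descriptor (start and length in 4 KiB
# allocation blocks, as read from the volume header) into the byte
# offset and size that can be passed to ddrescue's -i option.
BLOCK_SIZE = 4096  # allocation block size from the volume header

def extent_to_bytes(start_blocks, count_blocks):
    return start_blocks * BLOCK_SIZE, count_blocks * BLOCK_SIZE

# The catalog extent found above:
offset, size = extent_to_bytes(0xd8238, 0x12600)
print(offset)                 # 3626205184, ~3.4G into the partition
print(size // (1024 * 1024))  # 294 (MiB)
```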

After backing up the catalog, I could mount the image file just fine and navigate the directory structure. Trying to open files would mostly fail, since most files would still only read zeroes.

I did the same for the allocation file (which tracks free blocks), the extents file (which tracks the contents of files that are more fragmented and whose extent list does not fit in the catalog) and the attributes file (not sure what that is, but I backed it up for good measure).

Afterwards, I wanted to continue copying from where I had previously left off, so I tried passing -i 0 to ddrescue, but it seems that option can only be used to skip ahead, not back. In the end, I just edited the logfile, which is a plain text file, to set the current position to 0. ddrescue is smart enough to skip over blocks it has already backed up (or marked as failed), so it then continued where it previously left off.

## Where are my files?

With the catalog backed up, I needed to read it to figure out which files were stored where, so I could make sure the most important files were backed up first, followed by all other files, skipping any unused space on the disk.

I considered and tried some tools for reading the catalog directly, but none of them seemed workable. I looked at hfssh from hfsutils (which crashed), hfsdebug (which is discontinued and no longer available for download) and hfsinspect (which calls itself "quite buggy").

Instead, I found the filefrag command-line utility, which uses a Linux filesystem syscall to figure out where the contents of a particular file are stored on disk. To coax the output of that tool into a list of extents usable by ddrescue, I wrote a one-liner shell script called list-extents.sh:

sudo filefrag -e "$@" | grep '^ ' |sed 's/\.\./:/g' | awk -F: '{print$4, $6}'

Given any number of filenames, it produces a list of (start, size) pairs for each extent in the listed files (in 4k blocks, which is the Linux VFS native block size). With the backup image loopback-mounted at /mnt/backup, I could then generate an extent list for a given subdirectory using:

sudo find /mnt/backup/SomeDir -type f -print0 | xargs -0 -n 100 ./list-extents.sh > SomeDir.list

To turn this plain list of extents into a logfile usable by ddrescue, I wrote another small script called post-process.sh, which adds the appropriate header, converts from 4k blocks to 512-byte sectors, converts to hexadecimal and sets the right device size (so if you want to use this script, edit it with the right size). It is called simply like this:

./post-process.sh SomeDir.list

This produces two new files: SomeDir.list.done, in which all of the selected files are marked as "finished" (and all other blocks as "non-tried"), and SomeDir.list.notdone, which is the reverse (all selected files are marked as "non-tried" and all others as "finished").

## Backing up specific files

Armed with a couple of these logfiles for the most important files on the disk, and one for all files on the disk, I used the ddrescuelog tool to tell ddrescue what to work on first. The basic idea is to mark everything that is not important as "finished", so ddrescue will skip over it and only work on the important files.

ddrescuelog backup.logfile --or-mapfile SomeDir.list.notdone | tee todo.original > todo

This uses the ddrescuelog --or-mapfile option, which takes my existing logfile (backup.logfile) and marks as finished all bytes that are marked as finished in the second file (SomeDir.list.notdone). In other words, it marks all bytes that are not part of SomeDir as done. This generates two copies (todo and todo.original) of the result; I'll explain why in a minute.
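As an aside, the core of the conversion that post-process.sh performs can be sketched in Python. This is a simplified illustration, not the actual script: it emits byte offsets directly (which is what ddrescue mapfile entries contain), omits the mapfile header line, and assumes the extents do not overlap:

```python
def extents_to_mapfile_lines(extents, disk_size):
    """Turn (start, size) extent pairs (in 4 KiB blocks) into ddrescue
    mapfile data lines, marking the extents as finished ('+') and
    everything else as non-tried ('?')."""
    lines, pos = [], 0
    for start, size in sorted((s * 4096, n * 4096) for s, n in extents):
        if start > pos:  # gap before this extent stays non-tried
            lines.append("0x%X  0x%X  ?" % (pos, start - pos))
        lines.append("0x%X  0x%X  +" % (start, size))
        pos = start + size
    if pos < disk_size:  # everything after the last extent
        lines.append("0x%X  0x%X  ?" % (pos, disk_size - pos))
    return lines

# The catalog extent from earlier, on a partition of 0x1d1aa8f6 blocks:
for line in extents_to_mapfile_lines([(0xd8238, 0x12600)], 0x1d1aa8f6 * 4096):
    print(line)
```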
With the generated todo file, we can let ddrescue run (though I used the run.sh script instead):

# Then run on the todo file
sudo ddrescue -d /dev/sdd2 backup.img todo -v -v

Since the generation of the todo file effectively threw away information (we can no longer see from the todo file which parts of the non-important sectors were already copied, or had errors, etc.), we need to keep the original backup.logfile around too. Using the todo.original file, we can figure out what the last run did, and update backup.logfile accordingly:

ddrescuelog backup.logfile --or-mapfile <(ddrescuelog --xor-mapfile todo todo.original) > newbackup.logfile

Note that you could also use SomeDir.list.done here, but actually comparing todo and todo.original helps in case there were any errors in the last run (so the error sectors will not be marked as done and can be retried later).

With backup.logfile updated, I could move on to the next subdirectories, and once all of the important stuff was done, I did the same with a list of all file contents to make sure that all files were properly backed up.

# But wait, there's more!

Now I had the contents of all files backed up, so the data was nearly safe. I did, however, find that the disk contained a number of hardlinks and/or symlinks, which did not work. I did not dive into the details, but it seems that some of the metadata and perhaps even file content is stored in a special "metadata directory", which is hidden by the Linux filesystem driver. So my filefrag-based "all files" method above did not back up sufficient data to actually read these link files from the backup.

I could have figured out where on disk these metadata files were stored and backed those up as well, but then I still might have missed some other special blocks that are not part of the regular structure. I could of course back up every block, but then I would be copying around 1000GB of mostly unused space, of which only a few MB or GB would actually be relevant.
Instead, I found that HFS+ keeps an "allocation file". This file contains a single bit for each block in the filesystem, storing whether the block is allocated (1) or free (0). Simply looking at this bitmap and backing up all blocks that are allocated should make sure I had all data, leaving only unused blocks behind.

The position of this allocation file is stored in the volume header, just like the catalog file. In my case, it was stored in a single extent, making it fairly easy to parse. The volume header says:

00000070  00 00 00 00 03 a3 60 00  03 a3 60 00 00 00 3a 36  |......`...`...:6|
          ^^^^^^^^^^^^^^^^^^^^^^^ Allocation file size, in bytes: 0x03a36000
00000080  00 00 00 01 00 00 3a 36  00 00 00 00 00 00 00 00  |......:6........|
                      ^^^^^^^^^^^ First extent size, in 4k blocks: 0x3a36
          ^^^^^^^^^^^ First extent offset, in 4k blocks: 0x1

This means the allocation file takes up 0x3a36 blocks of 4096 bytes each. At 8 bits per byte, it can store the status of 0x3a36 * 4k * 8 = 0x1d1b0000 blocks (which is the total size of 0x1d1aa8f6 blocks, rounded up).

First, I got the allocation file off the disk image (this uses bash arithmetic expansion to convert hex to decimal; you can also do the conversion manually):

dd if=/dev/backup of=allocation bs=4096 skip=1 count=$((0x3a36))
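The heart of parsing such a bitmap is just a bit-run scan that merges consecutive allocated blocks into ranges. A minimal sketch of the idea (an illustration only, not the exact script I used; HFS+ stores the bitmap with the most significant bit of each byte first):

```python
def allocated_ranges(bitmap):
    """Merge runs of 1-bits into (start_block, block_count) ranges.
    Block 0 is the most significant bit of the first byte."""
    ranges, run_start = [], None
    for block in range(len(bitmap) * 8):
        allocated = (bitmap[block // 8] >> (7 - block % 8)) & 1
        if allocated and run_start is None:
            run_start = block               # a run of allocated blocks begins
        elif not allocated and run_start is not None:
            ranges.append((run_start, block - run_start))
            run_start = None
    if run_start is not None:               # last run extends to the end
        ranges.append((run_start, len(bitmap) * 8 - run_start))
    return ranges

print(allocated_ranges(bytes([0b11110000, 0b00000001])))  # [(0, 4), (15, 1)]

# Sanity check on the sizes from the volume header above:
print(hex(0x3a36 * 4096 * 8))  # 0x1d1b0000
```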


Then, I wrote a small Python script, parse-allocation-file.py, to parse the allocation file and output a ddrescue mapfile. I started out in bash, but that got tricky with the bit manipulation, so I quickly converted to Python.

The first attempt at this script would just output a single line for each block, to let ddrescuelog merge adjacent blocks, but that would produce such a large file that I stopped it and improved the script to do the merging directly.

cat allocation | ./parse-allocation-file.py > Allocated.notdone


This produces an Allocated.notdone mapfile, in which all free blocks are marked as "finished", and all allocated blocks are marked as "non-tried".

As a sanity check, I verified that there was no overlap between the non-allocated areas and all files (i.e. the output of the following command showed no done/rescued blocks):

ddrescuelog AllFiles.list.done --and-mapfile Allocated.notdone | ddrescuelog --show-status -


Then, I looked at how much data was allocated, but not part of any file:

ddrescuelog AllFiles.list.done --or-mapfile Allocated.notdone | ddrescuelog --show-status -


This marked all non-allocated areas and all files as done, leaving a whopping 21GB of data that was somehow in use, but not part of any files. This size includes stuff like the volume header, catalog, the allocation file itself, but 21GB seemed a lot to me. It also includes the metadata file, so perhaps there's a bit of data in there for each file on disk, or perhaps the file content of hard linked data?
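These ddrescuelog operations are easier to follow when a mapfile is modeled as the set of blocks it marks as finished. A toy illustration with block numbers (ddrescuelog actually works on byte ranges):

```python
# Toy model: a mapfile is the set of blocks it marks as "finished".
all_blocks  = set(range(10))
file_blocks = {1, 2, 3, 5}   # finished in AllFiles.list.done
free_blocks = {0, 8, 9}      # finished in Allocated.notdone (i.e. unallocated)

# --and-mapfile keeps blocks finished in BOTH inputs. No file should
# live in unallocated space, so this overlap must be empty.
assert file_blocks & free_blocks == set()

# --or-mapfile keeps blocks finished in EITHER input. Whatever is left
# over is allocated but not part of any file: the volume header,
# catalog, allocation file, metadata directory, etc.
leftover = all_blocks - (file_blocks | free_blocks)
print(sorted(leftover))  # [4, 6, 7]
```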

# Nearing the end

Armed with my Allocated.notdone file, I used the same commands as before to let ddrescue backup all allocated sectors and made sure all data was safe.

For good measure, I let ddrescue then continue backing up the remainder of the disk (i.e. all unallocated sectors), but it seemed the disk was nearing its end now. The backup speed (even during the "fast" first 30 seconds) had dropped to under 300kB/s, so I was looking at a couple more weeks (and thousands of powercycles) for the rest of the data, assuming the speed did not drop further. Since the rest of the backup should only be unused space, I shut down the backup and focused on the recovered data instead.

What was interesting, was that during all this time, the number of reallocated sectors (as reported by SMART) had not increased at all. So it seems unlikely that the slowness was caused by bad sectors (unless the disk firmware somehow tried to recover data from these reallocated sectors in the background and locked up itself in the process). The slowness also did not seem related to what sectors I had been reading. I'm happy that the data was recovered, but I honestly cannot tell why the disk was failing in this particular way...

In case you're in a similar position, the scripts I wrote are available for download.

So, with a few days of work, around a week of crunch time for the hard disk and about 4,000 powercycles, all 1000GB of files were safe again. Time to get back to some real work :-)

Related stories

Modifying a LED strip DMX dimmer for incandescent bulbs

For a theatre performance, I needed to make the tail lights of an old car controllable through the DMX protocol, which is the protocol most commonly used to control stage lighting. Since these are just small incandescent light bulbs running on 12V, I essentially needed a DMX-controllable 12V dimmer. I knew that ready-made modules exist for this to control LED strips, which also run at 12V, so I went ahead and tried using one of those for my tail lights instead.

I looked around eBay for a module to use, and found this one. It seems the same design is available from dozens of different vendors on eBay, so these are probably clones, or a single manufacturer is supplying all of them.

## DMX module details

This module has a DMX input and output using XLR or a modular connector, and screw terminals for 12V power input, four output channels and one common connection. The common connection is 12V, so the output channels sink current (i.e. "common anode"), which is relevant for LEDs. For incandescent bulbs, current can flow either way, so this does not really matter.

Opening up the module, it seems fairly simple. There's a microcontroller (or a dedicated DMX decoder chip? I couldn't find a datasheet) inside, along with two RS-422 transceivers for DMX, four AP60T03GH MOSFETs for driving the channels, and one linear regulator to generate a logic supply voltage.

On the DMX side, this means that the module has separate input and output signals (instead of just connecting them together). It also means that the DMX signal is not isolated, which violates the recommendations of the DMX specification AFAIU (and might be problematic if there is more than a few volts of ground difference). On the output side, there are just MOSFETs to toggle the output, without any additional protection.

## Just try it?

I tried connecting my tail lights to the module, which worked right away, nicely dimming the lights. However:

1. When changing the level, a high-pitched whine was audible, which fell silent when the output level was steady. I was not sure whether this came from the module or the (external) power supply, but in either case it suggests some oscillation that might be harmful to the equipment (and the whine was slightly annoying as well).

2. Dimming LEDs usually works using PWM, which switches the LED on and off very quickly, faster than the eye can see. However, when switching an inductive load (such as a coil), a very high voltage spike can occur when the coil current wants to keep flowing but is blocked by the PWM transistor.

I'm not sure how much inductance a normal light bulb gives, but there will be at least a bit of it, also from the wiring. Hence, I wanted to check the voltages involved using a scope, to prevent damage to the components.
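To get a feel for the numbers: the spike voltage follows v = L * di/dt, so even a little stray inductance produces a sizable spike when the current is cut off quickly. The values below are made-up illustrations, not measurements of this module:

```python
# v = L * di/dt: voltage across an inductance when the current changes.
def spike_voltage(inductance, current_change, switch_time):
    return inductance * current_change / switch_time

# A couple of microhenries of wiring inductance, with 1A switched off
# in 100ns, already gives tens of volts on top of the supply rail:
print(spike_voltage(2e-6, 1.0, 100e-9))  # roughly 20 volts
```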

## Measuring

Looking at a channel output pin on a scope shows the following. The left image nicely shows the PWM waveform, but also shows a high voltage pulse when the transistor switches off (remember that the common pin is connected to 12V, so when the transistor is on, it sinks current and pulls the channel pin to 0V, and when it is off, current stops flowing and the pin returns to 12V). The right image shows a close-up of the high-voltage spike.

The spike is about 39V, which exceeds the maximum rating of the transistor (30V), so that is problematic. While I was doing additional testing, I also let some of the magic smoke escape (I couldn't see where exactly; probably the cap or series resistor near the regulator). I'm not sure whether this was actually caused by these spikes or I messed up something in my testing, but fortunately the module still seems to work, so there must be some smoke left inside...

The shape of this pulse is interesting: it seems as if something is capping it at 39V. I suspect this might be the MOSFET body diode going into reverse breakdown. I'm not entirely sure whether this is a problematic condition; the datasheet does not specify any ratings for it (so I suspect it is).

Normally, these inductive spikes are handled by adding a snubber diode. I tried using a simple 1N4001 diode, which helped somewhat, but still left part of the pulse. Using the more common 1N4148 diode helped, but it cannot handle the full current (though specs are a bit unclear where short but repetitive current surges are involved).

I had the impression that the 1N4001 diode needed too much time to turn on, so I ordered some Schottky diodes (which should be faster). I could not find any definitive info on whether this should really be needed (some say regular diodes already have turn-on times of a few ns), but it does seem using Schottkys helped.

The dimmer module supports 8A of current per channel, so I ordered some Schottkys that could handle the full 8A. Since those turned out to be huge, I settled on using 1N5819 Schottkys instead. These are only rated for 1A of current, but that is continuous average current. Since these spikes are very short, the diode should be able to handle higher currents during the spikes (it has a surge current rating of 25A, but that is only non-repetitive, which I'm not sure applies here...).
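Whether a 1A-average diode survives can be sanity-checked by averaging an assumed spike current over the PWM period. The spike duration and PWM frequency below are guesses for illustration, not measured values:

```python
# Average current through the flyback diode: each spike is assumed to
# carry the full channel current, decaying to zero over ~1 us, and to
# repeat once per PWM period.
I_peak = 8.0    # A, worst-case channel current at switch-off
t_spike = 1e-6  # s, assumed spike duration
f_pwm = 1000.0  # Hz, assumed PWM frequency

# A triangular decay averages to I_peak / 2 during the spike itself
I_avg = (I_peak / 2) * t_spike * f_pwm
print(I_avg)  # ~0.004 A: far below the 1 A continuous rating
```

So as long as the spikes stay short, the average current is tiny; the open question remains whether the repeated 25A-class peaks are acceptable.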

Here's what happens when adding a 1N5819:

The yellow line is the channel output, the blue line is the 12V input. As you can see, the pulse is greatly reduced in duration. However, there is still a bit of a spike left. Presumably, because the diode now connects to the 12V line, the 12V line also follows this spike. To fix that, I added a capacitor between 12V and GND. I would expect any input capacitors on the regulator to already handle this, but it seems there is a 330Ω series resistor in the 12V line to the regulator (perhaps to protect the regulator from voltage spikes?).

This is what happens when adding a 100nF ceramic capacitor (along with the 1N5819 diode already present):

This successfully reduces the pulse voltage, but introduces some ringing (probably resonance between the capacitance and the inductance?). Replacing it with a 1uF capacitor helps slightly:

Note that I forgot to attach the blue probe here. The ringing is still present, but is now much lower in frequency. In this setup, the high-pitched whining I mentioned before was continuously present, not just when changing the dim level.
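If the ringing is really an LC resonance, its frequency should follow f = 1/(2π√(LC)), so ten times more capacitance gives a √10 ≈ 3.2 times lower frequency. A quick sketch, using the same assumed (not measured) stray inductance as before:

```python
import math

L = 10e-6  # H, assumed stray inductance (illustrative guess)

# Resonance frequency for the two capacitor values tried above
for C in (100e-9, 1e-6):
    f = 1 / (2 * math.pi * math.sqrt(L * C))
    print(f"C = {C:.0e} F -> f = {f / 1000:.0f} kHz")
```

With these numbers the 100nF case rings at roughly 160kHz and the 1uF case at roughly 50kHz, consistent with the observation that the larger capacitor lowers the ringing frequency rather than removing it.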

I also tried using a 1uF electrolytic capacitor, which seems to give the best results, so I stuck to that. Here's what my final setup gives:

I soldered in these diodes and the cap on the bottom side of the PCB, since that's where I could access the relevant pins:

## Unsolved questions

I also tested with a short LED strip, which to my surprise showed similar surges. They were a lot smaller, but the current was also a lot smaller. This might suggest that it's not the bulb itself that causes the inductive spike, but rather the wiring (even though that was only some 20-30cm), or perhaps the power supply? It also suggests that when using this with a bigger LED strip, you might actually be operating the MOSFETs outside of their specifications...

I'm also a bit surprised that I needed the capacitor on the input voltage. I wonder if there might also be some inductance on the power supply side (e.g. the power supply giving a voltage spike when the current drops)?

Finally, what causes this difference between the electrolytic and ceramic capacitors? I know they are different, but I do not know off-hand how exactly.

Running an existing Windows 7 partition under QEMU/KVM/virt-manager

I was previously running an ancient Windows XP install under VirtualBox for the occasional time I needed Windows for something. However, since VirtualBox is no longer supplied in Debian Stretch (due to security policy problems), I've been experimenting with QEMU, KVM and virt-manager. Migrating my existing VirtualBox XP installation to virt-manager didn't work (it simply wouldn't boot), and I do not have any spare Windows keys lying around, but I do have Windows 7 installed alongside my Linux on a different partition, so I decided to see if I could get that to boot inside QEMU/KVM.

An obvious problem is the huge change in hardware between the real and virtual environment. Apparently recent Windows versions don't really mind this in terms of drivers, but the activation process could be a problem, especially when booting both virtually and natively. So far I have not seen any complications with either drivers or activation, not even after switching to virtio drivers (see below). I am using an OEM (preactivated?) version of Windows, so that might help in this area.

Update: when booting Windows in the VM a few weeks later, it started bugging me that my Windows was not genuine, and it seems it is no longer activated. Clicking the "resolve now" link gives a broken webpage, and going through system properties suggests contacting Lenovo (my laptop provider) to resolve this (or buying a new license). I'm not yet sure if this is really problematic, though. This happened shortly after replacing my hard disk, though I'm not sure whether that is actually related.

Rebooting into Windows natively shows it is activated (again or still), but booting it virtually directly after that still shows as not activated...

## Creating the VM

Booting the installation was actually quite painless: I just used the wizard inside virt-manager, entered /dev/sda (my primary hard disk) as the storage device, pressed start, selected to boot Windows in my bootloader and it booted Windows just fine.

Booting is not really fast, and once it runs things are a bit sluggish, but acceptable.

One caveat is that this adds the entire disk, not just the Windows partition. This also means the normal bootloader (grub in my case) will be used inside the VM, which will happily boot the normal default operating system. Protip: don't boot your Linux installation inside a VM inside that same Linux installation; both instances will end up fighting over your filesystem. Thanks to fsck, which seems to have fixed the resulting garbage so far...

To prevent this, make sure to actually select your Windows installation in the bootloader. See below for a more permanent solution.

## Installing guest drivers

To improve performance and allow better integration, some special Windows drivers can be installed. Some of them work right away; for others, you need to change the hardware configuration in virt-manager to "virtio".

I initially installed the virtio-win drivers from Fedora (I used the 0.1.141-1 version, which is both stable and latest right now). However, the QXL graphics driver broke the boot: Windows would freeze halfway through the initial boot animation (four coloured dots swirling to form the Windows logo). I recovered by booting into safe mode and reverting the graphics driver to the default VGA driver.

Then, I installed the "spice-guest-tools" from spice-space.org, which again installed the QXL driver, as well as the spice guest agent (which allows better mouse integration, desktop resizing, clipboard sharing, etc.). Using this version, I could again boot, now with proper QXL drivers. I'm not sure whether this is because the QXL driver was actually different (the version number in the device manager / .inf file was 6.1.0.10024 in both cases, I believe), or because additional drivers or the spice agent were now installed.

## Switching to virtio

For additional performance, I changed the networking and storage configuration in virt-manager to "virtio", which, instead of emulating actual hardware, provides optimized interaction between Windows and QEMU, but it does require specific drivers on the guest side.

For the network driver, this was a matter of switching the device type in virt-manager and then installing the driver through the device manager. For the storage devices, I first added a secondary disk set to "virtio", installed drivers for that, then switched the main disk to "virtio" and finally removed the secondary disk. Some people suggest this since Windows can only install drivers when booted, and of course cannot boot without drivers for its boot disk.

I did all this before installing the "spice-guest-tools" mentioned above. I suspect that using that installer will already put all drivers in a place where Windows can automatically install them from, so perhaps all that's needed is to switch the config to "virtio".

Note that system boot didn't get noticeably faster, but perhaps a lot of the boot happens before the virtio driver is loaded. I haven't really compared SATA vs virtio in normal operation, but it feels acceptable (though not fast). I recall that my processor does not have I/O virtualization support, so that might be the cause.
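Whether a CPU has hardware virtualization extensions can be checked from /proc/cpuinfo on Linux. A small sketch (my interpretation of the flags: vmx/svm are basic Intel VT-x / AMD-V support, while ept/npt indicate the later page-table extensions that speed virtualization up considerably):

```python
# Collect the CPU feature flags from /proc/cpuinfo (Linux-specific).
def cpu_flags(path="/proc/cpuinfo"):
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                # "flags : fpu vme ..." -> set of flag names
                return set(line.split(":", 1)[1].split())
    return set()

if __name__ == "__main__":
    flags = cpu_flags()
    for flag in ("vmx", "svm", "ept", "npt"):
        print(flag, "present" if flag in flags else "absent")
```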

As mentioned, virtualizing the entire disk is a bit problematic, since it also reuses the normal bootloader. Ideally, you would only expose the needed Windows partition (which would also provide some additional protection of the other partitions), but since Windows expects a partitioned disk, you would need to somehow create a virtual disk composed of a virtual partition table / boot sector merged with the actual data from the partition. I haven't found any way to do this.

Another approach is to add a second disk with just grub on it, configured to boot Windows from the first disk, and use the second disk as the system boot disk.

I tried this approach using the Super Grub2 Disk, which is a ready-made bootable ISO-hybrid (suitable for CDROM, USB-stick and hard disk). I downloaded the latest .iso file, created a new disk drive in virt-manager and selected the iso (I suppose a CDROM drive would also work). Booting from it, I get quite an elaborate grub menu that detects all kinds of operating systems, and I can select Windows through Boot Manually... -> Operating Systems.

Since that is still quite some work (and easy to forget when I haven't booted Windows in a while), I decided to create a dedicated tiny hard disk, just containing grub, configured to boot my Windows disk. I found some inspiration on this page about creating a multiboot USB stick and turned it into this:

```
matthijs@grubby:~$ sudo dd if=/dev/zero of=/var/lib/libvirt/images/grub-boot-windows.img bs=1024 count=20480
20480+0 records in
20480+0 records out
20971520 bytes (21 MB, 20 MiB) copied, 0.0415679 s, 505 MB/s
matthijs@grubby:~$ sudo parted /var/lib/libvirt/images/grub-boot-windows.img mklabel msdos
matthijs@grubby:~$ sudo parted /var/lib/libvirt/images/grub-boot-windows.img mkpart primary 2 20
matthijs@grubby:~$ sudo losetup -P /dev/loop0 /var/lib/libvirt/images/grub-boot-windows.img
matthijs@grubby:~$ sudo mkfs.ext2 /dev/loop0p1
(output removed)
matthijs@grubby:~$ sudo mount /dev/loop0p1 /mnt/tmp
matthijs@grubby:~$ sudo mkdir /mnt/tmp/boot
matthijs@grubby:~$ sudo grub-install --target=i386-pc --recheck --boot-directory=/mnt/tmp/boot /dev/loop0
matthijs@grubby:~$ sudo sh -c "cat > /mnt/tmp/boot/grub/grub.cfg" <<EOF
insmod chain
insmod ntfs
search --no-floppy --set root --fs-uuid F486E9B586E9790E
chainloader +1
boot
EOF
matthijs@grubby:~$ sudo umount /dev/loop0p1
matthijs@grubby:~$ sudo losetup -d /dev/loop0
```

The single partition starts at 2MB, for alignment and to leave some room for grub (this is also common on regular hard disks nowadays). Grub is configured to find my Windows partition based on its UUID, which I figured out by looking at /dev/disk/by-uuid. I added the resulting grub-boot-windows.img as a disk drive in virt-manager (I used SATA, since I was not sure whether virtio would boot, and the performance of this disk is irrelevant anyway) and configured it as the first and only boot disk. Booting the VM now boots Windows directly.

0 comments -:- permalink -:- 18:13

Calculating a constant path basename at compiletime in C++

In some Arduino / C++ project, I was using a custom assert() macro that, if the assertion failed, would show an error message along with the current filename and line number. The filename was automatically retrieved using the __FILE__ macro. However, this macro returns a full path, while we only had little room to show it, so we wanted to show the filename only.

Until now, we had been storing the full filename, and when an assert was triggered we would use the strrchr function to chop off all but the last part of the filename (commonly called the "basename") and display only that. This works just fine, but it is a waste of flash memory, storing all these (mostly identical) paths. Additionally, when an assertion fails, you want to get a message out ASAP, since who knows what state your program is in.

Neither of these is really a showstopper for this particular project, but I suspected there would be some way to use C++ constexpr functions and templates to force the compiler to handle this at compiletime, and only store the basename instead of the full path. This week, I took up the challenge and made something that works, though it is not completely pretty yet.

Working out where the path ends and the basename starts is fairly easy using something like strrchr.
Of course, that's a runtime version, but it is easy to do a constexpr version by implementing it recursively, which allows the compiler to evaluate these functions at compiletime. For example, here are constexpr versions of strrchrnul(), basename() and strlen():

```cpp
/**
 * Return the last occurrence of c in the given string, or a pointer to
 * the trailing '\0' if the character does not occur. This should behave
 * just like the regular strrchrnul function.
 */
constexpr const char *static_strrchrnul(const char *s, char c) {
  /* C++14 version:
  if (*s == '\0')
    return s;
  const char *rest = static_strrchrnul(s + 1, c);
  if (*rest == '\0' && *s == c)
    return s;
  return rest;
  */

  // Note that we cannot implement this while returning nullptr when the
  // char is not found, since looking at (possibly offsetted) pointer
  // values is not allowed in constexpr (not even to check for
  // null/non-null).
  return *s == '\0'
      ? s
      : (*static_strrchrnul(s + 1, c) == '\0' && *s == c)
          ? s
          : static_strrchrnul(s + 1, c);
}

/**
 * Return one past the last separator in the given path, or the start of
 * the path if it contains no separator.
 * Unlike the regular basename, this does not handle trailing separators
 * specially (so it returns an empty string if the path ends in a
 * separator).
 */
constexpr const char *static_basename(const char *path) {
  return (*static_strrchrnul(path, '/') != '\0'
      ? static_strrchrnul(path, '/') + 1
      : path);
}

/** Return the length of the given string */
constexpr size_t static_strlen(const char *str) {
  return *str == '\0' ? 0 : static_strlen(str + 1) + 1;
}
```

So, to get the basename of the current filename, you can now write:

```cpp
constexpr const char *b = static_basename(__FILE__);
```

However, that just gives us a pointer halfway into the full string literal.
In practice, this means the full string literal will be included in the link, even though only a part of it is referenced, which voids the space savings we were hoping for (confirmed on avr-gcc 4.9.2, but I do not expect newer compiler versions to be smarter about this, since the linker is involved). To solve that, we need to create a new char array variable that contains just the part of the string that we really need.

As happens more often when I look into complex C++ problems, I came across a post by Andrzej Krzemieński, which shows a technique to concatenate two constexpr strings at compiletime (his blog has a lot of great posts on similar advanced C++ topics, a recommended read!). He faces a similar problem: he needs to define a new variable that contains the concatenation of two constexpr strings. For this, he uses some smart tricks with parameter packs (variadic template arguments), which allow declaring an array and setting its initial value using pointer indexing (e.g. char foo[] = {ptr[0], ptr[1], ...}).

One caveat is that the length of the resulting string is part of its type, so it must be specified using a template argument. In the concatenation case, this can easily be derived from the types of the strings to concatenate, which gives nice and clean code. In my case, the length of the resulting string depends on the contents of the string itself, which is more tricky. There is no way (that I'm aware of; suggestions are welcome!) to automatically deduce a template argument based on the value of a non-template argument. What you can do is use constexpr functions to calculate the length of the resulting string, and explicitly pass that length as a template argument. Since you also need to pass the contents of the new string as a normal argument (template parameters cannot be arbitrary pointers-to-strings, only addresses of variables with external linkage), this introduces a bit of duplication.
Applied to this example, this would look like this:

```cpp
constexpr const char *basename_ptr = static_basename(__FILE__);
constexpr auto basename =
    array_string<static_strlen(basename_ptr)>(basename_ptr);
```

This uses the static_string library published along with the above blogpost. For this example to work, you will need some changes to the static_string class (to make it accept a regular char* as well); see this pull request for the version I used.

The resulting basename variable is an array_string object, which contains just a char array containing the resulting string. You can use array indexing on it directly to access characters, implicitly convert it to const char*, or explicitly convert it using basename.c_str().

So, this solves my requirement pretty neatly (saving a lot of flash space!). It would be even nicer if I did not need to repeat basename_ptr above, or could move the duplication into a helper class or function, but that does not seem to be possible.

0 comments -:- permalink -:- 21:33

Automatically remotely attaching tmux and forwarding things

I recently upgraded my systems to Debian Stretch, which caused GnuPG to stop working within Mutt. I'm not exactly sure what was wrong, but I discovered that GnuPG version 2 changed quite some things and relies more heavily on the gpg-agent. I also discovered that recent SSH versions can forward unix domain sockets instead of just TCP sockets, which allows forwarding a gpg-agent connection over SSH.

Until now, I had my GPG private keys stored on my server, Tika, where my Mutt mail client also runs. However, storing private keys, even with a passphrase, on a permanently connected multi-user system never felt quite right. So this seemed like a good opportunity to set up proper forwarding for my gpg-agent, and keep my private keys confined to my laptop.
I already had some small scripts in place to easily connect to my server through SSH, attach to the remote tmux session (or start it), set up some port forwards (in particular a reverse port forward for SSH, so my mail client and IRC client could open links in my browser), and quickly reconnect when the connection fails. However, one annoyance was that when the connection failed, the server might not immediately notice, so reconnecting usually left me with failed port forwards (since the remote listening port was still taken by the old session). This seemed like a good occasion to fix that as well.

The end result is a reasonably complex script, which is probably worth sharing here. The script can be found in my scripts git repository. On the server, it calls an attach script, but that's not much more than attaching to tmux, or starting a new session with some windows if no session is running yet. The script is reasonably well-commented, including an introduction on what it can do, so I will not repeat that here.

For the GPG forwarding, I based my setup upon this blogpost. There, they suggest configuring an extra-socket in gpg-agent.conf, but I found that gpg-agent already creates an extra socket (whose path I could query with gpgconf --list-dirs), so I didn't use that extra-socket configuration line. They also talk about setting StreamLocalBindUnlink to clean up a lingering socket when creating a new one, but that is already handled by my script instead.

Furthermore, to prevent a gpg-agent from being autostarted by GnuPG on the server side (in case the forwarding fails, or when I would connect without this script, etc.), I added no-autostart to ~/.gnupg/gpg.conf. I'm not running a systemd user session on my server, but if you are, you might need to disable or mask some gpg-agent sockets and/or services to prevent systemd from creating sockets for gpg-agent and starting it on demand.
My next step is to let gpg-agent also be my ssh-agent (or perhaps just use plain ssh-agent) to enforce confirming each SSH authentication request. I'm currently using gnome-keyring / seahorse as my SSH agent, but that just silently approves everything, which doesn't really feel secure.

0 comments -:- permalink -:- 16:46

Running Ruby on Rails using Systemd socket activation

On a small embedded system, I wanted to run a simple Rails application and have it automatically start up at system boot. The system is running systemd, so a systemd service file seemed appropriate to start the Rails service.

Normally, when you run the Ruby on Rails standalone server, it binds on port 3000. Binding on port 80 normally requires root (or a special capability enabled for all of Ruby), but I didn't want to run the Rails server as root. AFAIU, normal deployments use something like nginx to open port 80 and let it forward requests to the Rails server, but I wanted a minimal setup, with just the Rails server.

An elegant way to bind port 80 without running as root is to use systemd's socket activation feature. With socket activation, systemd (running as root) opens up a network port before starting the daemon. It then starts the daemon, which inherits the open network socket file descriptor, along with some environment variables to indicate this. Apart from allowing privileged ports without root, this has other advantages, such as on-demand starting, easier parallel startup and seamless restarts and upgrades (none of which is really important for my usecase, but it is still nice :-p).

## Making it work

To make this work, the daemon (the Rails server in this case) needs some simple changes to use the open socket instead of creating a new one. I could not find any documentation or other evidence that Rails supports this, so I dug around a bit. I found that Rails uses Rack, which in turn uses Thin, Puma or WEBrick to actually set up the HTTP server.
A quick survey of the code suggests that Thin and WEBrick have no systemd socket support, but Puma does. I did find a note saying that the Rack module of Puma does not support socket activation, only the standalone version. A bit more digging suggested this was indeed the case for my Puma version, but some refactoring in the 3.0.0 release (commit) should allow Rack/Rails to use this feature as well. Some later commits add more fixes, so it's probably best to just use the latest version. I tested this successfully using Puma 3.9.1.

One additional caveat I found is that you should be calling the bin/rails command inside your Rails app directory, not the one installed into /usr/local/bin/ or wherever. It seems that the latter calls the former, but somewhere in that process closes all open file descriptors, losing the network connection (which then gets replaced by some other file descriptor, leading to the "for_fd: not a socket file descriptor" error message).

## Setting this up

After setting up your Rails environment normally, make sure you have the puma gem installed and add the following systemd config files, based on the puma examples.

First, /etc/systemd/system/myrailsapp.socket to let systemd open the socket:

```
[Unit]
Description=Rails HTTP Server Accept Sockets

[Socket]
ListenStream=0.0.0.0:80
# Socket options matching Puma defaults
NoDelay=true
ReusePort=true
Backlog=1024

[Install]
WantedBy=sockets.target
```

Then, /etc/systemd/system/myrailsapp.service to start the service:

```
[Service]
ExecStart=/home/myuser/myrailsapp/bin/rails server puma --port 80 --environment production
User=myuser
Restart=always

[Install]
WantedBy=multi-user.target
```

Note that both files should share the same name to let systemd pass the socket to the service automatically. Also note that the port is configured twice, due to a limitation in Puma. This is just a minimal service file to get the socket activation going; there are probably more options that might be useful.
This blogpost names a few.

After creating these files, enable and start them, and everything should be running:

```
$ sudo systemctl enable myrailsapp.socket myrailsapp.service
$ sudo systemctl start myrailsapp.socket myrailsapp.service
```


1 comment -:- permalink -:- 10:35