Category Archives: Blog

Find the type of a variable in Rust

I was trying to make an abstraction in Rust and found myself navigating a lot of code to determine the type of pwm below so I could put it in a struct:

let mut pwm = Timer::tim2(dp.TIM2, &clocks, &mut rcc.apb1)
    .pwm::<Tim2NoRemap, _, _, _>(c1, &mut afio.mapr, 1.khz());

A quicker way to find it is to annotate the variable with the unit type, : (), and check the resulting compilation error:

let mut pwm: () = Timer::tim2(dp.TIM2, &clocks, &mut rcc.apb1)
    .pwm::<Tim2NoRemap, _, _, _>(c1, &mut afio.mapr, 1.khz());

I’m using Visual Studio Code with the Rust Analyzer plugin, where I immediately get the error and can copy the type by hovering over the line:

So there you have it, the type was as simple as this – no wonder I gave up on trying to find it in the code! 😅

type PwmType = hal::pwm::Pwm<
    hal::pac::TIM2,
    hal::timer::Tim2NoRemap,
    hal::pwm::C1,
    hal::gpio::gpioa::PA0<hal::gpio::Alternate<hal::gpio::PushPull>>,
>;
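
The same trick works anywhere. Here is a minimal, self-contained sketch (my own toy example, not from the project above; the error wording is approximate):

fn main() {
    // Annotate with the unit type to make the compiler spell out the real type:
    let iter: () = "hello".chars().rev();
    // error[E0308]: mismatched types
    //   expected `()`, found `Rev<Chars<'_>>`
}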

How to close an unresponsive SSH session

Enter, Tilde, Period

If you work on embedded Linux systems, those Docker container thingies, play around with Raspberry Pi or anything in between, you may occasionally find yourself connected to the target via SSH when it suddenly hangs or gets rebooted, rendering your SSH session unresponsive. Sometimes when this happens, the SSH session gets disconnected automatically and you can start a new one. In other cases, the terminal with SSH open freezes and your usual hotkeys like Ctrl + D do nothing. This is because the Ctrl + D key press is normally forwarded to the target, so the disconnect is, so to speak, initiated from there.

If only there was a hotkey that the SSH client detects before sending it to the target 🤔. You guessed it, there is! The sequence Enter + ~ + . lets you disconnect from an unresponsive target without having to close the terminal or anything like that. To be clear, you press the keys one by one, no need to hold them all at once.
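
For reference, these are the escape sequences the OpenSSH client intercepts. Each must be typed at the start of a line – which is exactly why Enter is part of the sequence:

~.    terminate the connection
~?    list all supported escape sequences
~~    send a literal ~ to the target

The escape character itself can also be changed with the -e option, for example ssh -e '%' user@host.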

It does not work!

I had some trouble getting this to work reliably. Two things helped me:

  • Realizing the Enter key is part of the sequence. The first time I read about this, the sequence was said to be just Tilde + Period. Often when working in a terminal, Enter is the last key that was pressed so this works fine most of the time. Remember Enter as part of the sequence!
  • Remembering how to type Tilde on my keyboard layout. Because I use a Swedish keyboard, Tilde is actually alt + ¨ , space, where ¨ is the key with ^ and ¨ on it. Since there is no visual feedback when typing in the escape sequence (which is now five key presses), I would sometimes get it wrong.
Tilde on a Swedish keyboard is actually Alt + ¨ , Space

Strict Aliasing – yet another way for C-code to blow up

Recently, I got to learn about Strict Aliasing in C. It is yet another thing that can cause your C code to work perfectly fine today and then blow up from Undefined Behavior down the line. One example of what not to do is casting an array of uint8_t (like a payload from a communications protocol) into a struct (like the message you are receiving):

void receive_data(uint8_t * payload, uint16_t length) {
    ... // Sanity checking etc
    my_struct_t * my_struct = (my_struct_t *) payload; // Don't do this!
    do_stuff(my_struct->some_field);
}

A better way is to use memcpy:

void receive_data(uint8_t * payload, uint16_t length) {
    ... // Sanity checking etc
    my_struct_t my_struct;
    memcpy(&my_struct, payload, sizeof(my_struct_t)); // Do this instead!
    do_stuff(my_struct.some_field);
}

One practical reason this kind of “reinterpret cast” is not allowed is alignment: you can’t be sure that accessing a field through the cast pointer will be a properly word-aligned memory access. The deeper reason is that the compiler is allowed to assume that pointers to incompatible types never alias, and it will optimize based on that assumption.
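
Here is the classic illustration of that optimizer assumption (a sketch, not from any real codebase):

#include <stdint.h>

// Because int32_t and float are incompatible types, the compiler may
// assume i and f never point at the same memory - and is then allowed
// to compile this function as if it always returned 1.
int32_t alias_demo(int32_t *i, float *f) {
    *i = 1;
    *f = 2.0f;   // if i and f actually alias, this overwrites *i...
    return *i;   // ...but the optimizer may still return the cached 1
}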

For more details, here is a write-up with more examples which also explains the situation for C++: https://gist.github.com/shafik/848ae25ee209f698763cffee272a58f8

Embedded Rust Toolchains

I recently started learning Embedded Rust. As I mentioned at the top of the last post, there are a couple of toolchain options:

  • OpenOCD + GDB
  • probe-rs + cargo-embed

Turns out there is also a third one that I just discovered:

  • probe-rs + probe-run

This was a little confusing at first: I was unsure which was the better option and how these projects all connect to each other. Embedded Rust seems to be moving fast, so this might get outdated, but here is a basic summary if you are just getting into Embedded Rust as well.

The Embedded Rust way

Rust-embedded is a working group in the official Rust organization. Among other things, they maintain The Embedded Rust Book, which you may have come across. In the book, they describe what I would call the “official” toolchain, using OpenOCD and an ARM-compatible GDB (gdb-multiarch). This is probably the way to go if you need to do serious development today: OpenOCD and GDB are stable and mature projects, and for a Rust programmer this is the most fully-featured and reliable option right now.

Enter probe-rs

As an alternative to those external (non-Rust) dependencies, a team has formed an ambitious project around replacing it all with software written in Rust – probe-rs. Here is an illustration (borrowed from the video below):

This illustration shows nicely what probe-rs tries to be.

Here is a very informative talk by one of the people behind probe-rs:

My major learning from this talk was that probe-rs is really a library. Other projects, like cargo-embed and probe-run, are built on top of it.

Cargo-embed

The probe-rs team built cargo-embed to show off the capabilities of probe-rs. As such, it was the first tool I came across when I found the official probe-rs website. One might imagine that, being built by the same team, cargo-embed stays closer to the latest features of probe-rs and has a shorter path to getting new features in. But this is just speculation.

To build and upload programs, you simply run cargo embed --release (see my last post about why --release is important for timing). It is possible to do logging with rtt, a debugger-based mechanism that writes log messages to an internal buffer which the debug probe reads out, instead of, for example, printing over UART. Debugging is also supported (though currently not at the same time as rtt), either from the command line or visually in something like Visual Studio Code, by hooking into the GDB stubs that probe-rs provides. This is an interface that (to the best of my understanding) mimics GDB’s, but actually goes directly to probe-rs.
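
As a sketch of what rtt logging can look like on the target side – here using the rtt-target crate, which is one common choice on my part rather than something cargo-embed mandates:

use rtt_target::{rprintln, rtt_init_print};

fn demo() {
    rtt_init_print!(); // set up the RTT control block that the debug probe reads
    rprintln!("hello via rtt"); // shows up in the cargo-embed terminal on the host
}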

Configuration is done in a new file called Embed.toml. There you configure what chip you are using and whether to use rtt or GDB debugging, or set up separate profiles for each.
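
A minimal Embed.toml for my setup might look like this – the chip name matches my Bluepill, and the exact keys are my best understanding, so check the cargo-embed documentation for your version:

# Embed.toml
[default.general]
chip = "STM32F103C8"

[default.rtt]
enabled = true

[default.gdb]
enabled = false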

The vision of probe-rs is to offer a full development environment for embedded Rust, so they are also working on a VSCode plugin. It is still in alpha, and I have not tried it yet.

Probe-run

Ferrous Systems is a company that pops up everywhere in embedded Rust. They are a consultancy specializing in Rust for embedded applications and are also very active in open source development for embedded Rust. They started a project called Knurling dedicated to improving the experience working with embedded Rust.

Knurling has many sub-projects and probe-run is one of them. Built on top of probe-rs, it gives you the same features as cargo-embed, but in a slightly different packaging. The philosophy is that embedded development should work the same way as native development, so instead of introducing a new cargo command, probe-run is a so-called Cargo runner. This means you configure the “usual” cargo run command to use probe-run under the hood. And there is no new configuration file to keep track of, just the regular Cargo.toml and .cargo/config.toml. Does it matter? Up to you.
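
As a sketch, that configuration is roughly this (chip name again assuming my Bluepill):

# .cargo/config.toml
[target.thumbv7m-none-eabi]
runner = "probe-run --chip STM32F103C8"

After that, a plain cargo run --release builds, flashes and runs the program on the target.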

Knurling also has an interesting logging framework: defmt. Instead of doing string formatting on the embedded device, it relies on a tool on the host side and generates a list of format strings at compile time that is kept on the host. The embedded device then simply sends the index into that list (over rtt), causing much less overhead.
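
On the device side, defmt usage looks roughly like this (a sketch; a transport crate such as defmt-rtt also has to be set up as the global logger):

use defmt::info;

fn report(temperature: u8) {
    // The format string stays on the host; only a string index and the
    // raw argument bytes are sent over rtt.
    info!("temperature: {=u8} °C", temperature);
}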

I do like the idea of keeping the main Rust interface unchanged, which speaks in favor of probe-run, but I’m not sure about plans for integrating probe-run with VSCode, or debugging with breakpoints. As I learn more, I hope to find a favorite and maybe also start contributing myself.

Embedded Rust: Timer Timeout Problem

TL;DR: When doing timing critical stuff, use the --release flag to get a faster binary!
For example: cargo embed --release.

I’m learning Embedded Rust on an STM32 Bluepill board (with an STM32F103 microcontroller). At the time of writing, there seem to be two toolchain options:

  1. The “official” Embedded Rust way, using OpenOCD and ARM-compatible GDB.
  2. Up-and-coming probe-rs that is working on having everything in Rust and installable via cargo. Their tool cargo-embed basically replaces OpenOCD and GDB.

OpenOCD + GDB is tried and tested, but a lot more work to set up. Probe-rs is literally just cargo install cargo-embed, but it is a work in progress and far from feature-complete. I tried both, but this particular thing caught me while using cargo-embed, so that’s the command I will be showing.

The Timer Problem

I wanted to talk to a ws2812 addressable RGB LED (also known as NeoPixel). I found the crate smart-leds, which seemed perfect. It comes with several “companion crates” containing device drivers for different LEDs and different ways of driving the ws2812, like ws2812-spi and ws2812-timer-delay.

The SPI crate unfortunately did not work in my attempts so far. It manages to write to my LED once, then panics with the error “Overrun”. I’m probably using a newer version of embedded-hal and/or stm32f1xx-hal than it was written for. Maybe a topic for another day.

The Timer Delay crate also did not work at first. I broke out my Analog Discovery 2 to look at the data signal:

The Delta-X column below the graph shows the time between the red lines – just above 200 µs.

The time between bits was around 200 µs. For comparison, I fired up a PlatformIO project for the same STM32 Bluepill board and imported Adafruit’s NeoPixel library. Now the LED of course worked perfectly, and the problem was obvious:

With Adafruit’s Arduino library, the time between pulses is around 1.4 µs.

The time between the bits was now only around 1.4 µs. I will spare you the details of all the things I tried while wrongly suspecting that either the entire MCU or the timer was running at the wrong frequency.

The solution turns out to be almost silly: Rust binaries can be really slow if you do not compile them in release mode. Just add the --release flag and all is well! 💩

Solution:

cargo embed --release

There is apparently a way to override this per-dependency in Cargo.toml, which might be worth a try if you need it.

Update:

I tried adding the following to Cargo.toml to make all dependencies build with the highest optimization level, but this still was not enough to make the LED work in my case.

# Cargo.toml 
[profile.dev.package."*"]
opt-level = 3

I also tried increasing the optimization level for the whole dev profile. This worked already from level 2:

# Cargo.toml 
[profile.dev]
opt-level = 2

Stepping through code compiled like this with a debugger might not work as well, though, so an alternative is to use the release profile all the time and only drop down to dev when debugging.
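
If you do, note that you can keep debug info in release builds so the debugger still has symbols to work with – as far as I know this does not change the generated machine code:

# Cargo.toml
[profile.release]
debug = true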

NB-IoT and LTE-M Coverage Maps

Here are some links to coverage maps for NB-IoT and LTE-M in Scandinavia. The GSMA also has a global deployment map here:
https://www.gsma.com/iot/deployment-map/

Denmark

Finland

Norway

Sweden

Talk: Cellular Connectivity for IoT

In 2018, I had the great honor to speak at the NDC conference in Oslo. At the time, I was working with cellular connectivity for IoT at the Nordic mobile operator Telia, and I titled the talk accordingly. NDC is mainly a developer conference, so the talk was intended as an introduction to cellular IoT for the “Raspberry Pi and Arduino crowd” that I anticipated would show up. I went into the difference between NB-IoT and LTE-M as well as between chips, modules and boards. Probably the best part, however, if I may say so myself, was the last one, where I showed a live demo of working with a couple of development kits.

Kodama Trinus 3D-printer upgrades

Kodama Trinus 3D-printer/laser engraver

At work, we recently got the Kodama Trinus combined 3D-printer and laser engraver. I’m pretty happy with the overall quality of the printer so far, but for our use I immediately identified some areas of improvement:

  • No power switch – the only way to turn the printer off is by unplugging the cord.
  • No lights inside the enclosure – we got the additional enclosure, but it came without any lights inside.
  • No network interface – you have to connect to the printer via USB directly or go get the SD-card. Also no way to control or monitor it remotely.

So to fix this, I wanted to put a switch on the back, put in some lights and add a Raspberry Pi with a camera and Octoprint inside the printer. I started by printing a faceplate for the hole in the back of the enclosure:

Trinus back plate

This was then fitted with a switch and a DC jack for the power supply. (Make sure their ratings are higher than or equal to that of the power supply!) The jack lets me split off power to the Raspberry Pi and also fixes the slight annoyance of having to reach into the very back of the enclosure to connect the cable to the printer.

Trinus back plate

I then soldered and crimped some cables to wire the switch in series with the DC jack and connected two Wago cage clamps for distributing power to the multiple things inside the enclosure. I also made a short DC cable to connect to the printer. The face plate has a flange to keep it from moving around in the hole and is less than half the thickness of the enclosure wall, so I printed a second copy, put one on from either side and secured the whole thing with the locking nut of the DC jack.

Trinus back plate mounted

Next, I added some LED strips on the sides. Note that the enclosure is lying upside-down, so they are actually in the ceiling.

Trinus LED-strip installed

The power supply gives out 12 volts, which is fine for connecting to the printer and the particular LED strips that I had. The Raspberry Pi, however, requires a 5 V power supply, so I wired in a step-down converter and a micro-USB cable.

Trinus 5 V step down converter

Finally, I put the enclosure back on and connected the DC-jack and the Raspberry Pi. Here is what it looks like in the back of the printer now.

Trinus installation complete

Trinus upgraded backside

For now, the Raspberry Pi is just lying inside the enclosure and the camera is simply stuck to the back wall with double-sided tape. I might come up with something smarter in the future, but it works for now.

Trinus with Raspberry Pi and camera

The LED lighting made a huge improvement! It was also completely necessary for the Raspberry Pi camera to be useful.

Trinus LED-light upgrade

To sum up, these improvements took less than a day in total and were fairly inexpensive, but they provide a huge step up in usability. We can now upload prints and monitor progress from our desks instead of walking over to the printer all the time. Octoprint also has a Cura plugin, so you can upload STL files directly without everyone needing a slicer installed locally. This also means we can keep optimized settings on the printer instead of having to distribute settings to each individual using it.

One caveat is that the Trinus LCD display does not work with Octoprint, meaning you cannot stop the print or use any of the other features on the front panel, but have to run back to the computer to stop a failed print. I might replace the LCD with a small touch screen connected to the Raspberry Pi and/or wire an emergency stop button into the GPIO pins. Also, the LED lights flicker quite a bit as the printer draws more or less power, probably due to the poor-quality power supply. I might try to fix that with some decoupling capacitors and/or a new power supply.

Let me know if you did similar upgrades, have some good ideas for the Raspberry Pi and camera or if you just want some pointers on doing this to your own printer!

Edit:

Uploaded the faceplate to Youmagine.

Alarm Clock v0.1.0

Alarm clock prototype

We wanted to try banning phones from the bedroom (you should try it, I recommend it!). Clearly, suitable hardware to replace the alarm clock app was needed. Having thought about building my own alarm clock for a while, I quickly determined that just going out and buying one was not a viable option – there simply was no model with all the features I had thought of and now needed to have, for example:

  • Weekly schedule (no alarm on weekends)
  • Smarter snooze (configurable and longer)
  • Integrated with wakeup-lights and the rest of the apartment
  • Configurable from other devices
  • Programmable/extendable with future ideas

Alarm clock prototype parts

For the first prototype, I used some parts I had laying around:

  • Raspberry Pi A+ with USB wifi dongle
  • 1.8″ TFT display (check eBay for “HY-1.8 SPI”)
  • Some prototyping board, connectors and pushbuttons
  • Small speakers with 3.5 mm jack

Alarm clock wiring

The first step was to connect the display. The one at hand communicated over SPI, which all Pis support, and hooking it up was not too difficult. Then, however, I spent quite some time trying to make the Pi recognize it as a screen rather than handling the SPI commands to it directly in my code. (Having it recognized as a screen would mean the interface could be a webpage, for example, which would make it easier to develop.)

Using an SPI TFT as a monitor had been achieved already and made quite a buzz on Hackaday back in 2012 or so, but unfortunately it was not so easy to reproduce. At the time of building this in late 2016, most documentation I could find was still from 2012-2013 and talked about compiling the kernel from scratch and a frame buffer driver called fbtft. But, once I found its official Github repository, the first thing in the readme was (and still is) a message from early 2015 saying the driver has moved into Linux staging and that development there has ceased:

2015-01-19
The FBTFT drivers are now in the Linux kernel staging tree [...]
Development in this github repo has ceased.

I could not find any sign of whether fbtft is now actually part of Raspbian, nor any comprehensible documentation on how to set it up, so I ran out of patience and decided to go with direct SPI control for the first version, meaning less fancy graphics for now. For direct SPI, finding examples was a little easier, and by learning from the code on w8bh.net, I finally got something working.

The Pi now runs a fairly simple Python script, listening to the buttons, updating the display every minute and playing an mp3 file at increasing volume when it is wakeup time. The display shows the current time and the time of the alarm (which only runs Monday–Friday). Two of the buttons move the alarm time back or forth in 15-minute intervals; this can be used to change the alarm, or to snooze or skip it in the morning. The third button stops the alarm and the fourth toggles the Philips Hue lights in the bedroom on/off. The Hue lights are actually controlled via MQTT, from a Node-RED server running on a separate Raspberry Pi that acts as a hub for this and some other “smart home” features, which might be a topic for a future post.
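
The core of the script is roughly this shape (an illustrative sketch, not the actual code – update_display and the other helpers are hypothetical):

import time
from datetime import datetime, timedelta

STEP = timedelta(minutes=15)
alarm_time = datetime.now().replace(hour=6, minute=45, second=0, microsecond=0)

def on_button(name):
    """Hypothetical callback wired to the GPIO buttons."""
    global alarm_time
    if name == "later":
        alarm_time += STEP          # move the alarm forward, doubles as snooze
    elif name == "earlier":
        alarm_time -= STEP
    elif name == "stop":
        stop_alarm()                # hypothetical helper
    elif name == "lights":
        toggle_hue_over_mqtt()      # hypothetical helper

while True:
    now = datetime.now()
    update_display(now, alarm_time)              # hypothetical helper
    if now.weekday() < 5 and now >= alarm_time:  # Monday-Friday only
        raise_volume_and_play_mp3()              # hypothetical helper
    time.sleep(60)                               # update once per minute

The real script naturally also rolls the alarm over to the next day and ramps the volume gradually.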

Alarm clock interface

All in all, we are pretty happy with this first version. It has been in live use for four months now without any major malfunctions, and basically it just needs an enclosure. It does lack some obvious features, like adding an alarm on the weekend and actually moving the time the wakeup-light starts along with the alarm time. Also, when I make a new version, I will probably add another button for starting the coffee maker as well. 😁

Autonomous RC Racing

Here is a project I have been working on, on and off, for about a year – Alvin the autonomous RC car. The robot is built to comply with the rules of two Swedish robot competitions, Robot SM and Stockholm Robot Championship, where the objective is to race against three other robots around a track without any form of remote control. Rules vary somewhat, but each heat typically lasts until one robot reaches 7 laps, or for a maximum of 3 minutes. The robots are then awarded points according to the number of laps they have completed at that point. Participants can also signal the organizer to flip or turn their robot if it crashes or gets stuck, at the cost of one point.

Here is Alvin in action, racing some other robots from the Norbot team at a recent meetup:

Parts

  • Wltoys A222 RC car
  • Texas Instruments TM4C123G Launchpad microcontroller evaluation board
  • My own custom Launchpad protoboard booster-pack
  • Sharp GP2Y0A21YK0F analog distance sensors from eBay
  • “10A Brushed ESC Motor Speed Controller for RC Car without Brake” off eBay
  • A3144 hall effect switch, also from eBay
  • Rare earth magnets (D4x2mm), eBay as well

Operation

Since Alvin is built on an RC car, both the steering servo and the throttle ESC have a maximum update rate of once per 20 ms, or 50 times per second. For the first implementation, therefore, the processor simply runs the algorithms for calculating new steering and throttle values once every 20 ms and then just waits in between.

For the steering, Alvin uses two Sharp distance sensors to measure the distance to the side walls and a third to detect obstacles in front. To stay in the middle of the track, it simply compares the distances to the two walls and adjusts the steering to make them equal, as sketched below. This rather simple approach could probably be improved a lot to gain more speed.
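
In code, that boils down to something like a proportional controller on the left/right difference (a sketch with made-up names and gain, not the actual firmware):

#include <stdint.h>

#define KP_STEER 2  // illustrative gain, not the real tuning

// Positive result steers toward the side with more room.
int16_t steering_from_walls(int16_t left_mm, int16_t right_mm) {
    int16_t error = left_mm - right_mm;  // > 0: more room on the left
    return KP_STEER * error;             // offset from the servo center position
}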

Since the track contains a hump, the power to the motor cannot be hardcoded – it needs to increase in the uphill. To control the speed, a hall effect sensor reads two magnets mounted on the drive shaft. The algorithm converting pulses from this sensor into a throttle setting has so far proven to be the hardest part to implement, mainly because there are only two pulses per revolution of the drive shaft. This means there are normally just 0–2 new pulses recorded each time the algorithm runs, making it hard to determine the actual speed. To work around this, the delta in revolutions is instead calculated against the count from 10 iterations back in time. The resulting speed value is then used as input to a PI controller which calculates the throttle, roughly as in the sketch below.
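
Here is a hedged sketch of that workaround (names, gains and structure are illustrative, not the actual firmware):

#include <stdint.h>

#define WINDOW 10         // compare against the pulse count 10 iterations ago
#define PULSES_PER_REV 2  // two magnets on the drive shaft

// Ring buffer of cumulative pulse counts, one entry per 20 ms iteration.
static uint32_t history[WINDOW];
static uint8_t head;

// Returns drive shaft revolutions per iteration, averaged over the window.
static float estimate_speed(uint32_t pulses_total) {
    uint32_t oldest = history[head];  // count from WINDOW iterations ago
    history[head] = pulses_total;
    head = (head + 1) % WINDOW;
    return (float)(pulses_total - oldest) / (PULSES_PER_REV * WINDOW);
}

// PI controller turning the speed error into a throttle setting.
static float pi_throttle(float target_speed, float measured_speed) {
    static float integral;
    const float KP = 1.0f, KI = 0.1f;  // illustrative gains, not the real tuning
    float error = target_speed - measured_speed;
    integral += error;
    return KP * error + KI * integral;
}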

Results

Alvin raced for the first time last year at Stockholm Robot Championship and I was very happy to finish 6th without ever having tuned it on a full-size track before! I have since worked on the throttle algorithm to make it as responsive as in the video above, so I think it could do even better today.

Future improvements

  • Extend front bumper to go around the wheels to avoid getting stuck against the wall.
  • Add a speed sensor with more pulses per revolution to allow better throttle control.
  • Add more distance sensors to enable a more advanced steering algorithm.
  • Add roll cage or body to protect the electronics and improve the looks.

Conclusion

Autonomous robot racing is a fun and fairly affordable way to put your combined engineering skills to the test, spanning mechanical, electrical and software components. Being new to my city, it has also been a way to meet like-minded people and a reason to go down to the local hackerspace.

Finally, here is a playlist from Stockholm Robot Championship 2016 if you want to see some more action!