Léonard Urban

Here I post longer-form content. Main Mastodon account: @leonardurban@tooting.ch

This month I've been quite busy with university work and had less time to focus on Fripuck. Still, I've managed to add new sensors and solve some pesky errors!

New sensors

The e-puck2 robot is equipped with a ring of 8 proximity sensors (reflective IR sensors). I followed the architecture of the previous implementation, which cleverly turns the IR lights of the sensors on and off at specific time intervals to limit interference and to measure the ambient light level.

This is a significant step up from my previous implementation, which kept the IR lights always on and scanned the sensors continuously. Not only did this cause interference in the readings, it also increased power consumption.

This month also brought three sensors living on the I²C bus: the Time-of-Flight sensor (ToF), the Inertial Measurement Unit (IMU) and the ground sensors.

I²C stability issues

I²C is a communication protocol where a master can communicate with multiple slaves over the same bus. On the e-puck2, three sensors are configured and transfer their data over I²C: the ToF, the IMU and the ground sensors.

At first, the I²C connection seemed to work for the ToF, but wouldn't for the other sensors, always returning a HAL_BUSY error. Looking online, I found that I²C can be quite unstable on some STM32 chips, especially when using the standard HAL_I2C_Mem_Read/Write functions. I decided to copy the approach of the VL53L0X ToF API code, which uses the HAL_I2C_Master_Receive/Transmit functions directly.

Here is how my custom i2c_read_reg/i2c_write_reg functions are implemented, in case they are of use to anyone:

HAL_StatusTypeDef i2c_read_reg(uint8_t dev_addr, uint8_t reg, uint8_t *buffer, uint16_t len)
{
    HAL_StatusTypeDef res;
    osMutexAcquire(i2c_mutex, osWaitForever);

    // Tell the device which register we want to access
    res = HAL_I2C_Master_Transmit(i2c_handle, (dev_addr << 1), &reg, 1, 100);

    if (res == HAL_OK)
    {
        // Receive data from the slave
        res = HAL_I2C_Master_Receive(i2c_handle, (dev_addr << 1), buffer, len, 100);
    }

    osMutexRelease(i2c_mutex);
    return res;
}

HAL_StatusTypeDef i2c_write_reg(uint8_t dev_addr, uint8_t reg, uint8_t *buffer, uint16_t len)
{
    // Local buffer to combine reg + data
    uint8_t tmp[len + 1];
    tmp[0] = reg;
    memcpy(&tmp[1], buffer, len);

    osMutexAcquire(i2c_mutex, osWaitForever);
    HAL_StatusTypeDef res = HAL_I2C_Master_Transmit(i2c_handle, (dev_addr << 1), tmp, len + 1, 100);
    osMutexRelease(i2c_mutex);

    return res;
}
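For example, reading a single register then looks like this. The device address and register below are placeholders, not the actual e-puck2 values:

// Hypothetical usage of the helper above; the constants are placeholders.
#define IMU_ADDR  0x68  // example 7-bit I²C address
#define WHO_AM_I  0x75  // example register

uint8_t id = 0;
if (i2c_read_reg(IMU_ADDR, WHO_AM_I, &id, 1) == HAL_OK)
{
    // id now holds the register contents
}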

Python API

The Python API now internally exposes an Abstract Base Class that makes it very easy to implement the reception of new sensor data. This framework is used for all the sensor handling in the Python API, significantly reducing code duplication.

What's next?

Next month, I'll be working more on my upcoming exams, so it will probably be a rather uneventful month.

In the time I have available, I will try to come up with a good priority system to manage which sensor data gets packaged and sent when. Currently, if I let too many sensors run at once, the FlatBuffers packet consumes so much RAM that the program crashes. My objective is to cut up the incoming data streams smartly, so I can control the final size of my FlatBuffers packet while making sure no stream gets starved.
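One first idea is to give each packet a fixed byte budget and walk over the active streams round-robin, so a single chatty sensor can't blow up the packet. Here is a minimal sketch of that idea; the structures, names and numbers are hypothetical, not the final design:

// Hypothetical sketch of a byte-budgeted, round-robin packager (not the final design).
#include <stdint.h>
#include <stddef.h>

#define PACKET_BUDGET 1024 // placeholder: max payload bytes per outgoing packet

typedef struct {
    const uint8_t *data;   // pending samples for this stream
    size_t pending;        // bytes waiting to be sent
    size_t chunk;          // preferred chunk size per packet for this stream
} sensor_stream_t;

// Decide how many bytes of `stream` fit into the current packet.
static size_t take_from_stream(const sensor_stream_t *stream, size_t budget_left)
{
    size_t n = stream->pending < stream->chunk ? stream->pending : stream->chunk;
    if (n > budget_left)
        n = budget_left;
    return n;
}

// Walk over all streams once, taking at most one chunk from each,
// until the packet budget is exhausted. Leftover data stays queued
// for the next cycle, so no stream is starved.
size_t build_packet(sensor_stream_t *streams, size_t count)
{
    size_t used = 0;
    for (size_t i = 0; i < count && used < PACKET_BUDGET; i++) {
        size_t n = take_from_stream(&streams[i], PACKET_BUDGET - used);
        // ... copy n bytes of streams[i].data into the FlatBuffers builder here ...
        streams[i].data += n;
        streams[i].pending -= n;
        used += n;
    }
    return used;
}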

Wrapping up

While this month didn't bring as many features as I would have hoped, I am still happy with what I achieved considering the limited amount of free time at my disposal. I am also pleased to see the I²C issue fixed, which has been bothering me for some time now.

Hi there 👋 I'm Uhrbaan, and for my bachelor thesis, I am working on updating the e-puck2's codebase and API to better fit the needs of my university.

This blog serves as a monthly update on the progress I make on that project over the next year.

What is the e-puck?

The robot was designed at EPFL, which describes it as follows:

The e-puck is an educational robot that helps generations of students learn about embedded systems and robotics. First developed at EPFL in 2004 by Francesco Mondada and Michael Bonani, a new version was released in 2018, produced by GCtronic in Ticino. [source]

Essentially, they are small robots equipped with many small sensors to help students take their first steps into mobile robotics.

The e-puck2 robot and its many sensors

The e-pucks used at my university are built and maintained by GCtronic, a small Ticinese company. They also wrote the main code running on these robots and produced a C API to talk to them over the network. This API was later replaced by a Python API written by a university student during their bachelor thesis (just like me), which also introduced computer vision capabilities through YOLO.

What is Fripuck?

Fripuck is my addition to this educational tool (if it works out 😅)! The idea to work on the e-puck2 for my bachelor thesis arose from two frustrations I encountered while working with the robots: (1) the blocking nature of the API, and (2) the focus on real-time robotics, which limits the ability to do data analysis/signal processing.

Good software is born of frustration

— Someone, probably

A second reason I started this project, and why I am not only modifying/rewriting the API but also the firmware, is that I got really interested in embedded development. The university sadly doesn't offer courses on embedded programming, so I figured the best I could do was learn the subject on my own, and with my limited time, doing it during my bachelor thesis seemed to be the best option.

Fripuck itself is the combination of three pieces of software: the firmware of the STM32F4 chip that controls the robot and all the sensors, the firmware of the ESP32 responsible for the communication over Wi-Fi with the student's computer, and finally the API (Python or Go for testing). The name of the project is a combination of e-puck and Fribourg, the university with which I am doing my thesis.

What do I want to achieve?

I technically already started the project last semester, about 4 months ago, as an “exploratory” phase. This helped me read through the existing code base a bit, get familiar with the hardware, and explore the limitations of the chip. For example, I got a Lua VM working on one of the two chips and used it to control the LEDs.

For the moment, I am mostly working on re-implementing the firmware and the API to reach feature parity with the old software, the status quo (mostly: I'm skipping some features the university doesn't use), while keeping some nice improvements: the API is multi-threaded and asynchronous by default, the firmware uses a more modern build system and development environment (PlatformIO), the custom serialization protocol is replaced with FlatBuffers, and the real-time operating system is one more common in the academic world (FreeRTOS instead of ChibiOS).

The progress can be tracked on GitHub at https://github.com/Uhrbaan/fripuck2 (at the time of writing, the table is mostly empty and the README.md isn't even fully finished yet).

Ideally, if I have the time, I would like to add new features as well, like a full audio stream from the robot to the clients, which would enable voice commands; a Lua VM to make the robots work offline; or even audio playback, which could make the robots talk, maybe transforming them into AI assistants, who knows! 🤷 Another big goal would be to increase the video playback speed, but I doubt it is possible to achieve more than 10 or 15 frames per second.

What have I achieved so far?

Right now, most of the foundations have been laid out. I have a rather precise vision of the architecture of the whole system; the firmware is working, the Python API is working, and the robot can send and receive packets serialized with FlatBuffers. The only important piece of foundational work remaining is handling the commands coming from the client.

Once that is working, there will be a lot of work to re-add all the sensors, although I will probably be able to use a lot of the existing code to achieve this.

What are you working on right now?

Currently, I am testing the telemetry process (the robot reads a sensor, packages it, sends it over the network, and the client receives it) by implementing the simple time-of-flight sensor and sending that data over the network. Once that is working, I will proceed to work on the commands.
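In outline, the telemetry task on the robot boils down to the loop below. This is a sketch of the pipeline described above, not the actual code: the helper names stand in for the real sensor, FlatBuffers serialization and Wi-Fi code, and the buffer size and period are placeholders.

// Hypothetical outline of the telemetry loop (helper names are placeholders).
#include <stdint.h>
#include <stddef.h>
#include "cmsis_os2.h"

extern uint16_t tof_read_mm(void);                                  // read the ToF distance
extern size_t serialize_tof(uint16_t mm, uint8_t *buf, size_t cap); // build the FlatBuffers message
extern void wifi_send(const uint8_t *buf, size_t len);              // hand the packet to the ESP32

void telemetry_task(void *arg)
{
    (void)arg;
    uint8_t packet[256]; // placeholder buffer size

    for (;;) {
        uint16_t distance = tof_read_mm();                             // 1. read the sensor
        size_t len = serialize_tof(distance, packet, sizeof(packet));  // 2. package it
        wifi_send(packet, len);                                        // 3. send it to the client
        osDelay(100);                                                  // placeholder period
    }
}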

Wrapping up

So far, I've been really enjoying the process. C was the first language I ever learned, and I hope this project is going to elevate my C skills to the next level. I am also having fun discovering the world of embedded programming, although I don't find it as welcoming as other domains, since documentation and tutorials aren't as readily available. I often find myself having to read through multiple example projects to understand what I am supposed to do, but on the bright side, reading and understanding code is just as important a skill to train as writing it (since AIs are going to take that away, apparently 🤷).

Next month, I'll probably go a bit more into the architecture I've planned and some technical challenges I've come across. Anyway, if you stayed through the whole text, thank you for your time!

I’ve been using the Slimbook EVO 14 for a few days now, and here are my impressions. This is also a test of writefreely—let’s see how it goes! 😉

My Old Laptop

As a computer science (and biology) student, I mainly use my laptop for taking notes and doing homework. Beyond that, I’m a Linux hobbyist and enjoy programming for fun.

Before switching to the Slimbook, I used a Lenovo IdeaPad C340 14'' Intel with 1TB of storage and 16GB of RAM. While it served me well for about six years, several issues became hard to ignore:

  • Screen quality: The 1080p resolution was too low for comfortable text reading, and the viewing angles were poor.
  • Battery life: The 40Wh battery degraded over time, lasting only about 1.5 hours in power-saving mode by the end.
  • Graphics performance: The integrated GPU struggled with anything beyond lightweight games (like Minecraft at 720p) and limited external 4K displays to 30Hz.
  • Build quality: The chassis cracked in places, and I had to use duct tape to hold the hinges together.
  • Keyboard failure: An entire row of keys stopped working, rendering the laptop unusable without an external keyboard.

Choosing a New Laptop

My new laptop needed to meet the following criteria:

  • Better performance
  • More RAM
  • Improved screen
  • Under 1,000 CHF
  • Good Linux support
  • From a European company (here’s why)

These last two points narrowed my options to Tuxedo and Slimbook. Since I carry my laptop all day, I wanted something under 15''. This left me with the Slimbook EVO and the Tuxedo InfinityBook—identical machines in terms of specs. I chose the Slimbook EVO because it was cheaper and came with GNOME preinstalled.

Review

Hardware

The Slimbook EVO feels well-built, especially compared to my old laptop. The aluminum chassis is sturdy and premium. The screen is a significant upgrade: better viewing angles and resolution, though the scaling is a bit awkward. At 100%, everything is too small; at 200%, too large. Fractional scaling works, but may affect battery life. The 120Hz refresh rate is smooth, but I disabled it to save power—it’s not essential for my workflow.

The only downgrade from my previous laptop is the lack of a touchscreen, but since I use an external drawing tablet, it’s not a dealbreaker.

Performance-wise, this laptop is a breath of fresh air. It boots in under five seconds, handles Minecraft at full resolution (with shaders!), and supports 4K displays at 60Hz. My model has 32GB of RAM and 500GB of storage. The extra RAM is a relief, and while the storage is half of what I had before, 500GB is more than enough for my needs.

Software

Buying from a Linux-first vendor means most drivers are preinstalled. However, I decided to install Ubuntu 25 from scratch—perhaps not the smartest move. The installation went smoothly, but I had to manually install a few packages, like the Ethernet driver (guide here).

Facial recognition via howdy doesn’t work yet, as it depends on a library not yet ported to Ubuntu 25. For now, I’ll stick with typing my password.

I also replaced Slimbook’s slimbook-battery (which uses TLP) with power-profiles-daemon for better GNOME integration. I might revisit this later to see if TLP offers better battery life.

Speaking of battery, I capped the charge at 80% in the BIOS to extend its lifespan. It’s reassuring that Slimbook sells replacement batteries, which could be useful down the line.

Final Thoughts

I’m happy with my purchase and would recommend the Slimbook EVO 14—especially if you value Linux support and European manufacturing.

Note: This text was re-phrased with AI (Le Chat by Mistral.ai to be precise).