Fripuck Devlog

Hi there 👋 I'm Uhrbaan, and for my bachelor thesis, I am working on updating the e-puck2's codebase and API to better fit the needs of my university.

This blog serves as a monthly update on the progress I make on that project over the next year.

What is the e-puck?

The robot was designed at EPFL, which describes it as follows:

The e-puck is an educational robot that helps generations of students learn about embedded systems and robotics. First developed at EPFL in 2004 by Francesco Mondada and Michael Bonani, a new version was released in 2018, produced by GCtronic in Ticino. [source]

Essentially, they are small robots equipped with many small sensors, designed to help students take their first steps into mobile robotics.

The e-puck2 robot and its many sensors

The e-pucks used at my university are built and maintained by GCtronic, a small Ticinese company. They also wrote the main code running on these robots and produced a C API for talking to them over the network. That API was later replaced by a Python API written by a university student during their bachelor thesis (just like me), which also introduced computer vision capabilities through YOLO.

What is Fripuck?

Fripuck is my addition to this educational tool (if it works out 😅)! The idea of working on the e-puck2 for my bachelor thesis arose from two frustrations I encountered while working with the robots: (1) the blocking nature of the API and (2) the focus on real-time robotics, which limits data analysis and signal processing.

Good software is born of frustration

— Someone, probably

A second reason I started this project, and why I am not only modifying/rewriting the API but also the firmware, is that I got really interested in embedded development. The university sadly doesn't offer courses on embedded programming, so I figured the best I could do was learn the subject on my own; with my limited time, doing it during my bachelor thesis seemed to be the best option.

Fripuck itself is the combination of three pieces of software: the firmware of the STM32F4 chip that controls the robot and all the sensors, the firmware of the ESP32 responsible for the communication over Wi-Fi with the student's computer, and finally the API (Python or Go for testing). The name of the project is a combination of e-puck and Fribourg, the university with which I am doing my thesis.

What do I want to achieve?

I technically already started the project last semester, about 4 months ago, as an “exploratory” phase. This helped me read through the existing code base, get familiar with the hardware, and explore the limitations of the chips. For example, I got a Lua VM running on one of the two chips and used it to control the LEDs.

For the moment, I am mostly working on re-implementing the firmware and the API to reach feature parity with the old software (mostly: I am skipping some features the university doesn't use). At the same time, I am keeping some nice improvements: the API is multi-threaded and asynchronous by default; the firmware uses a more modern build system and development environment (PlatformIO); serialization moves from a custom protocol to FlatBuffers; and the real-time operating system is one more common in the academic world (FreeRTOS instead of ChibiOS).
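To give an idea of what "asynchronous by default" could look like from the student's side, here is a minimal sketch using Python's asyncio. The EPuck class, the distance_mm method, and the mocked robot are all hypothetical names I made up for illustration, not the actual Fripuck API; the point is only that waiting for a sensor reading doesn't block the rest of the program.

```python
import asyncio

class EPuck:
    """Hypothetical async robot handle; the real one would wrap a Wi-Fi socket."""
    def __init__(self):
        self._tof = asyncio.Queue()  # telemetry arriving from the robot

    async def distance_mm(self):
        # Awaiting the next reading yields to the event loop instead of blocking.
        return await self._tof.get()

async def fake_robot(puck):
    """Stand-in for the firmware: pushes time-of-flight readings periodically."""
    for d in (120, 80, 45):
        await asyncio.sleep(0.01)
        await puck._tof.put(d)

async def main():
    puck = EPuck()
    asyncio.create_task(fake_robot(puck))
    # Other tasks could run here while we wait for each reading.
    readings = [await puck.distance_mm() for _ in range(3)]
    return readings

if __name__ == "__main__":
    print(asyncio.run(main()))  # → [120, 80, 45]
```

In a blocking API, each sensor read would stall the whole program; here, a student could drive the motors, stream video, and log sensor data as concurrent tasks on one event loop.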

The progress can be tracked on GitHub at https://github.com/Uhrbaan/fripuck2 (at the time of writing, the table is mostly empty and the README.md isn't even fully finished yet).

Ideally, if I have the time, I would like to add new features as well: a full audio stream from the robot to the clients, which would enable voice commands; a Lua VM to make the robots work offline; or even audio playback, which could make the robots talk, maybe transforming them into AI assistants, who knows! 🤷 Another big goal would be to increase the video stream's frame rate, though I doubt more than 10 or 15 frames per second is achievable.

What have I achieved so far?

Right now, most of the foundations have been laid. I have a rather precise vision of the architecture of the whole system; the firmware works, the Python API works, and the robot can send and receive packets serialized as FlatBuffers. The only important foundational work remaining is handling the commands coming from the client.

Once that is working, there will be a lot of work to re-add all the sensors, although I will probably be able to use a lot of the existing code to achieve this.

What am I working on right now?

Currently, I am testing the telemetry path (the robot reads a sensor, packages the reading, and sends it over the network, where the client receives it) by implementing support for the simple time-of-flight sensor and streaming its data over the network. Once that works, I will move on to the commands.
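The telemetry path above can be sketched end to end. Note that this stand-in packs the reading with Python's struct module rather than FlatBuffers (the actual wire format used by Fripuck), and the field layout (message id, timestamp, distance) is my invention for illustration, not the real schema:

```python
import struct

# Hypothetical packet layout, little-endian:
#   u8 message id, u32 timestamp (ms), u16 distance (mm)
TOF_FMT = "<BIH"
TOF_MSG_ID = 0x01

def pack_tof(timestamp_ms: int, distance_mm: int) -> bytes:
    """Firmware side: package one time-of-flight reading for the network."""
    return struct.pack(TOF_FMT, TOF_MSG_ID, timestamp_ms, distance_mm)

def unpack_tof(packet: bytes) -> tuple[int, int]:
    """Client side: recover the reading from the received bytes."""
    msg_id, ts, dist = struct.unpack(TOF_FMT, packet)
    assert msg_id == TOF_MSG_ID, "unexpected message type"
    return ts, dist

packet = pack_tof(1500, 87)     # robot reads 87 mm at t = 1500 ms
print(unpack_tof(packet))       # → (1500, 87)
```

A schema-based format like FlatBuffers does the same job but lets the firmware and the API share one schema file, so both sides stay in sync as fields are added.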

Wrapping up

So far, I've been really enjoying the process. C was the first language I ever learned, and I hope this project will elevate my C skills to the next level. I am also having fun discovering the world of embedded programming, although I don't find it as welcoming as other domains, since documentation and tutorials aren't as readily available. I often find myself reading through multiple example projects to understand what I am supposed to do, but on the bright side, reading and comprehending code is just as important a skill to train as writing it (since AIs are going to take the writing away, apparently 🤷).

Next month, I'll probably go a bit more into the architecture I've planned and some technical challenges I've come across. Anyway, if you stayed through the whole text, thank you for your time!