EOSS2023 – Automated Full System Testing on Hardware With OpenQA (Laurence Urhegyi – Codethink)


Arnout Vandecappelle, one of our senior embedded consultants, is reporting back from this leading conference.

 

In automotive, automated functional testing on hardware matters a lot. OpenQA is a framework that makes such testing easier. A lack of automated testing leads to fear of updating components such as the kernel, because things may break.

Arnout Vandecappelle, Mind (Essensium division), embedded software consultant
27/06/2023

Increasing software complexity is an overarching challenge in automotive. More complexity means a greater chance of problems, and hence a greater need for testing. OTA updates are particularly problematic, so full system updates are generally avoided.

Codethink is a software services company, specialised in open source and in build/test/integration engineering. As a consultancy, they get to see many different automotive projects.

The theory of constraints states that the throughput of a system is limited by the throughput of its tightest bottleneck; improvements there improve the entire system. This theory applies to the software lifecycle as well. Codethink sees that in many projects testing is the main bottleneck, so improvements in testing have the biggest impact. On the one hand this is about making testing more reproducible, on the other hand about "shifting left", i.e. doing testing earlier in the development cycle.

A lot of projects still rely on manual testing. Hardware is often a scarce resource, especially early in development, so emulation is important to avoid needing the hardware. In automotive, the vehicle environment itself is the most expensive one. Any issue that could have been found earlier is a huge waste of time. Conversely, the benefit of improving testing and finding issues earlier is difficult to quantify, because the costly issue never arises.

Codethink has talked about this at ELC 2021 and FOSDEM 2022, so this talk is an update to those.

OpenQA comes from openSUSE and is quite established there. It's designed for desktop operating systems and centered around the UI, based on screenshot comparisons ("needles"). A needle defines areas of the screen to match, to click, and to mask. A workflow that moves through screens can be established quite easily this way. The tests run in QEMU, so no hardware is needed.
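As an illustration (not taken from the talk): a needle is a JSON file paired with a reference screenshot. The tag, coordinates, and file names below are made-up placeholders.

```python
import json

# Hypothetical needle for a "home screen": the JSON sits next to a reference
# screenshot (home-screen.png). "match" areas must be found in the live
# screenshot, "exclude" areas are masked out (e.g. a clock that always
# changes), and test code refers to the needle by its tag.
needle = {
    "tags": ["home-screen"],
    "area": [
        {"xpos": 40, "ypos": 60, "width": 200, "height": 80, "type": "match"},
        {"xpos": 600, "ypos": 0, "width": 120, "height": 30, "type": "exclude"},
    ],
    "properties": [],
}

with open("home-screen.json", "w") as f:
    json.dump(needle, f, indent=2)
```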

The OpenQA server stores the needles (they can be quite large) and has a dashboard for the results. Tests are triggered from a GitLab (or other) pipeline. The dashboard has a very good UI for inspecting failed tests, seeing what went wrong, and modifying needles if needed.
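As a rough sketch of such a trigger (the host name and job variables are placeholders, not from the talk), a CI job can post a new test run to the OpenQA server using the standard openqa-cli client:

```python
import subprocess

# Post a new test run to the OpenQA server; the API key/secret are assumed to
# be configured in the runner's OpenQA client config. All values are placeholders.
subprocess.run(
    [
        "openqa-cli", "api", "--host", "https://openqa.example.com",
        "-X", "POST", "isos",
        "DISTRI=example-distro", "VERSION=1.0", "FLAVOR=hmi",
        "ARCH=aarch64", "ISO=system-image.iso",
    ],
    check=True,
)
```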

The same tests should also be run on hardware, because behaviour may be different there. That requires hardware orchestration, for which they use LAVA. To capture the screenshots, VNC is used. This setup was built around development boards.
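As a small sketch of the screenshot side (the host, port, and file name are assumptions, not from the talk), a VNC client library such as vncdotool can grab the framebuffer that is then matched against the needles:

```python
from vncdotool import api

# Connect to the VNC server exposed by the development board and grab a
# screenshot for needle matching; host/port and file name are placeholders.
client = api.connect("devboard.local::5900")
client.captureScreen("current-screen.png")
client.disconnect()
```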

To run on representative devices, the tests must be integrated in the actual system image. For this, they developed QAD, a very small daemon that can be added to production hardware, i.e. it doesn't require a special test version of the software. It captures all displays attached to the system (using DRM, so not if Wayland is used; Wayland support is future work) and injects input events, touch and swipe, using the standard virtual evdev Linux interface. All of this is made accessible through an HTTPS API, which matches OpenQA needles pretty well.
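The talk doesn't show QAD's internals, but the injection mechanism it names, the virtual evdev (uinput) interface, looks roughly like this minimal Python sketch of a single tap (device name, screen size, and coordinates are made up):

```python
from evdev import UInput, AbsInfo, ecodes as e

# Create a virtual touchscreen via uinput and inject one tap. This only
# illustrates the kernel mechanism QAD builds on; it is not QAD code.
capabilities = {
    e.EV_KEY: [e.BTN_TOUCH],
    e.EV_ABS: [
        (e.ABS_X, AbsInfo(value=0, min=0, max=1919, fuzz=0, flat=0, resolution=0)),
        (e.ABS_Y, AbsInfo(value=0, min=0, max=1079, fuzz=0, flat=0, resolution=0)),
    ],
}

with UInput(capabilities, name="virtual-touchscreen") as ui:
    ui.write(e.EV_ABS, e.ABS_X, 500)    # move to x=500
    ui.write(e.EV_ABS, e.ABS_Y, 300)    # move to y=300
    ui.write(e.EV_KEY, e.BTN_TOUCH, 1)  # finger down
    ui.syn()
    ui.write(e.EV_KEY, e.BTN_TOUCH, 0)  # finger up
    ui.syn()
```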

In addition to testing, QAD can also be used for remote access to the device. This makes shared access to hardware resources easier.

QAD is needed because you don't want to install VNC or a Wayland remote desktop in the production software.

With QAD, the GitLab runner runs on a PC or Raspberry Pi sitting next to the test rig. The runner flashes the rig or rack and does the orchestration.

When setting up tests for the first time, the biggest chore is creating the needles. For this, they created a QAD Web UI, which allows needles to be created manually by marking up the screenshots actually captured from the device. Note that this is not released as open source yet.

For future work, they will also need to handle the environment of the unit under test. The most important part here is the CAN interactions. The idea is a CAN simulator steered by a JSON file that specifies the messages. This is only half-implemented at the moment.
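Since the simulator is only half-implemented, the sketch below is merely one way the idea could look (the JSON schema, file name, and vcan0 channel are assumptions): replaying messages from a JSON scenario onto a CAN bus with python-can.

```python
import json
import time
import can  # python-can

def replay(scenario_path: str, channel: str = "vcan0") -> None:
    """Replay CAN frames described in a JSON file onto a (virtual) CAN bus."""
    with open(scenario_path) as f:
        # Assumed schema: [{"id": "0x123", "data": "11223344", "delay": 0.1}, ...]
        frames = json.load(f)

    with can.Bus(interface="socketcan", channel=channel) as bus:
        for frame in frames:
            msg = can.Message(
                arbitration_id=int(frame["id"], 16),
                data=bytes.fromhex(frame["data"]),
                is_extended_id=False,
            )
            bus.send(msg)
            time.sleep(frame.get("delay", 0.0))

if __name__ == "__main__":
    replay("can_scenario.json")
```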

One big bottleneck in manual tests of infotainment systems is testing USB plug events, which requires manually plugging and unplugging a USB device. They created a USB switcher that can do this automatically. It switches two devices, but units can be daisy-chained. It is completely open hardware.

The test environment of a rig is a lot of discrete hardware and cables. Codethink has developed a testing-in-a-box that contains a bunch of interfaces: serial, CAN, USB switch, USB hub, and also HID emulation for a USB keyboard and mouse. It also has Wi-Fi and Bluetooth, but these haven't been used yet. It comes with a GitLab runner and an OpenQA agent.

More info on this topic:

https://eoss2023.sched.com/event/1Lbkq/automated-full-system-testing-on-hardware-with-openqa-laurence-urhegyi-codethink


 
