Embedded Recipes 2023, Paris, France – Day 2, part 2 (18 Oct)
This article concludes the series of reports on the talks at Embedded Recipes 2023. It covers one full-length talk, followed by three short talks.
The video of the live stream can be viewed on YouTube. At the time of writing, the cut clips for the individual talks are not yet available.
Display support on Embedded Systems: a tour of Linux implementations and limitations – Neil Armstrong, Linaro/Qualcomm
This talk is not about DRM (Direct Rendering Manager) and GPUs – there are other talks about those. It’s about the display pipeline, which in embedded systems is generally separate from the above. A pixel is read from memory, usually a line at a time, then perhaps processed a little (e.g. colourspace conversion), and then put in a timing buffer. From the timing buffer it is read out at exactly the right moment to be sent to the display itself, together with the synchronisation and blanking signals.
The display pipeline has sources and sinks: a source emits a display signal, a sink receives it. They have timings – the additional synchronisation information. A packet is the minimum number of pixels that are transferred together, and the lanes are the bits that travel in parallel.
The display engine consists of a memory reader; a planes blender that does colour correction, scaling and the like, plus the blending that mixes the different planes; timings; a protocol; and the physical transmission. In Linux, the memory is managed by GEM (the Graphics Execution Manager) in DRM. Plane scaling and related operations are managed with Universal Planes. The blending is performed in the CRTC block. The timings are added in the encoder block; this is the last step in preparing the image itself. The next block, the bridge that implements the protocol, prepares the image for transmission. Several bridges can be chained – e.g. DSI output can be converted to DisplayPort by a separate hardware block. The physical transmission, finally, is represented in the PHY subsystem. A multi-head system is represented by multiple CRTCs that (possibly) connect to a MUX that selects the corresponding CRTC(s). It is often hard to find out from datasheets how the hardware blocks are actually connected together in the SoC.
For a description of the HDMI, DisplayPort, MIPI DSI and LVDS protocols, see the slides. Here we highlight some salient points about each of them.
HDMI is based on DVI-D, with timings that allow for higher frequencies and higher resolutions. It also adds signalling packets and supplementary signals that make it possible to transport more than video. One example is CEC (Consumer Electronics Control), a bi-directional channel for sending commands to the peer device. Many additional features specified by HDMI are not supported in Linux because the spec is closed and the features are complex – e.g. the Ethernet channel and the audio return channel. From HDMI 2 on there is even more that is not supported, e.g. 8K60 and other higher resolutions, again because the specs are closed.
DisplayPort is a modern protocol with fully packetised data transmission; it even includes a clock signal in a micro-packet. It is extensible and can e.g. carry USB. DisplayPort is much better supported in Linux, but still very complex. It is well supported on x86, but much less so on other (ARM) platforms.
MIPI DSI is the interface used in many embedded systems, for both camera (CSI) and display (DSI); they share a common PHY (D-PHY). Panel support is basically implemented using an initialisation table, which is pretty much a binary blob. It is not possible to implement advanced features or calibration, and if something doesn’t work, it cannot be debugged. In general, the SoC vendors don’t really document the display protocols either – only the STM32MP1 really documents everything – so in most cases the only documentation is the vendor’s kernel fork. There is also a conformance qualification, which is impossible to pass for open source, so each product vendor has to do it by themselves.
LVDS is a quite simple standard. It used to be used in laptops, but has nowadays been replaced by eDP (embedded DisplayPort).
In general, the lack of access to standards is a huge problem for open source support of all of this.
[ Olivier’s personal thoughts: a pity that displays are so badly supported. Our own bad experience with LCD display support is thus, in retrospect, not exceptional. In other words, when integrating display panels: beware, there be dragons. ]
How to update your Yocto layer for embedded systems? – Charles-Antoine Couret, Mind
The first thing to make sure of is that you can test everything, and that you set aside enough time to do so. Make a list of everything that must be checked to survive the migration. Next, read the release notes to see what changed between the previous version and the new one. This way you detect major incompatibilities, like the change of the override separator from _ to : that came with the honister release. For that specific one there is fortunately a migration script. There are also variables that get renamed, or defaults that change.
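As an example, the honister override syntax change mentioned above looks like this in a recipe or configuration file (the variable values are illustrative):

```
# Before honister (old override syntax, using "_"):
IMAGE_INSTALL_append = " htop"
SRC_URI_remove = "file://old.patch"

# From honister on (new override syntax, using ":"):
IMAGE_INSTALL:append = " htop"
SRC_URI:remove = "file://old.patch"
```

The convert-overrides.py script shipped with openembedded-core automates most of this conversion, but the result still needs review, since the script cannot always tell an override suffix apart from an underscore that is simply part of a variable name.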
In general it’s not too complex, but it involves a lot of small tasks and a lot of tests to perform.
To reduce the effort, make sure you upstream your changes. That way the maintainers will take care of handling any breaking changes, and other people may contribute improvements or bugfixes to your feature.
For your own applications, it helps to test them continuously on recent systems (Fedora, Debian). That allows you to weed out in advance most issues that may arise from an updated GCC, libc, etc.
Snagboot: vendor-agnostic board flashing & recovery – Romain Gantois, Bootlin
With a running Linux system or a working bootloader, it’s relatively easy to update the software. However, if the board is bricked, or has never been flashed before, you need something that works at ROM-code level. An SD card is a good, reliable method, but not all boards have one. For production it’s also a cumbersome approach, and both the card and its holder get damaged over time. Most boot ROMs, on the other hand, support booting over USB (gadget). In this mode, which is entered automatically when no valid firmware is found or when some pins are strapped at boot time, the SoC listens for commands on USB. However, every vendor has its own protocol, so if you have different boards with different SoCs, you need different tools with different CLIs.
Snagboot is a unified tool that supports multiple SoCs. It’s available through pip install snagboot. It focuses on recovery (snagrecover), i.e. getting a bootloader into DRAM. From there, the traditional flashing methods from the bootloader (using DFU, fastboot or UMS) can be applied; this is done with the snagflash tool.
The snagrecover CLI is very simple – just the files to load. The board config is in a YAML file. snagflash has a somewhat more complicated CLI because it supports multiple protocols, which work differently.
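As an illustration, the firmware configuration for snagrecover is a small YAML file mapping firmware names to image paths. The entries below are hypothetical – the firmware names expected, and the exact CLI flags, depend on the SoC family, so check the snagboot documentation for your board:

```yaml
# Hypothetical firmware config for an i.MX-style board.
# The firmware names snagrecover expects vary per SoC family.
u-boot:
  path: binaries/u-boot-with-spl.imx
```

With such a file in place, recovery would then be a single invocation along the lines of snagrecover with the SoC model and this YAML file as arguments (again, see the snagboot documentation for the exact syntax), after which snagflash takes over for the actual flashing.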
snagboot has existed for only a couple of months, and there have already been many contributions.
It probably only works on Linux; it wasn’t designed for Windows.
Automotive Grade Linux Status and Roadmap – Scott Murray, Konsulko
AGL (Automotive Grade Linux) was originally focused on In-Vehicle Infotainment (IVI), but now also includes the Instrument Cluster (i.e. the display behind the steering wheel) and Telematics (i.e. reporting vehicle status to the vendor). It’s a collaborative project with 150 members, and is currently used in cars by Toyota and Subaru.
The AGL distro is based on Yocto, with two releases per year. It uses PipeWire and WirePlumber to integrate audio and video, and has many examples available to showcase the various functionality that is needed. gRPC is used as the common inter-process communication system between the various components. AGL also heavily emphasises security, with SELinux.
In the future, it will showcase the use of containers and virtio integration, in particular virtio-gpu to allow the UI to run in a container or VM. Flutter was chosen for the UI. For the integration of realtime aspects, it uses the Xen hypervisor and RTOS integration. There is also a need to improve the documentation.