Kieran’s last talk said “libcamera exists, please use it.”


Dan worked on the camera driver for the MS Surface tablet. The camera is described in ACPI. The connection between the camera and the CSI input is not properly described, though – it’s hidden in a binary blob. There’s a struct that documents the meaning of this binary blob, and with that the ports and endpoints can be added manually in software.

The camera pipeline is quite complex because there are many things that need to be (or can be) configured on the camera itself and on the image processor on the SoC. All of these are exposed by the kernel as v4l2 devices and subdevices. libcamera brings it all together: you define the pipeline in libcamera, and simpler, global settings can then be applied to each pipeline stage as needed.

To integrate with existing applications, there’s a GStreamer element – that makes it possible to use libcamera with e.g. cheese. But you really need applications to support all the features. Instead of the application having to control v4l2 directly, libcamera can abstract this. But pipewire is also involved in the camera pipeline: libcamera is linked into pipewire, so applications can use the pipewire interface.

For user-facing applications, pipewire is probably the way to go. For most embedded applications, GStreamer and Qt are usually good abstractions. Only code that deals with the actual camera hardware should use libcamera directly. In some cases you need both, e.g. with OpenCV.

For browsers, e.g. for videoconferencing, WebRTC goes through the XDG camera portal into pipewire, and that calls into libcamera.

[More examples and demos of where libcamera + pipewire is used. See the video!]