
Series of Articles: "Improving embedded software development processes using Open Source software"


Basing an embedded device on Free and Open-Source Software (FOSS) brings many advantages, not the least of which is complete control over the software stack and free reuse of existing high quality solutions. However, it also means having to deal with large amounts of code, mainly coming from external parties. Coping with this can be a challenge for small embedded teams, used to smaller stacks developed in-house.

In this series of articles, we take a step by step tour of good software development processes and how to use them to improve your organization. We emphasize embedded development and point out particular pitfalls to avoid.


See also:

Part 1 / 4: The case for embedded FOSS and the consequences

Part 3 / 4: Embedded Software Testing


Part 2 / 4: The embedded FOSS software development process toolbox

By: Arnout Vandecappelle & Gian-Carlo Pascutto.


Like any software development, embedded software development needs to be supported by good development and maintenance practices. In the first part we saw why the use of Free and Open Source Software in modern embedded systems can no longer be ignored, and what difficulties it introduces in the software development process. In this part, we look at the process support tools that are suitable for embedded system development with FOSS: version control, issue tracking, documentation, build management, and release management. Bringing these practices together allows you to deal efficiently with FOSS in your embedded software development.


1 Version control

Version Control Systems (VCSes) have a history as long as software engineering itself. There have always been many competing VCSes with different features, and even after all this time none of them has emerged victorious.

The reason it is so hard to get a VCS right is that it is used for several different purposes: backing up your edits, logging what was done when, collaboration between several developers, returning to a known working situation, parallel development of independent features that are later merged together, exploratory development, removing things without losing them, release management, configuration management, auditing/traceability (cf. Signed-off-by in the kernel), and probably more. In addition, the VCS is supposed to integrate and interact with other processes: issue tracking (a bug is fixed by this commit), the integrated development environment, build automation (a buildbot checks out the most recent integration version), validation and testing (make sure the code you tested is in the repository), integration (check out particular versions of the dependencies), communication (sending out patches on mailing lists), authentication (who is allowed to commit what), project management, and more. No VCS that currently exists covers all of this perfectly, but serious progress has been made in the last couple of years.

One important way to categorize VCSes is centralized vs. distributed. In centralized VCSes, there is one central repository (usually on the network), and all commits go to that repository. In distributed VCSes, every developer has his own copy of the repository, and the repositories must be synchronized with each other explicitly.

Distributed VCSes are simply better. They make it possible to do any repository operation (commit, log, checkout) even when off-line. They are faster (less network traffic and less server load). They lower the threshold for making a commit. And you get back-ups of your repository for free. Distributed VCSes also still allow you to work in a centralized way: a push to the central repository is equivalent to a commit in a centralized VCS. The only disadvantage of distributed VCSes is that they are a bit harder to understand.

Distributed VCSes are essential when you're dealing with open-source software. You usually need to patch some of the components you integrate, and these modifications obviously need to be in your VCS. You could keep just the patches in the VCS, but that's not very convenient. The logical approach is to create a repository for the upstream project and commit your modifications into that repository. With a centralized VCS, there is no explicit relation between your own repository and the upstream project. As a workaround, the concept of vendor branches was introduced: upstream releases are dropped into a separate branch of the VCS, which can then be explicitly merged into the project. Doing such a merge is generally very awkward, and a centralized VCS tends to make it harder rather than helping you. In a distributed VCS, on the other hand, an upstream is just another remote repository from which you can pull, and the VCS is built around making this process easy.

We advocate git as the VCS of choice. It is distributed, fast, powerful, and has a lot of traction in the community. Its disadvantages are that Windows support is lagging behind (but it's catching up) and that the simple usage patterns are not immediately obvious from the documentation. The latter can be remedied by using a GUI (e.g. git-gui and gitk) or by writing helper scripts that support your development process.
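
As a minimal illustration of day-to-day git usage (the repository URL and branch name below are hypothetical):

  # Get a local copy of the repository, including its full history
  git clone git://example.com/project.git
  cd project

  # Work on a local branch; commits are local and instantaneous
  git checkout -b fix-watchdog
  git commit -a -m "Fix watchdog timeout handling"

  # Publish the result when it is ready
  git push origin fix-watchdog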

Sometimes you need to interact with an environment that is not git. For instance, an upstream project may still use Subversion, or your team may be obliged to use Perforce. In these cases, it is still possible to use git. Interacting with a centralized VCS is easy, since a centralized commit is equivalent to a push. Therefore, wrappers exist for the most common centralized VCSes: git-svn, git-cvs, git-p4, … If several people in the team are willing to use the distributed VCS, you can set up one central repository that pushes to the centralized VCS and operate in a distributed way around that repository.
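
For example, with git-svn (the Subversion URL is hypothetical):

  # Import an existing Subversion repository, preserving its history
  git svn clone http://svn.example.com/project/trunk project
  cd project

  # Commit locally as usual, then synchronize with Subversion:
  git svn rebase    # fetch new upstream commits, replay local work on top
  git svn dcommit   # push each local commit back as a Subversion commit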

Your repository should be organized in a way that facilitates working with upstream. If you're going to send patches upstream, it is best to create a topic branch for each such feature. topgit is a wrapper around git that automates a lot of the work around topic branches; darcs and mercurial also have support for patch queues, but we are less familiar with those. The idea of a topic branch is that you work on the patch in a separate branch of the repository. This branch has a base, which is a merge of all the branches (in particular, other topic branches) that this branch depends on. To do a merge, first all changes of the dependencies are merged into the base (allowing you to resolve conflicts caused by that merge), then the topic branch changes are merged into the new base. Whenever you want to send changes upstream, all the commits of the topic branch are squashed into a single patch against the topic branch's base. This allows you to commit often while doing development, but still produce a single clean patch to send upstream. It also allows you to integrate comments from upstream while keeping a revision history. If you don't have a topic branch, you need something like git rebase -i to get a proper patch after integrating comments, but that means you rewrite history…
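
In plain git, without topgit's automation, the essence looks like this (branch and file names are hypothetical):

  # Create a topic branch starting from the base it depends on
  git checkout -b fix-dma upstream/master

  # ... hack, commit as often as you like ...
  git commit -a -m "WIP: handle unaligned DMA buffers"

  # When it is ready, produce one clean patch against the base
  # (A...B diffs from the merge base of A and B up to B)
  git diff upstream/master...fix-dma > fix-dma.patch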

Modern VCSes offer some interesting advanced features; each is described below, and a few illustrative git commands follow the list:

  • bisect: given a version that has a problem and an earlier version that doesn't have the problem, bisecting allows you to find the commit that caused the problem.
  • rebase: when merging in changes from upstream development, you can rebase against the upstream instead of merging. This means you rewrite history and re-apply your changes on top of the new upstream version. That way, each individual change is retained instead of squashing everything into a single merge commit. This can make it easier to resolve conflicts, since you see each patch individually rather than the whole merge together. However, rewriting history is generally a bad idea; at least, you should do the rebase in a new branch so the old history isn't lost.
  • interactive commit: often, while developing some features, you simultaneously develop something else or fix some other bug. Ideally, you want to have a separate commit for each feature or fix, with separate commit messages. This gives a better change history and more fine-grained options for cherry-picking. Interactive commit allows you to specify exactly which parts of the working tree to commit, or even to edit the individual patches that will be committed.
  • cherry-pick: when you have different development branches (e.g. when there's upstream development going on), you often want to take one specific change from another developer without merging his entire tree (especially if it isn't stable yet). Cherry-picking allows you to select exactly which commits to take from another branch, and allows for a cleaner merge later on. darcs is particularly good at cherry-picking.
  • stash: sometimes while working you suddenly want to try something else or need to develop another feature first. You could commit your current work on a new branch and then start on the other job, but this is a bit labour-intensive and pollutes history. Stashing just stores your working tree in a temporary place and checks out a clean tree. When you're done making the other changes, you can unstash the previously stashed work and merge it back with the new working tree.
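
The corresponding git commands, as a sketch (commit ids and the test script are placeholders):

  # bisect: binary-search for the commit that introduced a bug
  git bisect start <bad-commit> <good-commit>
  git bisect run ./test.sh      # test.sh exits non-zero on failure

  # rebase: replay your changes on top of a new upstream version
  git rebase upstream/master

  # interactive commit: choose exactly which hunks go into the commit
  git add -p
  git commit

  # cherry-pick: take one specific commit from another branch
  git cherry-pick <commit-id>

  # stash: park the working tree, do something else, come back
  git stash
  git stash pop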

The final question with the VCS is how to organize the repositories. In embedded systems using FOSS, we are always combining components, some of which have an open-source upstream and some of which are local. In this situation, it is best to make a separate repository for each component. For the ones with an upstream, clone the upstream repository. For the local ones, split them into parts that you want to freeze individually, but avoid splitting too much: splitting makes your life harder, and it is only really worthwhile if the same component is used in completely independent projects. Finally, there is an integration repository that tracks the others (using scripts or submodules) and contains the top-level build scripts to compose the final image. Often you'll have only one component repository (your application), which is the integration repository as well. Within each repository, keep development on a branch (or locally cloned repository) where non-working commits are allowed. Commits to the trunk (or central repository) should always be working. This makes sure that any developer or integrator can check out the trunk and be sure it will work.
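
With git submodules, such an integration repository can be set up roughly like this (URLs and paths are hypothetical):

  # In the integration repository, track each component repository
  git submodule add git://example.com/busybox.git components/busybox
  git submodule add git://example.com/ourapp.git components/ourapp
  git commit -m "Track component repositories as submodules"

  # A fresh checkout of the integration repository then goes:
  git clone git://example.com/integration.git
  cd integration
  git submodule update --init   # fetch the pinned revision of each component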


2 Issue tracking

An issue tracking system is basically a version control system for the comments you have on the software. Its main purpose is to track bugs: who found it, what can be done to reproduce it, what has been tried, and how it was fixed. However, missing features can also be tracked with it. This makes it easy to discuss those features, keep track of when they were implemented, and plan them together with the bug fixes (comparing priorities, etc.).

A good issue tracking system is also a project planning tool. It allows you to define milestones with target dates and assign issues to the milestones. It allows you to assign developers to the issues. And it allows you to estimate how much time is required to solve the issue, which helps to determine if the target date is realistic.

The issue tracking system also tracks dependencies between issues. This is particularly important for new features. Often a requested feature also requires some other changes in the software infrastructure. Those other changes may be done by different developers or may be required for other features as well. Splitting the issue into separate items with dependencies between them allows you to track them properly.

The issue tracking history and the software revision history are clearly tightly correlated. A commit is usually related to some issue (implementing a new feature or fixing a bug), and each issue is either invalid or gets resolved by a number of commits. These links should be made explicit. This is typically done by adding something like 'Resolves #394' in the commit message that resolves issue 394, and by copying the commit message (with the commit revision number) when closing the issue. Ideally this linking should be done automatically, but AFAIK no VCS/issue tracker combination exists that does that.
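
You can at least enforce the convention on the VCS side. A sketch: a git commit-msg hook (the 'Resolves/Refs #NNN' policy is hypothetical) that refuses commits which reference no issue:

  #!/bin/sh
  # .git/hooks/commit-msg -- git passes the commit message file as $1.
  # Hypothetical policy: every commit must reference an issue as
  # "Resolves #NNN" or "Refs #NNN" so the tracker can link it.
  if ! grep -qE '(Resolves|Refs) #[0-9]+' "$1"; then
      echo "commit message must reference an issue (Resolves/Refs #NNN)" >&2
      exit 1
  fi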

Several good open-source issue trackers exist, but none are perfect. Bugzilla and trac are certainly good. trac's big disadvantage is that it doesn't work well for multiple projects (i.e. multiple repositories), although this should be supported in release 0.12 (Real Soon Now!). Bugzilla is very complete when it comes to issue tracking, but it is big and complex, and it doesn't link with the VCS. Bugs Everywhere looks like an interesting alternative with git integration (but no web interface as of yet…), but we haven't tried it yet.


3 Documenting

Embedded software typically has a long lifetime and sees several generations of maintainers; open-source software is, in addition, exposed to many developers simultaneously. Therefore, the software should be well documented. On the other hand, the user interface of embedded systems is typically very limited, so the burden on user manuals (for the software!) is smaller.

We distinguish code documentation from design documentation and user documentation. Code documentation describes what a particular piece of code does. Design documentation describes the overall concepts of a component, and how different components fit together. User documentation tells the user what to do and abstracts away from the implementation.

Code documentation goes directly into the code. Inline comments should be used to document hidden assumptions and side effects, not to explain the obvious: don't document what you can see; document what you can't see. As much as possible, the code should be self-documenting.

Assertions are a powerful tool to make the code self-documenting: they are unobtrusive, they can be optimized away, and they improve debug support for free. Many frameworks include specific support for assertions. In the kernel, use BUG() and WARN(). In glib-based code, use g_assert() and g_assert_not_reached(). In other code, use assert().

Functions, data structures, macros and global variables should be documented with a description of their function in front of the symbol. Always use doxygen comments. Doxygen allows you to automatically generate cross-linked documentation in HTML and other formats. Even more importantly, most editors and IDEs recognize the format and do some highlighting and cross-linking immediately in the source code. So even if you don't generate the HTML documentation, use the doxygen format. Some frameworks have their own documentation tools: the kernel uses kerneldoc, Gnome (and therefore GStreamer) uses gtkdoc, and Java uses javadoc. However, they are all very similar:

  /**
   * <kerneldoc and gtkdoc require the symbol to be specified>
   *
   * Short one-sentence description.
   *
   * Full description.
   *
   * @param foo Documentation for first parameter.  For kerneldoc and gtkdoc, it's @foo.
   */

For design documentation, try to convince the customer to put it on a wiki, or at least have it versioned in the repository. The big risk with design documentation is that it diverges from reality; not much can be done to avoid that.


4 Building

make is the basic tool, but it is not enough. IDEs (Visual C, KDevelop, Eclipse) have built-in build tools, but they are not good for reproducibility. autotools is OK for simple things, but it is hell to understand what it really does: it is supposed to make things portable, but rarely does so for non-trivial dependencies. SCons is OK but slow; BitBake likewise. CMake is really good and helps with cross-platform builds, but it is a bit difficult to add features to. Probably the best option is to use CMake for checking dependencies and simple compilation, and shell scripts for actually doing stuff. If you do use make, avoid recursing into subdirectories like autotools does: it makes the build process harder to understand and loses cross-directory dependencies. Recursive make can be useful for things like setting up the top-level makefile depending on the configuration, but for such situations a shell script is usually more appropriate.
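
A sketch of that split (the script and path names are hypothetical): CMake handles dependency checking and compilation, a shell script drives everything around it:

  #!/bin/sh -e
  # Hypothetical top-level build driver.
  BUILDDIR=build

  # Let CMake check dependencies and generate the makefiles, then compile
  mkdir -p $BUILDDIR
  cd $BUILDDIR
  cmake ..
  make
  cd ..

  # Everything else (composing the target image, packaging) in plain shell
  ./scripts/mkrootfs.sh $BUILDDIR   # hypothetical image-composition script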

Dependencies are tricky in your build system. Consider the following problems, and verify that the build system acts appropriately when these situations occur.

  • What happens when you change one of the build system’s files or scripts?
  • What happens when a compiler changes, or another tool that generates files?
  • What happens when an external dependency (library, …) changes?
  • What happens when you change some environment variable that affects the build?

Dependencies on the build environment are even trickier. You want the build to be reproducible: you don't want to end up in the situation where the build works on one machine but no longer works when you compile on another machine. Several solutions exist.

  • Always use the same build machine. This creates a bit of a bottleneck, and if you upgrade the build machine, reproducibility may again be a problem.
  • Build or download the build tools. This approach is taken by e.g. buildroot and openembedded. A clean build then, of course, takes extremely long, usually more than a day. Debian's pbuilder and emdebian also use this approach, but they just download binaries from the Debian repositories. pbuilder puts them in a chroot environment, while emdebian installs them right into the filesystem. The latter is more efficient space-wise, but can be a problem for reproducibility. In a clean build with pbuilder, downloading can take a long time too, but that can be alleviated by installing an apt cache or proxy, or by caching the pbuilder image. Note that in all of these approaches (although it is less likely with pbuilder), you may accidentally use some files outside the staging area, which destroys reproducibility again.
  • Create a build image. Compilation either chroots into this image or executes in a virtual machine. The source code is accessed over a bind mount or a network drive. This gives maximum reproducibility, on condition that the build image is read-only and is stored whenever an upgrade is done, e.g. by putting it in a VCS repository. The chrooted build image is basically the pbuilder approach, but pbuilder also supports generating the image if you don't have one yet; see the sketch after this list.
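
A pbuilder sketch (the distribution name and package are examples):

  # Create the chrooted build image once, and keep the tarball somewhere safe
  sudo pbuilder create --distribution lenny \
      --basetgz /var/cache/pbuilder/lenny.tgz

  # Every build then runs inside a clean copy of that image
  sudo pbuilder build --basetgz /var/cache/pbuilder/lenny.tgz mypackage.dsc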

In the end, you need to create an image and put it on the target device. Some recommendations:

  • Use a packaging system (ipkg/opkg) if at all possible, because it makes it much easier to reliably install modified versions while debugging. If you just copy individual files over, you will end up in the situation that you’re accidentally executing the old version.
  • Build the target image from scratch, and make sure that it is done by a script that is in the VCS. You may need to create two images: one for the NAND Flash, and one for the external flash (SD card, CompactFlash card, USB stick, disk).
  • The easiest and fastest is usually to create the root file system in a directory first, and copy it into the mounted image afterwards. That gives you direct access to the files if something goes wrong during the build.
  • To create a disk image for the external flash, you’ll also need to partition it. Use the following pseudo-script:
  dd if=/dev/null of=target.img bs=512 count=1 seek=$total_size # Create an empty (sparse) file of $total_size sectors
  sfdisk --force target.img < partition_table                   # Create the partition table
  LOOPDEV=`losetup -f 2>/dev/null`                              # Find a free loopback device
  losetup -o <offset of partition> $LOOPDEV target.img          # Set up a loopback block device within the image file
  mkfs.ext3 $LOOPDEV                                            # Create the filesystem
  mkdir -p mnt
  mount $LOOPDEV mnt
  cp -ax rootfs/. mnt                                           # Fill the filesystem
  umount mnt                                                    # Clean up
  losetup -d $LOOPDEV
  
  • For jffs2 and cramfs, you'll need to generate the image from the file tree with the corresponding tool (mkfs.jffs2, mkcramfs) instead of copying like above.
  • For in-the-field upgrades, you will not want to overwrite the complete image (to keep config data). Extract the upgrade image from the total image.


5 Releasing

Release management is more a management problem than a technical problem. Still, here are some important considerations.

  • You must guarantee reproducibility. Otherwise, when an error is discovered in the field, it will be impossible to do something about it if it is not easily reproducible in your current development tree.
  • Use the VCS to help with reproducibility: tag the release, i.e. not just the image build but also all components (see the sketch after this list). Alternatively, let the image build check out a particular revision of each module (cf. git submodules).
  • Always do the release build from a clean state (fresh checkout, fresh build environment).
  • Make sure you have some semi-automated testing to verify that the release actually works: does it boot? Are all components there? Does it interact properly with other devices? Does an upgrade of an existing device still work?
  • Make sure that releases are planned in a way that leaves sufficient time to do a clean build and the release test.
  • Occasionally test reproducibility: regenerate an old release and check if it is indeed the same (e.g. compare the disk images).
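
A sketch of such a tagged, clean release build for a submodule-based integration repository (script names are hypothetical):

  #!/bin/sh -e
  # Hypothetical release script.
  RELEASE=$1                     # e.g. v1.2.0

  # Tag the integration repository and every component
  git tag -a "$RELEASE" -m "Release $RELEASE"
  git submodule foreach "git tag -a $RELEASE -m 'Release $RELEASE'"

  # Do the release build from a fresh checkout
  git clone --recursive . /tmp/release-$RELEASE
  cd /tmp/release-$RELEASE
  ./build-image.sh               # hypothetical top-level build script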

Reproducibility is a lot simpler if you archive the whole build environment. This takes a lot of disk space, but it’s really worth spending $50 on a terabyte to store 50 releases without worry (note that this doesn’t have to be expensive fast, backed-up, RAIDed NAS space). Make sure you also build and store the debug packages, because that’s what you’re going to need when you need to reproduce the release.

If several configurations need to be supported, it really pays off to have automated nightly builds and tests for all configurations. You don’t want to discover that some configuration doesn’t work on the day that you’re making the release.

Make sure that the configuration is also somewhere in VCS. You can have e.g. several kernel config files in your image build VCS and copy them into the kernel source tree as appropriate. If you build several configurations for a single release, make sure this is done automatically so that all configurations are stored in VCS.
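
For example, for kernel configurations (paths are hypothetical):

  # The config files are versioned in the image build repository
  cp configs/kernel-board-a.config linux/.config
  make -C linux oldconfig        # re-validate the config against this source tree
  make -C linux

  cp configs/kernel-board-b.config linux/.config
  make -C linux oldconfig
  make -C linux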


6 Conclusions

Embedded software development is more than just writing code. You have to manage the process with a version control system, an issue tracking system, documentation, a build system, and release management. Fortunately, there are open-source tools supporting these processes. In the next instalment of this series, we’ll discuss the software development aspects that have to do with actual programming: testing, debugging and optimization.


