Development tools are much more than just a text editor and a compiler. Correct use of the right tools can drastically ease debugging and tracking down complex problems with memory allocation and system calls, amongst other things. Some of the most commonly used tools are described below; other tools exist for more specialised use cases, and should be used when appropriate.
- Compile frequently with a second compiler.
- Enable a large selection of compiler warnings and make them fatal.
- Use GDB to debug and step through code.
- Use Valgrind to analyse memory usage, memory errors, cache and CPU performance and threading errors.
- Use gcov and lcov to analyse unit test coverage.
- Submit to Coverity as a cronjob and eliminate static analysis errors as they appear.
- Use Clang static analyser and Tartan regularly to eliminate statically analysable errors locally.
GCC and Clang
GCC is the standard C compiler for Linux. An alternative exists in the form of Clang, with comparable functionality. Choose one (probably GCC) to use as a main compiler, but occasionally use the other to compile the code, as the two detect slightly different sets of errors and warnings in code. Clang also comes with a static analyser tool which can be used to detect errors in code without compiling or running it; see Clang static analyser.
Both compilers should be used with as many warning flags enabled as possible. Although compiler warnings do occasionally produce false positives, most warnings legitimately point to problems in the code, and hence should be fixed rather than ignored. A development policy of enabling all warning flags and also specifying the -Werror flag (which makes all warnings fatal to compilation) promotes fixing warnings as soon as they are introduced. This helps code quality. The alternative of ignoring warnings leads to long debugging sessions to track down bugs caused by issues which would have been flagged up by the warnings. Similarly, ignoring warnings until the end of the development cycle, then spending a block of time enabling and fixing them all, wastes time and lets avoidable bugs survive for most of the cycle.
Both GCC and Clang support a wide range of compiler flags, only some of which are relevant to modern, multi-purpose code (e.g. others are outdated, or architecture-specific). Finding a reasonable set of flags to enable can be tricky, and hence the AX_COMPILER_FLAGS macro exists. It enables a consistent set of compiler warnings, and also tests that the compiler supports each flag before enabling it. This accounts for differences in the set of flags supported by GCC and Clang. To use it, add AX_COMPILER_FLAGS to configure.ac. If you are using in-tree copies of autoconf-archive macros, copy the AX_COMPILER_FLAGS macro file into the m4/ directory of your project. Note that it depends on several other autoconf-archive macros which cannot be copied in-tree due to being GPL-licenced. They must remain in autoconf-archive, with that as a build time dependency of the project.
AX_COMPILER_FLAGS supports disabling -Werror for release builds, so that releases may always be built against newer compilers which have introduced more warnings. Set its third parameter to ‘yes’ for release builds (and only release builds) to enable this functionality. Development and CI builds should always keep -Werror enabled. An easy way of determining whether this is a release version of a project is to use AX_IS_RELEASE([micro-version]). If this macro is used before AX_COMPILER_FLAGS, the third parameter to AX_COMPILER_FLAGS should not be passed; it will be picked up automatically from AX_IS_RELEASE.
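A minimal configure.ac fragment combining the two macros might look like this (a sketch; ‘micro-version’ is one of the release-detection schemes AX_IS_RELEASE supports):

```
dnl Decide whether this is a release or a development build.
AX_IS_RELEASE([micro-version])
dnl No third parameter: the -Werror policy is taken from AX_IS_RELEASE.
AX_COMPILER_FLAGS
```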
GDB
GDB is the standard debugger for C on Linux. Its most common uses are for debugging crashes, and for stepping through code as it executes. A full tutorial for using GDB is given here.
To run GDB on a program from within the source tree, use:
libtool exec gdb --args ./program-name --some --arguments --here
This is necessary due to libtool wrapping each compiled binary in the source tree in a shell script which sets up some libtool variables. It is not necessary for debugging installed executables.
GDB has many advanced features which can be combined to essentially create
small debugging scripts, triggered by different breakpoints in code. Sometimes
this is a useful approach (e.g. for
reference count debugging),
but sometimes simply using
g_debug() to output a debug message is simpler.
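For example, a breakpoint with an attached command list can log every call to a function without stopping execution. In this sketch, my_object_ref and its object parameter are hypothetical names to be adapted to the code being debugged:

```
# GDB command script: log each call to my_object_ref, then keep running
break my_object_ref
commands
silent
printf "ref: object %p\n", (void *) object
backtrace 3
continue
end
```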
Valgrind
Valgrind is a suite of tools for instrumenting and profiling programs. Its most famous tool is memcheck, but it has several other powerful and useful tools too. They are covered separately in the sections below.
A useful way of running Valgrind is to run a program’s unit test suite under Valgrind, setting Valgrind to return a status code indicating the number of errors it encountered. When run as part of make check, this will cause the checks to succeed if Valgrind finds no problems, and fail otherwise. However, running make check under Valgrind is not trivial to do on the command line. The AX_VALGRIND_CHECK macro from autoconf-archive can be used, which adds a new make check-valgrind target to automate this. To use it, copy the macro file into the m4/ directory of a project, add AX_VALGRIND_CHECK to configure.ac, and add @VALGRIND_CHECK_RULES@ to the top-level Makefile.am. When make check-valgrind is run, it will save its results in test-suite-*.log, one log file per Valgrind tool.
Valgrind has a way to suppress false positives, by using suppression files. These list patterns which may match error stack traces. If a stack trace from an error matches part of a suppression entry, it is not reported. For various reasons, GLib currently causes a number of false positives in memcheck, helgrind and drd which must be suppressed by default for Valgrind to be useful. For this reason, every project should use a standard GLib suppression file as well as a project-specific one.
Suppression files are supported by the make check-valgrind rules; list them in the top-level Makefile.am:
@VALGRIND_CHECK_RULES@
VALGRIND_SUPPRESSIONS_FILES = my-project.supp glib.supp
EXTRA_DIST = $(VALGRIND_SUPPRESSIONS_FILES)
memcheck
memcheck is a memory usage and allocation analyser. It detects problems with memory accesses and modifications of the heap (allocations and frees). It is a highly robust and mature tool, and its output can be entirely trusted. If it says there is ‘definitely’ a memory leak, there is definitely a memory leak which should be fixed. If it says there is ‘potentially’ a memory leak, there may be a leak to be fixed, or it may be memory allocated at initialisation time and used throughout the life of the program without needing to be freed.
A full tutorial on using memcheck is here.
cachegrind and KCacheGrind
cachegrind is a cache performance profiler which can also measure instruction execution, and hence is very useful for profiling general performance of a program. KCacheGrind is a useful UI for it which allows visualisation and exploration of the profiling data, and the two tools should rarely be used separately.
cachegrind works by simulating the processor’s memory hierarchy, so there are situations where it is not perfectly accurate. However, its results are always representative enough to be very useful in debugging performance hotspots.
A full tutorial on using cachegrind is here.
helgrind and drd
helgrind and drd are threading error detectors, checking for race conditions in memory accesses, and abuses of the POSIX pthreads API. They are similar tools, but are implemented using different techniques, so both should be used.
The kinds of errors detected by helgrind and drd are: data accessed from multiple threads without consistent locking, changes in lock acquisition order, freeing a mutex while it is locked, locking a locked mutex, unlocking an unlocked mutex, and several other errors. Each error, when detected, is printed to the console in a little report, with a separate report giving the allocation or spawning details of the mutexes or threads involved so that their definitions can be found.
helgrind and drd can produce more false positives than memcheck or cachegrind, so their output should be studied a little more carefully. However, threading problems are notoriously elusive even to experienced programmers, so helgrind and drd errors should not be dismissed lightly.
Full tutorials on using helgrind and drd are here and here.
sgcheck
sgcheck is an array bounds checker, which detects accesses to arrays which have overstepped the length of the array. However, it is a very young tool, still marked as experimental, and hence may produce more false positives than other tools.
As it is experimental, sgcheck must be run by passing --tool=exp-sgcheck to Valgrind, rather than --tool=sgcheck.
A full tutorial on using sgcheck is here.
gcov and lcov
gcov is a profiling tool built into GCC, which instruments code by adding extra instructions at compile time. When the program is run, this instrumentation generates coverage output files. These files can be analysed by the lcov tool, which generates visual reports of code coverage at runtime, highlighting lines of code in the project which are run more than others.
A critical use for this code coverage data collection is when running the unit tests: if the amount of code covered (e.g. which particular lines were run) by the unit tests is known, it can be used to guide further expansion of the unit tests. By regularly checking the code coverage attained by the unit tests, and expanding them towards 100%, you can be sure that the entire project is being tested. Often it is the case that a unit test exercises most of the code, but not a particular control flow path, which then harbours residual bugs.
lcov does not support branch coverage measurement, so is not suitable for demonstrating coverage of safety critical code. It is perfectly suitable for non-safety critical code.
As code coverage has to be enabled at both compile time and run time, a macro is provided to make things simpler. The AX_CODE_COVERAGE macro adds a make check-code-coverage target to the build system, which runs the unit tests with code coverage enabled, and generates a report using lcov. To add AX_CODE_COVERAGE support to a project, add it to configure.ac. The macro itself cannot be copied to the m4/ directory due to being GPL-licenced. Instead, the project must have a build time dependency on autoconf-archive (version 2014-10-15 or later).
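With AX_CODE_COVERAGE in configure.ac, the corresponding Makefile.am additions might look like this (a sketch: my_test is a placeholder test program, and the CODE_COVERAGE_* variables and rules are supplied by the macro):

```
@CODE_COVERAGE_RULES@
my_test_CFLAGS = $(CODE_COVERAGE_CFLAGS)
my_test_LDADD = $(CODE_COVERAGE_LDFLAGS)
```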
Documentation on using gcov and lcov is here.
Coverity
Coverity is one of the most popular and biggest commercial static analyser tools available. However, it is available to use free for Open Source projects, and any project is encouraged to sign up. Analysis is performed by running some analysis tools locally, then uploading the source code and results as a tarball to Coverity’s site. The results are then visible online to members of the project, as annotations on the project’s source code (similarly to how lcov presents its results).
As Coverity cannot be run entirely locally, it cannot be integrated properly into the build system. However, scripts do exist to automatically scan a project and upload the tarball to Coverity regularly. The recommended approach is to run these scripts regularly on a server (i.e. as a cronjob), using a clean checkout of the project’s git repository. Coverity automatically e-mails project members about new static analysis problems it finds, so the same approach as for compiler warnings can be taken: eliminate all the static analysis warnings, then eliminate new ones as they are detected.
Coverity is good, but it is not perfect, and it does produce a number of false positives. These should be marked as ignored in the online interface.
Clang static analyser
One tool which can be used to perform static analysis locally is the Clang static analyser, which is a tool co-developed with the Clang compiler. It detects a variety of problems in C code which compilers cannot, and which would otherwise only be detectable at run time (i.e. using unit tests).
Clang produces some false positives, and there is no easy way to ignore them. The recommended thing to do is to file a bug report against the static analyser, so that the false positive can be fixed in future.
A full tutorial on using Clang is here.
Tartan
However, for all the power of the Clang static analyser, it cannot detect problems with specific libraries, such as GLib. This is a problem if (as recommended) a project uses GLib exclusively, and rarely uses POSIX APIs (which Clang does understand). There is a plugin available for the Clang static analyser, called Tartan, which extends it to support checks against some of the common GLib APIs.
Tartan is still young software, and will produce false positives and may crash when run on some code. However, it can find legitimate bugs quite quickly, and is worth running over a code base frequently to detect new errors in the use of GLib in the code. Please report any problems with Tartan.
A full tutorial on enabling Tartan for use with the Clang static analyser is here. If set up correctly, the output from Tartan will be mixed together with the normal static analyser output.
Development containers using devroot-enter
Developers who need to build packages for foreign architectures like ARM can use the Apertis devroots and the devroot-enter tool to emulate native compilation from the Apertis SDK using containers and QEMU. A devroot is a file system hierarchy based on a build of the Apertis system. It contains the same binaries as the target image, but with additional tools pre-installed, such as native compilers, debuggers and other development tools.
By default, the Apertis SDK ships with a pre-installed armhf devroot, but different devroots can be downloaded and used.
To enable this workflow, the Apertis SDK provides devroot-enter, a wrapper for systemd-nspawn which sets up a namespace container in a devroot of the user’s choice. devroot-enter has an advantage over chroot of fully virtualising the file system hierarchy, as well as the process tree, the various IPC subsystems and the host and domain name. On the other hand, compared to full virtualisation using QEMU or VirtualBox, development containers do not provide any support for graphics, so they are better suited to system-level development.
The devroot-enter script accepts the following arguments:
devroot-enter DEVROOT [OPTIONS...] [COMMAND [ARGS...]]
- DEVROOT is the mandatory root directory of the devroot. This directory will be used as the file system root for the container.
- OPTIONS can be any options systemd-nspawn normally accepts. Use this to configure the container to your taste.
- COMMAND is a command with optional arguments to run in the container; if none is specified, the default shell is run.
devroot-enter will mount a temporary directory over the container’s /tmp, shadowing any existing content in that directory. Since the binary architecture inside the devroot is different from the host’s, devroot-enter clears the LD_PRELOAD environment variable to prevent warnings from being printed about host libraries that cannot be loaded in the container.
Apertis SDK images ship a current devroot under /opt/devroot. For a container to be useful, we recommend bind mounting your working directory, with all necessary files, into the container instead of copying files over:
devroot-enter /opt/devroot --bind=/home/user/project
For more information on options you can use, see the manual page for
systemd-nspawn with the
man systemd-nspawn command.
Here’s an example session showing how to build the dlt-daemon package using the devroot. It assumes the unpacked source tree is present in the user’s home directory, e.g. as the result of running apt source dlt-daemon.
$ devroot-enter /opt/devroot/ --bind=/home/user/dlt-daemon-2.13.0
Spawning container devroot on /opt/devroot.
Press ^] three times within 1s to kill container.
host's /etc/localtime is not a symlink, not updating container timezone.
user@devroot:/tmp$ cd /home/user/dlt-daemon-2.13.0
user@devroot:~/dlt-daemon-2.13.0$ dpkg-buildpackage -b
dpkg-buildpackage: source package dlt-daemon
dpkg-buildpackage: source version 2.13.0-0co7
dpkg-buildpackage: source distribution 17.12
dpkg-buildpackage: source changed by Andrew Lee (李健秋) <email@example.com>
dpkg-buildpackage: host architecture armhf
 dpkg-source --before-build dlt-daemon-2.13.0
 fakeroot debian/rules clean
dh clean --buildsystem cmake --builddirectory=build
   dh_testdir -O--buildsystem=cmake -O--builddirectory=build
   dh_auto_clean -O--buildsystem=cmake -O--builddirectory=build
   dh_clean -O--buildsystem=cmake -O--builddirectory=build
 debian/rules build
dh build --buildsystem cmake --builddirectory=build
   dh_testdir -O--buildsystem=cmake -O--builddirectory=build
   dh_update_autotools_config -O--buildsystem=cmake -O--builddirectory=build
 debian/rules override_dh_auto_configure
make: Entering directory '/home/user/dlt-daemon-2.13.0'
   dh_auto_configure -- -DWITH_SYSTEMD=ON -DWITH_SYSTEMD_JOURNAL=ON
   cmake .. -DCMAKE_INSTALL_PREFIX=/usr -DCMAKE_VERBOSE_MAKEFILE=ON -DCMAKE_BUILD_TYPE=None -DCMAKE_INSTALL_SYSCONFDIR=/etc -DCMAKE_INSTALL_LOCALSTATEDIR=/var -DWITH_SYSTEMD=ON -DWITH_SYSTEMD_JOURNAL=ON
-- The C compiler identification is GNU 5.4.0
-- The CXX compiler identification is GNU 5.4.0
-- Check for working C compiler: /usr/bin/arm-linux-gnueabihf-gcc
…
Sometimes, it may be necessary to install a devroot for an architecture or a version of Apertis different from the one pre-installed with the SDK image. Since devroots are ospacks built specially during the image build process, they can be installed separately as described below. Apertis currently only builds armhf devroots, but this may change in future.
wget https://images.apertis.org/daily/<release>/<timestamp>/<arch>/devroot/ospack_<release>-<arch>-devroot_<timestamp>.tar.gz
sudo mkdir -p /opt/directory-to-unpack-into
sudo tar -xvf ospack_<release>-<arch>-devroot_<timestamp>.tar.gz -C /opt/directory-to-unpack-into
- <release> is the release version, e.g. 17.12
- <timestamp> is the version of the image
- <arch> is the architecture, e.g. armhf
- /opt/directory-to-unpack-into is the directory into which the devroot will be installed
Similarly, for the release images, replace the daily component of the URL with release.