While Apertis is focused on providing a full infotainment system, including an HMI, there are potential use-cases for a much smaller, scaled-down variation of Apertis. One such use-case is to use Apertis purely as a bridge between the internal car systems and the wider internet. In such a situation Apertis would only be used to run agents which connect to web and other online services and relay information received from those services to the other in-car systems.

Especially for very specific use-cases like the one mentioned above, resource usage (flash, CPU, RAM) needs to be kept to an absolute minimum.

Proof of concept goals

The goal of the fixedfunction Apertis image is to show that it is possible to use Apertis in such a situation. To validate this, an initial set of resource targets was set out and an initial image was created to see whether those targets could be met.

At this time the targets were set out as:

  • 350 megabytes of flash usage
  • 64 megabytes of RAM
  • Around 100 MHz of CPU bandwidth available (on a virtualized ARM Cortex-A9 CPU)

For flash usage it’s important to note that the intention is for the scaled-down setup to support full system rollback. To cope with the worst case of rollback, where the backup system and the current system don’t share any data, one instance of the system must fit in less than 50% of the available flash space. In this particular case that means one instance of the system should not use more than 175 megabytes of flash (leaving some space to spare for runtime data).

Note that while system rollback is supported, factory reset (in other words, rolling the system fully back to the state it was in when installed at the factory) is not supported. In a virtualized setup this can typically be handled in a much more robust way by the host system rather than by any particular virtual machine instance.

In the case of RAM usage, the rough goal is to have about 50% (around 32 megabytes) available for system usage (kernel, system daemons) and the other 50% (also around 32 megabytes) for application usage.

Finally, for CPU bandwidth the goal is for the system to be usable with a static allocation of 100 MHz, with minimal usage when idle. Note, however, that when it comes to virtualisation it is far easier to share CPU based on the demands of the various virtual domains than it is to share e.g. RAM or flash. As such it is recommended that, even if the Apertis domain is given a static “guaranteed” allocation of 100 MHz, the hypervisor should allocate CPU resources based on complete system demand rather than purely statically, to allow for the best overall system performance. For example, there is little point in the Apertis domain only receiving 100 MHz of bandwidth if the system CPU is otherwise idle; in such a case the Apertis domain should be allowed to use more CPU so that it can go back to idle faster.

Approach

There is little point in doing a demonstration image unless it is representative of the actual use-case and can be repeatably built (rather than being a one-off).

To ensure it is fully repeatable we set out to build the image using the normal Apertis processes and integrate it into the normal nightly builds.

To ensure it is representative, on top of the standard Apertis core (which provides at least networking and update functionality) a set of core Apertis services was installed (Ribchester, Newport, Chalgrove, Prestwood and the update daemon) together with their dependencies, as well as a package to represent an agent; in this case we picked the combination of telepathy-gabble and telepathy-mission-control-5. Telepathy-gabble is a connection manager for the Telepathy framework for XMPP, which means it is a headless D-Bus service that connects to the XMPP network (using TLS/SSL) and, for some functionality, can perform HTTP(S) requests. This is similar to how we expect an agent in the scaled-down setup to work, and it uses the same libraries and frameworks such an agent would be expected to use.

Flash footprint reduction

The approach described above gives us a set of packages to add on top of the Apertis core. With that starting point, the work towards a scaled-down image was done in a few steps.

Step 0: Only install what is really required

Debian packages have a few different levels of dependencies. The strongest is “Depends”, which lists the other packages that are absolutely required for a package to work. The level below that is “Recommends”, which lists the packages a given package has a strong, but not absolute, dependency on; for example, some functionality may not be available if the Recommends aren’t installed. Because of that, it is standard for Recommends to be installed. By turning this off, however, we gain more precise control over what is installed and can thus lower the required flash space usage, with the trade-off, of course, being that more care needs to be taken to ensure all packages needed for the required functionality are present.

As an example of this: ConnMan (the network management daemon) Recommends wpasupplicant. Without wpasupplicant installed ConnMan functions correctly, but it cannot support WiFi networks using WPA or WPA2. As such, for full WiFi support, wpasupplicant is explicitly installed in our images.
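
Concretely, disabling the installation of Recommends comes down to a single apt configuration option; a minimal sketch (the drop-in file name is illustrative, not necessarily the one used by Apertis):

    # /etc/apt/apt.conf.d/90no-recommends (illustrative)
    APT::Install-Recommends "false";

With that in place, recommended-but-required packages are simply listed explicitly, e.g. installing connman together with wpasupplicant.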

In addition to changing the policy around the installation of Recommends, there are also some packages which aren’t strictly required but are installed as part of the image building/bootstrapping process. An image builder “strip” hook was added to remove those packages after the image build.

Step 1: Removal of unneeded data files

Debian packages tend to come with batteries included, e.g. localisation for various languages and documentation (README files, man pages, info pages). While very useful on most systems, on a headless embedded system these serve no purpose. Another big set of data files common on normal systems is timezone information, which isn’t necessarily relevant for a headless embedded system as it can simply use UTC.

To strip those files from the system two approaches were used: for the image builder itself, the strip hook removes these files after building the image; additionally, dpkg is configured to exclude these paths when installing new packages, which means that on updates these types of files will be ignored and not installed onto the system.
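
The mechanism dpkg offers for this is its path-exclude configuration directive; a minimal sketch of such a configuration (the file name and exact patterns are illustrative, not necessarily those used by Apertis):

    # /etc/dpkg/dpkg.cfg.d/excludes (illustrative)
    path-exclude=/usr/share/doc/*
    # Keeping licence texts is common practice even when stripping docs
    path-include=/usr/share/doc/*/copyright
    path-exclude=/usr/share/man/*
    path-exclude=/usr/share/info/*
    path-exclude=/usr/share/locale/*
    path-exclude=/usr/share/zoneinfo/*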

Step 2: Further analysis to determine additional possibilities to reduce the footprint

After the earlier steps were done, the image footprint was still too big to meet the requirements. Analysing the flash usage per package revealed two big candidates: the kernel and udev.
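
Such a per-package breakdown can be obtained with standard dpkg tooling; for example (a sketch, not necessarily the exact command used):

    # List installed packages by installed size (in KiB), largest first
    dpkg-query -W -f='${Installed-Size}\t${Package}\n' | sort -rn | head -20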

First, the kernel: there are quite a few kernel modules in the standard Apertis images, providing drivers for all kinds of hardware. This is because the Apertis kernel is a quite generic build with support for a wide range of hardware, to ease development. For a scaled-down setup that hardware support isn’t required; instead a far smaller kernel image can be used, focused on just supporting the required hardware. For the proof of concept images, the modules were simply removed in the same way as unwanted data files. With the current Apertis kernel, the space saving from doing this is around 50 megabytes.

The second item is udev, which ships with about 12 megabytes of data files. Upon closer inspection most of these belong to the udev hardware database, which contains metadata about various devices. As with the kernel modules, this is very useful for a generic system but not required for a scaled-down setup, so these files were likewise added to the set of unwanted data files.
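
In terms of the dpkg configuration sketched earlier, this amounts to exclusions along these lines (patterns illustrative; the hwdb directory may also live under /usr/lib on some releases):

    path-exclude=/lib/modules/*
    path-exclude=/lib/udev/hwdb.d/*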

As well as looking at the data usage per package, the list of packages that got installed was also reviewed. This revealed that even the scaled-down image contained a full Python installation, even though Python isn’t expected to be used at runtime. Further analysis revealed that Python was installed because the AppArmor package contains a small command line utility to query AppArmor status; as this is not expected to be used in production, AppArmor’s Python dependency could be reduced to a Recommends. As a result, the fixedfunction images no longer contain Python, saving over 32 megabytes of space on the system.
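
Tracing why a package such as Python ends up installed can be done by walking reverse dependencies with the stock tools; for example (a sketch; the exact package name at the time may have differed):

    # Which installed packages depend on python3?
    apt-cache rdepends --installed python3
    # Where aptitude is available, it can show the full dependency chain:
    aptitude why python3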

Flash footprint reduction results

Combining the three steps results in image builds of around 170 megabytes in size; however, this includes both the per-instance data and the global static data (partition table, blocks used for boot flags, bootloader, etc.). The root filesystem for one instance is around 140 megabytes in size, well within our goal.

On top of that, a basic test was done with btrfs compression to determine its impact. Currently btrfs can compress files using two different algorithms: LZO (lower CPU usage, but a lower compression ratio) and zlib (higher CPU usage, but a higher compression ratio). When using LZO compression on btrfs, the space usage goes down from around 140 megabytes to around 105 megabytes (25% less flash usage); using zlib lowers the usage even further, to around 85 megabytes (40% less flash usage). Even though filesystem compression has both a CPU and a memory impact (compressed data blocks need to be loaded into memory before they can be decompressed), these results do indicate that this is a worthwhile avenue for future investigation, as there is a big reduction in flash footprint.
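
Enabling btrfs compression requires no changes to the filesystem contents, only a mount option; an illustrative fstab entry (the device name is an example):

    # /etc/fstab (illustrative): mount the root filesystem with LZO compression
    /dev/mmcblk0p2  /  btrfs  compress=lzo  0  0

Note that the option only affects data written while it is active; already-installed files can be recompressed with btrfs filesystem defragment -clzo (or -czlib).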

Future work for footprint reduction and conclusions

Analysing the current image further, it is clear that the footprint can be reduced even more. The work thus far has focused on the biggest wins with the smallest impact on the overall Apertis system (allowing fixedfunction images to be built with a small extra effort next to the normal set of images).

There is clearly room for further pruning the set of packages which get installed in the system by avoiding dependencies unless strictly necessary; however, in most cases that will require changes to the package build or the code. For example, Canterbury depends on the PulseAudio libraries, which shouldn’t be required on a system with no audio outputs.

However, based on the analysis thus far, the area with the biggest potential gain is the update system. Apertis uses the standard Debian infrastructure for minor updates, which is a very flexible and mature system. Flexibility in this case comes at a cost, though, as full-featured package systems like Debian’s are really quite complex. Apart from the programs required for updating itself, a lot of metadata is required on the system, describing both the current state of the system and the available packages, as the packaging tools need to be able to calculate the exact dependency tree and determine what can be upgraded, etc. Even on our fixedfunction system the combination of the updating tools and their data files and caches uses about 17 megabytes of space, more than 10 percent of the system footprint!

The use of a full-blown package system adds another complexity: there is a very direct connection between the packages being built and the resulting system, with little room in between for fine-tuning. Specifically, the use of dpkg exclude rules makes it possible to prevent certain files (e.g. documentation, localisation files) from being installed onto the system, lowering the system footprint without requiring all the packages to be changed (which would make them unsuitable for other types of systems). This is essentially what made it possible to build the fixedfunction images with a small amount of effort based on the same package set used for other image types. However, it also means that the packages downloaded for updates still include these files, which means bigger downloads and a bigger amount of temporary storage than is strictly needed.

A successful approach to avoiding this issue on various other platforms has been to decouple the system update strategy completely from the method used to put the system together. This lowers the update complexity on the target system massively, as it only has to update to the precise state indicated by the update servers rather than calculating and carrying out the full upgrade itself, while at the same time allowing for more flexibility and control over how the image is put together on the image builder side.

Runtime system tests

With a fixedfunction image available, some basic runtime tests were done on an i.MX6 (quad core) SabreLite. To emulate a scaled-down system, a modified device tree was used to limit the total available system memory to 64 megabytes, and the available CPU was limited to one core clocked at the lowest possible frequency (396 MHz). No major issues were found running the system in this setup.
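
As a rough sketch of such a device-tree change (the fragment below is illustrative, not the exact one used for the test; 0x10000000 is the i.MX6 DDR base address):

    /* Shrink available RAM to 64 MB */
    memory {
        device_type = "memory";
        reg = <0x10000000 0x04000000>; /* 64 MB at 0x10000000 */
    };

A similar effect can also be had without touching the device tree by booting the kernel with mem=64M and maxcpus=1.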

With respect to memory usage, around 20 megabytes of memory was free for applications after boot, which is lower than the stated goal; however, this was with only a minimal user session running.

When it comes to CPU usage, the system was still performant and responsive, with no issues seen in basic testing. Although testing could not be done at the target CPU bandwidth of 100 MHz, as the i.MX6 cannot be clocked that low, given the performance at 396 MHz no big problems are expected, as long as the available CPU speed is fast enough for the intended agents.

One issue that came up during the runtime test is that by default Apertis uses non-persistent logging, so the systemd journal is stored under /run, an in-memory filesystem for runtime data. The effect of this is that all logs directly lower the total available system memory.
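
Until a proper logging strategy is defined, one way to bound this effect is to cap the journal’s in-memory footprint through journald’s configuration (the 4M limit below is an arbitrary example):

    # /etc/systemd/journald.conf (illustrative excerpt)
    [Journal]
    Storage=volatile
    RuntimeMaxUse=4M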

Overall Conclusion

Overall, the proof of concept images have successfully shown that the targets that were set out could be hit, making Apertis viable for fixedfunction images of this size. But as always things can be further improved; to re-highlight the various topics mentioned in this overview:

  • Research/Analyse/improve the update strategy used for our systems.
  • Define a strategy for factory reset in a virtualized environment.
  • Further analyse the installed packages and remove those not strictly needed.
  • Benchmark the performance impact of btrfs filesystem compression.
  • Define a logging strategy suitable for a scaled down setup.