Apertis is primarily built from Debian source packages, utilising optimised, automated workflows based on the processes used by Debian. The Debian sources were selected for their high quality and modularity, their lack of reliance on a single vendor, and the large number of components already packaged.

Apertis utilises the tools provided by the Debian community, combining them with other tools, such as GitLab and the Open Build Service (OBS), to create a more automated, optimised workflow.

Infrastructure Overview

The source code and packaging metadata for all Apertis packages are expected to be stored in the Apertis GitLab instance, with shared components (i.e. those that are not project specific, or in which a common interest exists) being stored under the pkg group. These components are further split into the following categories:

  • target: packages intended for use in product images
  • development: additional packages needed to build target packages and development tools
  • hmi: packages for the current reference user interface (or Human Machine Interface, often abbreviated to HMI) which can be used on top of target
  • sdk: packages to build the SDK virtual machine, which is recommended for development

The separation of components into these categories allows the Apertis project to maintain different licensing expectations for the various component use cases.

It is also possible to create additional groups in GitLab for project specific software packages, or for project specific modifications to an existing component that are not suitable for more general inclusion.

Components can be added to Apertis using the contributions process.

The packaging workflow guide provides more information on the practical steps required to achieve this and the component layout guide provides information regarding the layout of a component repository.

Development of components is carried out via feature branches, which are reviewed prior to inclusion in a release branch.

Branch testing and review

Automation is implemented via a GitLab CI/CD pipeline that performs sanity-check builds of components on feature branches as well as release branches. On release branches, the pipeline also uploads the updated sources that pass the checks to the OBS instance (provided by Collabora), which automatically builds the component sources for all the configured architectures (x86_64, armv7l, aarch64) and publishes the resulting binary packages in signed APT repositories. Each package is built in a closed, well-defined environment where OBS installs all the build tools and resolves dependencies from a clean state using the packages it has already built: if any dependency is missing, the build fails. Any build failures are reported back to the relevant GitLab pipeline and to the appropriate developers.
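
As an illustration of the shape such a pipeline can take, the sketch below shows a hypothetical .gitlab-ci.yml with a per-branch sanity build and a release-branch upload step; the builder image, job names and the upload-to-obs.sh helper are assumptions for the example, not the actual Apertis CI definitions.

```yaml
# Illustrative sketch only; the builder image, job names and the
# upload-to-obs.sh helper are placeholders, not the real Apertis pipeline.
stages:
  - build
  - upload

sanity-build:
  stage: build
  image: debian:stable                  # assumed builder image
  script:
    # Install the build dependencies and build the package to catch
    # obvious breakage on every branch.
    - apt-get update && apt-get install -y build-essential devscripts equivs
    - mk-build-deps --install --remove --tool 'apt-get -y' debian/control
    - dpkg-buildpackage -us -uc
  rules:
    - if: '$CI_COMMIT_BRANCH'           # run on every branch

upload-to-obs:
  stage: upload
  image: debian:stable
  script:
    # Hypothetical helper that submits the updated sources to the OBS
    # project matching this release branch; OBS then builds them for all
    # configured architectures and publishes signed APT repositories.
    - ./ci/upload-to-obs.sh "$CI_COMMIT_BRANCH"
  rules:
    - if: '$CI_COMMIT_BRANCH =~ /^apertis\//'   # release branches only (assumed naming)
```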

Binary package generation from source change

While the automated workflows described here provide a robust solution for ensuring package quality and modularity, they do not meet the needs of a developer actively developing or debugging a code base. To meet this need, Apertis provides tools to ease development and testing of applications on top of Apertis, as well as guidance on adding and updating components.

Most of the package sources are automatically updated from the latest Debian Stable release, with some packages manually picked from a later Debian release or straight from the project upstream when more up-to-date versions have been deemed beneficial (notably, Apertis includes the latest Linux LTS kernel available when an Apertis release is made). This allows Apertis to share bug fixes and security fixes with the wider Debian community.

After OBS has finished building a package, the results get published in a package repository. The open-source packages from Apertis can be found in the public Apertis package repositories.

The packages in these repositories are then used to build images suitable for deployment onto a variety of targets (such as reference boards or product specific hardware, virtual machines and container images). This process is automated via a CI/CD pipeline that runs nightly.

Image generation from binary packages

Generating images does not involve rebuilding all the packages from source and thus the process is fast and flexible. The whole pipeline is controlled through YAML-based Debos recipes (which are stored in GitLab) that:

  • configure partitions and bootloaders;
  • determine which packages get installed;
  • declare which overlays are to be applied; and
  • run arbitrary customisation shell scripts over the rootfs in a QEMU-based virtualised environment (a minimal recipe sketch follows).
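
As a rough sketch of what such a recipe can look like (the suite, mirror, package names and overlay path below are placeholders, not the actual Apertis recipes):

```yaml
# Minimal Debos recipe sketch; the real Apertis recipes stored in GitLab
# are considerably more elaborate.
{{- $architecture := or .architecture "arm64" }}

architecture: {{ $architecture }}

actions:
  - action: debootstrap
    description: Bootstrap a minimal root filesystem
    suite: stable                        # placeholder suite
    components: [ main ]
    mirror: https://deb.debian.org/debian
    variant: minbase

  - action: apt
    description: Install the packages selected for this image
    packages:
      - sudo
      - openssh-server

  - action: overlay
    description: Apply configuration files on top of the rootfs
    source: overlays/common              # placeholder overlay directory

  - action: run
    description: Arbitrary customisation inside the QEMU-backed rootfs
    chroot: true
    command: echo "apertis-example" > /etc/hostname
```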

This process is usually run automatically by a GitLab CI/CD pipeline, but during development it can be run on developers' machines as well, fetching packages from the same OBS binary repositories.
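
For example, a CI job along the following lines could invoke Debos against such a recipe; the job name, builder image and recipe path are assumptions, and a developer could run the same debos command locally against the same repositories.

```yaml
# Sketch of a CI job running Debos; names and paths are placeholders.
build-image:
  stage: images
  image: debian:stable                   # assumed image with debos installed
  script:
    # -t passes template variables into the recipe; the same command can be
    # run on a developer's machine, fetching packages from the OBS repositories.
    - debos -t architecture:arm64 recipes/apertis-example.yaml
  artifacts:
    paths:
      - "*.img*"
```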

The overall strategy for building these deployments is to break the image build down into various stages, with each stage represented by a separate Debos recipe.

Multi-stage image generation

The process usually starts with the creation of early, common stages (e.g. a common root filesystem), with later stages modifying these to adapt them to specific hardware (for example, adding hardware-specific packages for a particular SoC or platform, such as a bootloader, kernel, codecs or GL stack). Further processing can then modify the image to cater for different deployment methods, such as producing images which can be updated with OSTree, or generating OSTree updates.
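
To illustrate the idea, a later, hardware-specific stage could be expressed as a recipe that unpacks the rootfs produced by an earlier common stage, adds platform packages and turns the result into a flashable image; all names below (the ospack file, package names, image layout) are placeholders:

```yaml
# Sketch of a hardware-specific stage; all names are placeholders.
{{- $ospack := or .ospack "ospack_common.tar.gz" }}

architecture: arm64

actions:
  - action: unpack
    description: Start from the rootfs produced by the common stage
    file: {{ $ospack }}

  - action: apt
    description: Add packages for the target SoC or board
    packages:
      - linux-image-arm64                # placeholder package names
      - u-boot-tools

  - action: image-partition
    imagename: apertis-example.img
    imagesize: 4GB
    partitiontype: gpt
    partitions:
      - name: root
        fs: ext4
        start: 0%
        end: 100%
    mountpoints:
      - mountpoint: /
        partition: root

  - action: filesystem-deploy
    description: Copy the rootfs into the newly created image
```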

Multi-stage OSTree image generation

The split allows just one SoC-, platform- or even board-specific recipe to be combined with more generic recipes. This enables a single generic recipe to fulfil the same use case across multiple platforms when combined with different platform specific recipes. It also enables different generic recipes for different use cases to be combined with the same platform-specific recipe, thus enabling a platform to be used for different use cases.

For instance, the hmi and fixedfunction recipes for arm64 could be combined with the U-Boot and Raspberry Pi recipes to generate four possible combinations of flashable images, targeting either the Renesas R-Car or Raspberry Pi platforms.

A GitLab CI/CD pipeline periodically schedules a batch of tests against the latest images on the LAVA instance hosted by Collabora. LAVA takes care of deploying the freshly generated images on actual target devices running in the Collabora device farm, and of controlling them over serial connections to run the defined test cases and gather the results. These results are published on the Apertis Test Report Site.
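
A LAVA job definition is itself a YAML document describing how to deploy, boot and test an image; the sketch below uses a QEMU device type and placeholder URLs rather than the actual Apertis test definitions or the boards in the Collabora device farm.

```yaml
# Illustrative LAVA job definition; device type, URLs and test repository
# are placeholders.
device_type: qemu                        # real jobs target boards in the device farm
job_name: apertis-example-smoke-test
priority: medium
visibility: public

timeouts:
  job:
    minutes: 30

context:
  arch: aarch64

actions:
  - deploy:
      to: tmpfs
      images:
        rootfs:
          image_arg: -drive format=raw,file={rootfs}
          url: https://example.com/images/apertis-example.img.gz   # placeholder URL
          compression: gz

  - boot:
      method: qemu
      media: tmpfs
      prompts:
        - 'login:'

  - test:
      definitions:
        - repository: https://example.com/tests/example-tests.git  # placeholder repo
          from: git
          path: smoke-tests.yaml
          name: smoke-tests
```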

Summary

  • Sources are stored on the GitLab code hosting service with Debian-compatible packaging instructions.
  • GitLab is used for code review, with every branch automatically build-tested to provide quick feedback to the developer.
  • A GitLab CI/CD pipeline is used to push new releases on release branches to OBS.
  • OBS builds the source packages in controlled environments, generating binary packages.
  • Every night, GitLab CI/CD pipelines generate images from the binary packages built and published by OBS.
  • On success, the pipeline triggers on-device tests using LAVA to check the produced images.