One of the key strengths of Apertis is the testing performed on its images, which allows regressions to be caught early in the development cycle. The QA process relies heavily on LAVA to perform automated tests on the reference hardware, and the results are then reported on the QA website.

Running tests on real hardware allows developers and product owners to make sure that products meet their expectations on the target platform. Thanks to these integration tests, the combination of hardware, kernel, base system, and applications is stressed in different ways to ensure the quality of the final product.

Apertis currently runs:

  • Automated tests on a daily basis for daily builds
  • Automated and manual tests on released images

This section is for the Quality Assurance team. It includes links to QA services, documentation, tests, and any other information related to the different QA processes in the project.

Apertis issues

Apertis uses the Apertis issues board to keep track of the currently known issues with the project, as well as any enhancements proposed by the community.

Community members are encouraged to contribute to it by reporting any issues they may find while working with Apertis, or by suggesting improvements to the project.



Tools used for QA tasks and infrastructure.

Testing gaps - core components

General considerations As described in the Apertis test strategy, the approach to gap analysis is to classify the components into different categories and, based on the expected levels of testing for each of them, provide a report about the gaps. In general, based on the current workflow, most of the components already meet some standard level of testing and share a common status, which is described below. As a general rule, testing should focus on Apertis-specific components and components with a delta from Debian. [Read More]

Submitting issues

Apertis uses apertis-issues as the main point to report issues such as bugs or enhancements. Downstream distributions can use this repository to track Apertis-specific issues. On the other hand, bugs or enhancements in downstream distributions should follow their own workflow, as all the issues in apertis-issues should be public and reproducible in Apertis. The following section highlights some recommendations for reporting issues using the apertis-issues board. Workflow: go to apertis-issues and check whether your issue is already listed in the open issues. Also check the list of closed issues, since the problem might already be solved in a newer release, in which case referencing that issue will be valuable. If your concern is still valid, create a new issue: provide a good title that summarizes the problem, leave the type as issue (the only supported type), and fill out all the fields in the form to give the QA team as much information as possible. The QA team will triage the newly created issue with a priority, a developer will be assigned accordingly, and the team will update labels as needed to improve the management of the issue. Different views of the issues are available through GitLab boards: the Priority board and the Progress board. As part of the QA team's process, a task in the internal Phabricator is created for management purposes only and a link is appended to the original issue. [Read More]

Test Case Guidelines

The following are guidelines to create and edit test cases. Please follow them unless there is a compelling reason to make an exception. Storage The test cases, both manual and automated, are written in the LAVA test definition file format, which stores the instructions to run the automated tests in YAML files. Git is used as the data storage backend for all the test cases. The current Apertis tests can be found in the Apertis Test Cases repository. [Read More]
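To give a feel for the format, a minimal LAVA test definition might look like the sketch below; the metadata values and the test steps are purely illustrative, not taken from the Apertis Test Cases repository:

```yaml
# Hypothetical LAVA test definition; names and steps are illustrative only.
metadata:
    format: Lava-Test Test Definition 1.0
    name: example-smoke-test
    description: "Check that the image boots to a usable shell"

run:
    steps:
        - echo "Running example smoke test"
        - uname -a
```

The `run.steps` list holds the shell commands executed on the target; the surrounding metadata is what LAVA uses to identify and report the test.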

Robot Framework

Robot Framework is a generic test automation framework for acceptance testing and acceptance test-driven development (ATDD). It has an easy-to-use tabular test data syntax and utilizes the keyword-driven testing approach. Its testing capabilities can be extended by test libraries implemented in either Python or Java, and users can create new higher-level keywords from existing ones using the same syntax that is used for creating test cases. It is open source software released under the Apache License 2.0. [Read More]
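As an illustration of the library extension mechanism, the sketch below shows a minimal Python test library; Robot Framework exposes each public function in such a module as a keyword (here `multiply_and_check` would become the keyword `Multiply And Check`). The module and function names are hypothetical, not part of any Apertis test suite:

```python
# examplelibrary.py - hypothetical Robot Framework test library.
# Robot Framework maps each public function in a library module to a
# keyword, so this function is callable from test cases as
# "Multiply And Check".

def multiply_and_check(value, factor, expected):
    """Multiply two numbers and raise AssertionError on an unexpected result.

    Raising an exception is how a Python keyword reports failure to
    Robot Framework; returning normally means the keyword passed.
    """
    result = float(value) * float(factor)
    if result != float(expected):
        raise AssertionError(f"{value} * {factor} = {result}, expected {expected}")
    return result
```

Arguments arrive from Robot Framework as strings, which is why the function converts them with `float()` before comparing.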

Test Data Reporting

Testing is a fundamental part of the project, but it is not very useful unless it goes along with an accurate and convenient model for reporting the results of such testing. The QA Test Report is an application that has been developed to save and report the test results for the Apertis images. It supports both results from automated tests executed by LAVA and results from manual tests submitted by a tester. [Read More]

Immutable Rootfs Tests

Testing on an immutable rootfs Tests should be self-contained and not require changes to the rootfs: any change to the rootfs causes the tested system to diverge from the one used in production, reducing the value of testing. Changing the rootfs is also impossible or very cumbersome with some deployment systems, such as dm-verity or OSTree. Other systems may simply not ship package management tools like apt/dpkg due to size constraints, making package dependencies unviable. [Read More]
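One way to honour this constraint is for a test to probe its environment and bail out gracefully instead of installing anything. The sketch below illustrates the idea; the tool list and messages are hypothetical, not taken from the Apertis test cases:

```python
# Hypothetical self-contained dependency check for an immutable rootfs:
# on such a system we can only detect missing tools, never install them
# via apt/dpkg.
import shutil
import sys

REQUIRED_TOOLS = ["sh", "grep"]  # illustrative list of tools the test needs

def check_dependencies(tools):
    """Return the subset of tools not found on PATH."""
    return [tool for tool in tools if shutil.which(tool) is None]

missing = check_dependencies(REQUIRED_TOOLS)
if missing:
    # Skip rather than mutate the rootfs to satisfy the dependency.
    print("SKIP: missing tools: " + ", ".join(missing))
    sys.exit(0)
print("PASS: all dependencies present")
```

The key design choice is that a missing dependency produces a skip, not an attempt to modify the system under test.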


LQA

LQA is both a tool and an API for LAVA quality assurance tasks. It stands for LAVA Quality Assurance, and its features range from submitting LAVA jobs and collecting test results to querying most of the LAVA metadata. It is a tool developed by Collabora and is the standard way to submit test jobs for Apertis images. Installation Fetch the latest code from the git repository and install it using python pip, which will automatically handle the required dependencies: [Read More]