This is the page for the Quality Assurance team. It includes links to QA services, documentation, tests, and other information related to the project's QA processes.

Apertis issues

Apertis uses the Apertis issues board to keep track of the current known issues with the project, as well as any proposed enhancement from the community.

Community members are encouraged to contribute to it by reporting any issues they may find while working with Apertis, or by suggesting improvements to the project.

Weekly test reports

The images are tested weekly using both manual and automated tests as documented on the test case site. Reports of the testing performed are available on the test report site.

The following sections describe the tools used for QA tasks and infrastructure.

Submitting issues

Apertis uses apertis-issues as the main place to report issues such as bugs or enhancement requests. Downstream distributions can use this repository to track Apertis-specific issues. Bugs or enhancements in downstream distributions, on the other hand, should follow their own workflows, as all issues in apertis-issues should be public and reproducible in Apertis. The following recommendations apply when reporting issues on the apertis-issues board.

Workflow:

- Go to apertis-issues and check whether your issue is already listed among the open issues.
- Also check the list of closed issues, since the problem might already be solved in a newer release, in which case using the closed issue as a reference will be valuable.
- If your concern is still valid, create a new issue:
  - Provide a good title that summarizes the problem.
  - Leave the type as issue, as it is the only supported type.
  - Fill out all the fields in the form to give the QA team as much information as possible.
- The QA team will triage the newly created issue with a priority, and a developer will be assigned accordingly.
- The QA team will update labels as needed to improve the management of the issue.

Different views of the issues are available through GitLab boards: the Priority board and the Progress board. As part of the QA team's process, a task is created in the internal Phabricator for management purposes only, and a link is appended to the original issue. [Read More]

Test Case Guidelines

The following are guidelines to create and edit test cases. Please follow them unless there is a compelling reason to make an exception. Each test case should be created at apertis-test-cases, from which the QA website is automatically rendered. To develop a new test case, the proper workflow is:

- Make sure the test case doesn't already exist by checking the test cases list.
- Make sure nobody is already working on the test case by checking the Phabricator tasks. [Read More]

Robot Framework

Robot Framework is a generic test automation framework for acceptance testing and acceptance test-driven development (ATDD). It has easy-to-use tabular test data syntax and it utilizes the keyword-driven testing approach. Its testing capabilities can be extended by test libraries implemented with either Python or Java, and users can create new higher-level keywords from existing ones using the same syntax that is used for creating test cases. It is open source software released under Apache License 2. [Read More]
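As a concrete illustration of extending Robot Framework with Python, the sketch below shows a minimal test library: any public method of the class becomes a keyword callable from test cases. The library and keyword names here are purely illustrative and not part of Apertis or Robot Framework itself.

```python
# Minimal sketch of a Robot Framework test library implemented in Python.
# Class and method names are hypothetical examples, not Apertis code.

class StringTools:
    """Each public method is exposed as a Robot Framework keyword:
    `reverse_text` becomes `Reverse Text` in test data."""

    # One shared library instance for the whole test run.
    ROBOT_LIBRARY_SCOPE = "GLOBAL"

    def reverse_text(self, text):
        """Return *text* reversed."""
        return text[::-1]

    def text_should_be_palindrome(self, text):
        """Fail the test if *text* is not a palindrome."""
        normalized = text.replace(" ", "").lower()
        if normalized != normalized[::-1]:
            raise AssertionError(f"'{text}' is not a palindrome")
```

A test case could then import the library with `Library    StringTools.py` and call `Text Should Be Palindrome    never odd or even` directly, or combine such keywords into higher-level ones using the same tabular syntax.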

Test Data Reporting

Testing is a fundamental part of the project, but it is of limited use unless it is paired with an accurate and convenient way to report the results of that testing. The QA Test Report is an application developed to save and report the test results for the Apertis images. It supports both automated test results executed by LAVA and manual test results submitted by a tester. [Read More]

Test Definitions

The test cases, both manual and automated, are written in the LAVA test definition file format, which stores the instructions to run the automated tests in YAML files. Git is used as the data storage backend for all the test cases. The current Apertis tests can be found in the Apertis Test Cases repository. The test cases are versioned using Git branches to enable functionality changes without breaking tests for older releases. [Read More]
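To give a feel for the format, here is a hedged sketch of a LAVA test definition: a YAML file with a metadata block and a list of shell steps. The test name, description, and commands are invented for illustration and do not correspond to any actual Apertis test case.

```yaml
# Illustrative LAVA test definition sketch; names and steps are hypothetical.
metadata:
  name: example-sanity-check
  format: "Lava-Test Test Definition 1.0"
  description: "Check that the image boots to a usable shell"

run:
  steps:
    # Each lava-test-case reports a named pass/fail result to LAVA.
    - lava-test-case kernel-version --shell uname -a
    - lava-test-case root-disk-free --shell df -h /
```

Automated runs execute each step on the device under test, while the same file can carry the human-readable description used when rendering manual test instructions.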

Immutable Rootfs Tests

Testing on an immutable rootfs. Tests should be self-contained and not require changes to the rootfs: any change to the rootfs causes the tested system to diverge from the one used in production, reducing the value of testing. Changing the rootfs is also impossible or very cumbersome with some deployment systems, such as dm-verity or OSTree. Other systems may simply not ship package management systems like apt/dpkg due to size constraints, making package dependencies not viable. [Read More]


LQA

LQA is both a tool and an API for LAVA quality assurance tasks. It stands for LAVA Quality Assurance, and its features range from submitting LAVA jobs and collecting test results to querying most of the LAVA metadata. It is developed by Collabora and is the standard way to submit test jobs for Apertis images. Installation: fetch the latest code from the Git repository and install it using Python's pip, which will automatically handle the required dependencies: [Read More]