QA Report App is the Apertis tool for reporting and viewing the results of both manual and automated (via LAVA) QA testing.
Primary Configuration File
QA Report App’s configuration is handled via its primary configuration file.
QA Report App needs to be able to generate links to the test cases and images:
app-url should be the URL at which QA Report App is running, and the image location option should point to the web location where image downloads are available.
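A minimal sketch of these two options; app-url is named in the text above, while the image location key (shown here as image-root) is an assumed name:

```yaml
# Sketch only: the image-root key name is an assumption, the URLs are examples.
app-url: https://qa-report.example.com/
image-root: https://images.example.com/
```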
Debug logging from the service can be enabled via:
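As a sketch, assuming a simple top-level boolean option (the actual option name may differ):

```yaml
# Assumption: a top-level boolean flag controls debug logging.
debug: true
```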
OpenID Connect user login
To activate authentication, the following configuration options must be set:
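A sketch of what this might look like; the openid section name and the client-id key are assumptions, the remaining keys are described below:

```yaml
# Sketch of the OpenID Connect options; exact layout may differ.
openid:
  client-id: CLIENT-ID
  client-secret: CLIENT-SECRET
  well-known-url: https://gitlab.example.com/.well-known/openid-configuration
  always-require-login: false   # optional
```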
The client-id and client-secret values are created by the OpenID Connect identity provider, which should also provide the well-known configuration URL (well-known-url).
E.g.: to use a GitLab instance as the OpenID Connect identity provider, an application must be added on GitLab with the openid scope and the https://<qa-report-app-server>/openid_callback callback URL.
For GitLab, the well-known-url value is https://<gitlab-server>/.well-known/openid-configuration.
always-require-login is an optional boolean value. When set to true, the website will require users to be logged in to access any page. It is set to false by default.
In addition to the login options described above, a set of authentication groups can be defined for finer-grained control over which users/groups are allowed to log in and their respective permissions:
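An illustrative sketch of such a configuration; the group names and exact nesting are assumptions, while the permission values are the ones listed below:

```yaml
# Sketch: group names and nesting are assumptions.
auth-groups:
  qa-team:
    permissions:
      - submit-reports
      - tag-images
  developers:
    permissions: []   # may log in, but has no extra permissions
```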
Each group can optionally configure a set of extra permissions (defaults to empty).
The supported values for these extra permissions are:
- submit-reports: users in this group can submit manual test reports
- tag-images: users in this group can tag images
If auth-groups is omitted or empty (the default), any user with the right credentials will be able to log in, submit reports and tag images.
QA Report App uses shared secrets to authenticate legitimate LAVA callback submissions, which can be set in the config file:
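As a sketch, assuming a dedicated key that maps token names to their shared secret values (both the key name and the token name below are illustrative):

```yaml
# Assumption: token names map to shared secret values.
lava-callback-tokens:
  qa-reports-callback: "CHANGE-ME-SECRET"
```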
Additionally, they can be set in the environment.
In order to obtain a new token, you need to create a new token in the LAVA UI (under Authentication Tokens) for the user submitting the jobs. The token’s name should match the name set in the token property of the notification callback in the LAVA job definition YAML.
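For reference, a minimal sketch of how a LAVA job definition references the token by name in its notification callback (the URL and token name are illustrative):

```yaml
# Excerpt of a LAVA job definition; the token name must match the one
# configured in QA Report App.
notify:
  criteria:
    status: finished
  callbacks:
    - url: https://qa-report.example.com/api/lava/callback
      method: POST
      token: qa-reports-callback
      content-type: json
```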
Filter displayed releases
The display configuration section can be used to alter the minimal version to be displayed, as well as the number of results displayed for each release on the main page.
This makes it easier to go through the results.
The section is optional and can be configured as follows:
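A sketch of the section; the display name comes from the text above, while the key names are assumptions standing in for the placeholders described below:

```yaml
# Sketch: key names are assumed, placeholders are explained below.
display:
  min-version: MIN-VERSION    # e.g. v2022
  result-count: RESULT-COUNT  # e.g. 7
```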
MIN-VERSION is a string value that specifies the minimal version to be shown. Results for versions older than MIN-VERSION will not be shown on the main page.
RESULT-COUNT is an integer value that specifies the number of results
to show for each release.
E.g.: at the start of 2023, with MIN-VERSION set to v2022, the displayed results will be:
- for releases: show results for v2024dev0.0, v2023.0 & v2023pre.0 (last 2 release results in the v2023 series) and v2022.3 & v2022.2
- for dailies: show last 7 results for each of v2024, v2023 and v2022 daily builds
- for weeklies: show last 4 results for each of v2024, v2023 and v2022 weekly builds
Defining Supported Platforms and Deployments
QA Report App is designed to take in test case results from a given platform that the test cases were run on, such as minimal-armhf-uboot-internal, along with a deployment describing how the OS was installed onto the platform (e.g. ostree).
The supported platforms and deployments are configured in a dedicated section of the configuration file, described in the following subsections.
Defining Variants and Architectures
Image platforms as described contain both variants (different versions of the same distribution) and architectures. For example, given the following platforms:
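The exact platform names depend on the images under test; as an illustration, here is a short list using names that appear elsewhere in this document:

```yaml
- tiny-lxc-amd64
- minimal-armhf-uboot-internal
- fixedfunction-armhf-uboot-public
```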
Each of these starts with a variant (tiny-lxc) that is immediately followed by the architecture (amd64). The supported variants and architectures should be defined at the top of this section of the config file:
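A sketch of those definitions; the variants and architectures key names are assumptions, while the values are taken from the platform names used as examples in this document:

```yaml
# Key names are assumptions; values come from the example platform names.
variants:
  - tiny-lxc
  - minimal
  - fixedfunction
architectures:
  - amd64
  - armhf
```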
Deployments that can be selected by the user when manually submitting test results go in the user-selectable section, with each deployment name mapped to a human-readable description. For instance, we may have:
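A sketch of that mapping, using the example described just below:

```yaml
user-selectable:
  ostree: OSTree
  # ...further "deployment-name: Human-readable description" pairs
```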
In this case, ostree is the deployment name, and OSTree is the human-readable description that will be shown in the UI.
If a deployment should instead not appear for manual test result submission, but only be available for automatically reported LAVA results, the deployment names should be placed in the lava-only section, e.g.:
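A sketch of that list; the deployment name below is purely illustrative:

```yaml
lava-only:
  - nfs
```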
No human-readable name is required, since these are never shown explicitly in the UI.
Image platforms, such as
fixedfunction-armhf-uboot-public, are defined in
groups called “platform groups”. Each group contains:
- A list of platforms defined in this group.
- A list of deployments supported by the given platforms.
- (Optional) A list of release/version ranges that support the given platforms.
The purpose of placing platforms into these groups is to be able to define, in one place, multiple platforms that support the same deployments under the same conditions, reducing repetition.
Example of some basic platform groups:
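A sketch of one platform group following the structure described above; the groups nesting and the group name are assumptions, while the platform and deployment names come from the examples in this document:

```yaml
# Sketch: 'groups' nesting and the group name are assumptions.
groups:
  uboot-devices:
    platforms:
      - minimal-armhf-uboot-internal
      - fixedfunction-armhf-uboot-public
    deployments:
      - ostree
    # supported-by: ...   # optional, see "Platform Support Ranges" below
```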
Platform Support Ranges
The supported-by key defines a list of ranges of releases that support the platforms in the current group.
The individual ranges inside have two keys, at least one of which must be given:
- first: the first / oldest version that supports this platform. If not given, it is assumed that the platform has been supported since the very first release.
- last: the last / newest version that supports this platform. If not given, it is assumed that the platform is still supported by the very latest release.
Each of these endpoints of the range in turn contains release and version keys to set the release and version of the given endpoint. The version is optional and, if not given, all versions in the release are counted. For example, this platform group is valid for all releases from v2021 to v2022 version 20220624:
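A sketch of that range, assuming release and version keys inside each endpoint:

```yaml
supported-by:
  - first:
      release: v2021
    last:
      release: v2022
      version: 20220624
```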
The range endpoints can be defined to be exclusive instead of inclusive, via the inclusive key:
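For example, sketched with the same structure as above and the inclusive flag added to an endpoint:

```yaml
supported-by:
  - first:
      release: v2021
      inclusive: false
```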
This would normally mean “every release since and including v2021 is supported”, but setting inclusive: false changes it to mean “every release since, and not including, v2021”.
Multiple support ranges can be given:
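For instance, a sketch of a platform supported up to v2020 and again from v2022 onwards (using the same assumed keys):

```yaml
supported-by:
  - last:
      release: v2020
  - first:
      release: v2022
```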
Some more examples:
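As a further sketch, a range bounded on both ends by specific versions, with the upper bound excluded (the version numbers are illustrative):

```yaml
supported-by:
  - first:
      release: v2021
      version: 20210101
    last:
      release: v2023
      version: 20230615
      inclusive: false
```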
Changing the Test Cases Source
Test cases are, by default, pulled from the Apertis test cases Git
repository. The individual
YAML files describing test cases are then taken from every branch starting with
apertis/, with the text afterwards being treated as the release (e.g. given
apertis/v2020, the release would be
v2020). If you want to change the URL and the branch prefix (apertis/ by default), both can be overridden in the configuration:
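A sketch of those options; the section and key names are assumptions, the placeholders are described below:

```yaml
# Key names are assumed; placeholders are explained below.
test-cases:
  git-url: URL
  branch-prefix: BRANCH-PREFIX
```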
URL is a URL pointing to a Git repository containing the test cases.
BRANCH-PREFIX is the prefix identifying the branches from which test case definitions are read.
Note that the portion of the branch name after the branch prefix is treated as the release/suite.
Bug Tracker Integration
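A sketch of the Phabricator configuration; the section and key names are assumptions, and PHABRICATOR-TOKEN is described below:

```yaml
# Sketch; section and key names are assumptions.
phabricator:
  url: https://phabricator.example.com/
  token: PHABRICATOR-TOKEN
```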
This sets up integration with Phabricator, for automated issue creation.
PHABRICATOR-TOKEN is a Phabricator Conduit API token, which can be obtained
via the following steps:
- Click your profile picture in Phabricator on the top-right -> Settings.
- Select Conduit API Tokens in the left sidebar.
- Click the Generate Token button.
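A sketch of the phabricator-tasks section, using the keys described below (the values are illustrative):

```yaml
phabricator-tasks:
  space: S1
  filter-tag: qa-report-app
  tags:
    - lava
```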
The phabricator-tasks section controls the behavior of tasks that are automatically created by QA Report App from LAVA failures.
space is the Phabricator space that should have visibility into the created tasks.
filter-tag is a Phabricator tag that should identify all tasks automatically filed; new tasks will have this tag assigned, and existing ones will be searched using this tag.
tags defines any additional tags that should be added when
opening a new ticket (but these are not used for any other purpose).
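Similarly, a sketch of the GitLab configuration; only gitlab-url is named in this document, the remaining keys and the section nesting are assumptions:

```yaml
# Sketch; key names other than gitlab-url are assumptions.
gitlab:
  gitlab-url: https://gitlab.example.com/
  token: GITLAB-TOKEN
  project: GITLAB/PROJECT
```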
This sets up integration with GitLab issues, for automated issue creation.
GITLAB-TOKEN should be a GitLab access token (use the api scope), and GITLAB/PROJECT should be the GitLab project where issues are stored.
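A sketch of the gitlab-issues section, using the keys described below (the values are illustrative):

```yaml
gitlab-issues:
  filter-label: qa-report-app
  labels:
    - lava
```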
The gitlab-issues section controls the behavior of issues that are automatically created by QA Report App from LAVA failures.
filter-label is a GitLab label that should identify all issues automatically filed; new issues will have this label assigned, and existing ones will be searched using this label.
labels defines any additional labels that should be added when
opening a new issue (but these are not used for any other purpose).
gitlab-url is additionally used for configuring OAuth logins and
thus may be set even if GitLab issues integration is not set up.
Environment for docker-compose
In addition to the standalone configuration file, a separate file providing environment variables to docker-compose can be created in the qa-report-app directory. These variables are used to configure hosting aspects, rather than the behavior of the application itself.
The following variables are available to be set, along with their default values: