QA Report App is the Apertis tool for reporting and viewing the results of both manual and automated (via LAVA) QA testing.

Primary Configuration File

QA Report App’s configuration is handled via the file secrets/config.yaml.
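As a rough orientation, a minimal secrets/config.yaml combining the options described in the sections below might look like the following sketch; the hostnames and values are illustrative placeholders rather than defaults.

test-cases-url: https://qa.example.org
bridge-url: https://qa.example.org
image-root: https://images.example.org

debug: false

images:
  architectures: [amd64, arm64]
  variants: [fixedfunction, tiny-lxc]
  deployments:
    user-selectable:
      ostree: OSTree
    lava-only: [lxc, nfs]
  platform-groups:
    - platforms:
        tiny-lxc-arm64-uboot-public: Tiny LXC ARM64
      deployments: [lxc]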

General Options

QA Report App needs to be able to generate links to the test cases and images:

test-cases-url: https://qa.apertis.org
bridge-url: https://qa.apertis.org
image-root: https://images.apertis.org

test-cases-url and bridge-url should both be set to the same value: the URL where QA Report App is running. (There are two separate options for historical reasons; this will be cleaned up in the future.) image-root should point to the web location where image downloads are available.

Debug Logging

Debug logging from the service can be enabled via:

debug: true

Defining Supported Platforms and Deployments

QA Report App is designed to take in test case results from a given platform that the test cases were run on, such as fixedfunction-armhf-uboot-public or minimal-armhf-uboot-internal, along with a deployment describing how the OS was installed onto the platform (e.g. apt, ostree, lxc, nfs).

The supported platforms and deployments can be configured via the images section of the configuration file:

images:
  architectures:
    ...
  variants:
    ...
  deployments:
    ...
  platform-groups:
    ...

Defining Variants and Architectures

Image platforms, as described above, contain both variants (different versions of the same distribution) and architectures. For example, given the following platforms:

  • fixedfunction-armhf-uboot-public
  • tiny-lxc-amd64-uefi-internal

Each of these starts with a variant (fixedfunction, tiny-lxc) that is immediately followed by the architecture (armhf, amd64). The supported variants and architectures should be defined at the top of the images section of the config file:

images:
  architectures:
    - amd64
    - arm64
  variants:
    - fixedfunction
    - tiny-lxc

Defining Deployments

images:
  deployments:
    user-selectable:
      NAME-1: DESCRIPTION 1
    lava-only:
      - NAME-2
      - NAME-3

Deployments that can be selected by the user when manually submitting test results go in the user-selectable section, with each deployment name mapped to a human-readable description. For instance, we may have:

user-selectable:
  ostree: OSTree

In this case, ostree is the deployment name, and OSTree is the human-readable description that will be shown in the UI.

If, instead, a deployment should not appear for manual test result submission, but instead only be available for automatically reported LAVA results, the deployment names should be placed in the lava-only section, e.g.:

lava-only:
  - lxc
  - nfs

No human-readable name is required, since these are never shown explicitly in the UI.

Defining Platforms

images:
  platform-groups:
    - platforms:
        PLATFORM-1: DESCRIPTION 1
        PLATFORM-2: DESCRIPTION 2
      deployments: [DEPLOYMENT-1, DEPLOYMENT-2]
      supported-by:
        - first:
            release: FIRST-RELEASE
            version: FIRST-VERSION
          last:
            release: LAST-RELEASE
            version: LAST-VERSION
    - ...

Image platforms, such as fixedfunction-armhf-uboot-public, are defined in groups called “platform groups”. Each group contains:

  • A list of platforms defined in this group.
  • A list of deployments supported by the given platforms.
  • (Optional) A list of release/version ranges that support the given platforms.

The purpose of placing platforms into these groups is to define multiple platforms that support the same deployments under the same conditions in one place, reducing repetition.

Example of some basic platform groups:

platform-groups:
  - platforms:
      tiny-lxc-armhf-uboot-public: Tiny LXC ARM
      tiny-lxc-arm64-uboot-public: Tiny LXC ARM64
      tiny-lxc-amd64-uefi-public: Tiny LXC AMD64
    deployments: [lxc]

  - platforms:
      nfsroot-armhf-uboot-public: Nfsroot ARM
      nfsroot-arm64-uboot-public: Nfsroot ARM64
      nfsroot-amd64-uefi-public: Nfsroot AMD64
    deployments: [nfs]

Platform Support Ranges

supported-by:
  - first:
      release: FIRST-RELEASE
      version: FIRST-VERSION
    last:
      release: LAST-RELEASE
      version: LAST-VERSION
  - first:
      release: FIRST-RELEASE
      version: FIRST-VERSION
    last:
      release: LAST-RELEASE
      version: LAST-VERSION

The supported-by key defines a list of ranges of releases that support the platforms in the current group.

The individual ranges inside have two keys, at least one of which must be given:

  • first: the first / oldest version that supports this platform. If not given, it is assumed that the platform has been supported since the very first release.
  • last: the last / newest version that supports this platform. If not given, it is assumed that the platform is still supported by the very latest release.

Each of these endpoints of the range in turn contains release and version, to set the release and version of the given endpoint. The version is optional and, if not given, all versions in the release are counted. For example, this platform group is valid for all releases from v2021 to v2022 version 20220624:

supported-by:
  - first:
      release: v2021
    last:
      release: v2022
      version: '20220624'

The range endpoints can be defined to be exclusive instead of inclusive, via:

supported-by:
  - first:
      release: v2021
      inclusive: false

This would normally mean “every release since and including v2021 is supported”, but setting inclusive: false changes it to mean “every release since and not including v2021”.

Multiple support ranges can be given:

supported-by:
  # Supported from v2019 to v2021 and from v2022dev0 to v2022dev2.
  - first:
      release: v2019
    last:
      release: v2021
  - first:
      release: v2022dev0
    last:
      release: v2022dev2

Some more examples:

# Supported from every release since, and including, v2022dev3, 20210910:
supported-by:
  - first:
      release: v2022dev3
      version: '20210910'

# Supported from every version *after* v2021, 20210910 up until, and including,
# v2022.
supported-by:
  - first:
      release: v2021
      version: '20210910'
      inclusive: false
    last:
      release: v2022

Changing the Test Cases Source

Test cases are, by default, pulled from the Apertis test cases Git repository. The individual YAML files describing test cases are taken from every branch starting with apertis/, with the text after the prefix being treated as the release (e.g. given apertis/v2020, the release would be v2020). If you want to change the URL or the branch prefix (the apertis/ part), set test-cases-repository:

test-cases-repository:
  url: THE-URL
  branch-prefix: THE-BRANCH-PREFIX

where:

  • THE-URL is a URL pointing to a Git repository containing the test cases.
  • THE-BRANCH-PREFIX is the prefix of the branches that test cases are taken from (apertis/ by default).

Note that the portion of the branch name after the branch prefix is treated as the release/suite.
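For instance, a hypothetical configuration that pulls test cases from a fork while keeping the default apertis/ branch-naming convention could look like this (the URL is an illustrative placeholder):

test-cases-repository:
  url: https://gitlab.example.com/my-team/apertis-test-cases.git
  branch-prefix: apertis/

With this configuration, a branch named apertis/v2023 would be treated as the v2023 release.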

Bug Tracker Integration

Phabricator

bug-tracking: phabricator
phabricator-url: https://PHABRICATOR-URL/
arcrc: /PATH/TO/ARCRC
phabricator-tasks:
  space: S2
  filter_tag: PRIMARY-TAG
  tags:
    - ADDITIONAL-TAG-1
    - ADDITIONAL-TAG-2
    - ...

This sets up integration with Phabricator, for automated issue creation. /PATH/TO/ARCRC should be the path to an Arcanist global configuration file. One of these can be generated at ~/.arcrc by running:

arc set-config default https://PHABRICATOR-URL/
arc install-certificate

The phabricator-tasks section controls the behavior of tasks that are automatically created by QA Report App from LAVA failures.

space is the Phabricator space that should have visibility into the created tickets.

filter_tag is a Phabricator tag that should identify all tasks automatically filed; new tasks will have this tag assigned, and existing ones will be searched for using this tag. tags defines any additional tags that should be added when opening a new ticket (but these are not used for any other purpose).
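As an illustration, a filled-in phabricator-tasks section might look like the following; the tag names are hypothetical, while S2 is simply the example space used above.

phabricator-tasks:
  space: S2
  filter_tag: qa-report-app
  tags:
    - lava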

GitLab

bug-tracking: gitlab
gitlab-url: https://GITLAB-URL/
gitlab-config: /PATH/TO/GITLAB/CONFIG
gitlab-project: GITLAB/PROJECT
gitlab-issues:
  filter_label: PRIMARY-LABEL
  labels:
    - ADDITIONAL-LABEL-1
    - ADDITIONAL-LABEL-2
    - ...

This sets up integration with GitLab issues, for automated issue creation. /PATH/TO/GITLAB/CONFIG should be the path to a python-gitlab configuration file, and GITLAB/PROJECT should be the GitLab project where issues are stored (e.g. infrastructure/apertis-issues).

The gitlab-issues section controls the behavior of issues that are automatically created by QA Report App from LAVA failures. filter_label is a GitLab label that should identify all issues automatically filed; new issues filed will have this label assigned, and existing ones will be searched using this label. labels defines any additional labels that should be added when opening a new issue (but these are not used for any other purpose).

Note that gitlab-url is additionally used for configuring OAuth logins and thus may be set even if GitLab issues integration is not set up.
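Putting these options together, a hypothetical GitLab configuration might look like this; the URL, config path, and label names are illustrative, and the project matches the example given above.

bug-tracking: gitlab
gitlab-url: https://gitlab.example.org/
gitlab-config: /secrets/python-gitlab.cfg
gitlab-project: infrastructure/apertis-issues
gitlab-issues:
  filter_label: qa-report-app
  labels:
    - lava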

Environment for docker-compose

In addition to the standalone configuration file, a separate file .env, providing environment variables to docker-compose, can be created in the qa-report-app directory. These variables are used to configure hosting aspects, rather than the behavior of the application itself.

The following variables are available to be set, shown here with their default values:

# The port the application will be exposed on.
QA_REPORT_APP_PORT=28080
# The user ID and group ID the application and database will run as. In
# particular, this results in the database files also being owned by this user
# and group.
RUN_USER=1000:1000

LAVA Authentication

QA Report App uses shared secrets to authenticate legitimate LAVA callback submissions. In order to set up a new shared secret, you need to:

  1. Create a new token in the LAVA UI (API → Authentication Tokens) for the user submitting the notify callback, using the name set in the token property of the notify callback in the LAVA job definition YAML (see the sketch after this list).
  2. Place the token in the ./secrets/token file in QA Report App’s working directory.
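For reference, the LAVA side of this is a notify callback in the job definition that names the same token. The following is a rough sketch only; the exact notify schema depends on the LAVA version, and the callback URL shown is an illustrative placeholder.

notify:
  criteria:
    status: finished
  callbacks:
    - url: https://QA-REPORT-APP-URL/PATH/TO/CALLBACK   # illustrative placeholder for the QA Report App callback endpoint
      method: POST
      token: QA-REPORT-APP-TOKEN   # must match the token name created in the LAVA UI
      content-type: json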