The following are guidelines to create and edit test cases. Please follow them unless there is a compelling reason to make an exception.

Storage

The test cases, both manual and automated, are written in the LAVA test definition file format, which stores the instructions to run the automated tests in YAML files.

Git is used as the data storage backend for all the test cases. The current Apertis tests can be found in the Apertis Test Cases repository. The test cases are versioned using Git branches to enable functionality change without breaking tests for older releases.

The format has been extended to add all the required test case data to the test definition files. A description of these changes can be found in the README.md. This approach avoids the issue of test case instructions differing from the executed steps in automated tests, since the test case and the definition file are the same document. The atc utility is provided with the test cases to render them to HTML. Test cases labeled as automated are run in LAVA; those labeled as manual should be run by hand, following the steps generated in the HTML rendering.

Workflow

Each test case should be created in the apertis-test-cases repository, from which the QA website is automatically rendered.

To develop a new test case the proper workflow should be:

  • Make sure the test case doesn’t exist by checking the test cases list.
  • Make sure nobody is already working on a test case by checking the Phabricator tasks.
  • Determine the main feature to focus the test upon and set the test case identifier accordingly.
  • Determine the release your test is targeting.
  • Clone the apertis-test-cases repository.
  • Check out the desired branch and create the desired test.
  • Provide a merge request targeting the desired branch.
  • Ask a QA team member to review your test.

Developers should also make sure the test cases are kept up to date.

Identification

To keep test case identifiers consistent, they should be in the following form:

<component>-<feature>-<sub-feature>-<suffix>

  • The component part is mandatory. It can be a module name or one of the following keywords: sdk, image or system.
  • The feature part is useful to mark the relation with the specification. It should be common between several test cases.
  • The sub-feature part should be concise but precise enough to describe the specific functions tested by that case.
  • The following suffixes can be used in specific scenarios:
    • unit - Unit test suite shipped with the component.
    • performance - Performance oriented test.
    • compatibility - Compatibility test.

Moreover, the following rules should be taken into account while choosing the identifier:

  • All lower case characters.
  • Use single words to describe components and features. If not possible, use dashes to separate words (e.g. hw-accelerated).
  • Be concise yet thorough.
  • Avoid using mutable states in names (e.g., foo-bar-manual, because it may later be automated).
  • Avoid using only the component as the identifier (e.g. libbar) or a generic suffix (e.g. foo-all); it usually means the test case should be split.

Here are some good identifier examples:

  • folks-metacontacts-linking (component: folks, feature: metacontacts, sub-feature: linking)
  • gettext-i18n (component: gettext, feature: i18n)
  • sdk-roller-widget-performance (component: sdk, feature: roller-widget, suffix: performance)
  • telepathy-gabble-unit (component: telepathy-gabble, suffix: unit)
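
The identifier is then used as the test case name in the definition file. A minimal metadata fragment, assuming the identifier doubles as the name field of the metadata (as in the example in the next section):

  metadata:
    name: folks-metacontacts-linking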

Test definition

As noted above, test cases are written in the LAVA test definition file format: YAML files containing the instructions to run the tests.

A test definition looks like this:

  metadata:
    name: nfsroot-simple-boot
    format: "Apertis Test Definition 1.0"
    image-types:
      nfsroot: [ armhf, arm64, amd64 ]
    image-deployment:
      - NFS
    type: functional
    exec-type: automated
    priority: high
    task-per-release: enable
    maintainer: "Apertis Project"
    description: "This test checks that an image that was flashed using the nfsroot properly booted."

  expected:
    - "The output should show pass."

  run:
    steps:
      - lava-test-case nfsroot-simple-boot --result pass

  parse:
    pattern: 'TEST_RESULT:(?P<result>\w+):(?P<test_case_id>[^:]+):'

The metadata parameter is mandatory and contains information used by the QA Report Application to handle test cases.

The image-types option lists the image types the test applies to (nfsroot in the example above), each mapped to the architectures (armhf, arm64, amd64) on which the test should run.

The image-deployment option can be apt, ostree, nfs or lxc, indicating the deployment types on which the test should be executed.

The exec-type option can be set to automated or manual. Automated tests are run in LAVA when images are generated, and the results are reported to the QA Report Application for consolidation. For manual tests, a QA engineer runs the test by hand and reports the results.

The priority option can be critical, high, medium or low, denoting how critical a failure of the test is. This is also the suggested priority to use when triaging a task associated with such a failure.

The task-per-release option can be set to enable or disable. When enabled for a given test case, the QA Report App files separate tasks for failures of that test case on different releases, while still using the same ticket for different images or architectures within a given release. When disabled, the QA Report App keeps the current behavior of generating a single task for failures of that test case.
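
As an illustration of these options, a metadata fragment for a manual, medium-priority test covering several image types and deployments might look like the sketch below. The image type names (minimal, target) are illustrative assumptions; check existing test definitions in the target branch for the exact values and capitalization in use.

  metadata:
    name: foo-feature-basic
    format: "Apertis Test Definition 1.0"
    image-types:
      minimal: [ armhf, arm64 ]
      target: [ armhf, arm64, amd64 ]
    image-deployment:
      - apt
      - ostree
    type: functional
    exec-type: manual
    priority: medium
    task-per-release: disable
    maintainer: "Apertis Project"
    description: "Illustrative fragment combining the metadata options described above."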

The run option is the list of steps executed to run the test.
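
For automated tests, the steps are shell commands executed on the device under test. A minimal sketch, assuming the standard LAVA lava-test-case helper is available (the test case and command names are illustrative):

  run:
    steps:
      # Record pass or fail based on the command's exit status
      - lava-test-case check-hostname --shell hostname
      # Or report an explicit result decided by an earlier step
      - lava-test-case manual-verdict --result pass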

The expected option describes the result expected after running the test.
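
The parse option, shown in the definition above, gives a regular expression used to extract results from the test output. With that pattern, a test script printing the line below would have a pass recorded for the nfsroot-simple-boot test case:

  TEST_RESULT:pass:nfsroot-simple-boot: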

The test case file format is very extensible and new fields can be added if necessary.

Recommendations

Please follow these rules while using the template:

  • Fill the description field appropriately. It should explain what the test case is testing even if it is fully automated.
    • Example of valid description: “Test that component A correctly supports feature 1 and feature 2 in condition X”.
    • Example of invalid description: “Run a test script for component A”.
  • The description field must also explain how the test is executed and which exact feature it is testing, for example:
    • “This test runs script X in this way, then runs Y, and executes steps A-B-C to test feature Z”.
    • More examples and explanations can be found on the template page.
  • While listing pre-conditions and execution steps, make sure to include every single step needed when starting from a clean image. Don’t make any assumptions (a sketch follows this list).
    • List all packages needed that aren’t installed by default on the image.
    • List all necessary resources and how to retrieve them.
    • List all expected environment variables.
    • etc.
  • Follow the recommendations about immutable rootfs tests.
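
As a sketch of how explicit the steps should be, the fragment below spells out package installation, resource retrieval and environment set-up directly in the run steps, assuming a mutable (apt-based) image; the package, URL, variable and command names are purely illustrative. On images with an immutable rootfs, follow the recommendations referenced above instead of installing packages directly.

  run:
    steps:
      # Starting from a clean image: install packages the test needs
      # that are not part of the default image
      - apt-get update && apt-get install -y foo-tools
      # Retrieve the resources the test depends on, stating where they come from
      - wget -O /tmp/foo-testdata.tar.gz https://example.com/foo-testdata.tar.gz
      - tar -C /tmp -xf /tmp/foo-testdata.tar.gz
      # Set every environment variable the test expects
      - export FOO_CONFIG=/tmp/foo-testdata/test.conf
      # Finally run the test and record its result
      - lava-test-case foo-feature-basic --shell foo-tool --self-test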