Analyze automated test failures

If a pipeline run includes failed automated tests, the pipeline run's Failure Analysis tab can help you learn more about the failing tests and what they affect.

Overview

Using the Failure Analysis tab, you can do the following and more:

  • Find SCM commits related to the failing tests.

  • See which application modules have the most failing tests.

  • Identify problematic tests, that is, tests that have not been consistently successful over time.

To open the Failure Analysis tab:

  1. Open the Pipelines module.

  2. Open a specific pipeline run:

    • To open the last or current run, click the number to the right of the pipeline name.

    • To open a previous run, click the pipeline's ID to open the pipeline. In the Runs tab, click the ID of a specific run.


Failure analysis widgets

The top of the Failure Analysis tab displays several widgets that offer insight into your build and product quality. For example:

  • Failed Tests. Displays the total number of tests that failed in this pipeline run, broken down into newly failed tests and tests that had already failed in previous runs (see the sketch after this list).

  • Application Modules. Lists the application modules currently associated with automated tests that failed in this pipeline run. The widget shows how many of the tests assigned to each application module failed.

    Note: To use this widget, automated tests must be assigned to application modules. For details on assigning items to application modules, see Work with application modules.

  • Problematic Tests. Shows a breakdown of automated tests that have not been consistently successful, according to the type of problem. For more detail, see Problematic tests.
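
As a minimal illustration of the Failed Tests breakdown mentioned above (assuming "previously failed" means the test also failed in the preceding pipeline run; the test names are made up):

    # Sets of failed test names for the current and previous pipeline runs (made-up data).
    current_failed = {"LoginTest#redirect", "CartTest#checkout", "SearchTest#emptyQuery"}
    previous_failed = {"CartTest#checkout", "SearchTest#emptyQuery"}

    newly_failed = current_failed - previous_failed    # failed now, but not in the previous run
    still_failing = current_failed & previous_failed   # failed in the previous run as well

    print(f"Failed: {len(current_failed)} "
          f"({len(newly_failed)} new, {len(still_failing)} previously failed)")
    # Failed: 3 (1 new, 2 previously failed)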


Basic information about failed runs

The bottom of the Failure Analysis tab shows a grid of failed test runs. Here you can see data about the failed test runs, such as:

  • Test name, class, and package.

  • The build number in which the test run failed.

  • The ID and name of the build job that ran the test.

  • The number of consecutive builds in which the test run has been failing.

  • Any tags you added to the test or test run in ALM Octane.

Tip: Add columns to the grid to see information that is not displayed by default.
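
As an illustration only, a row in this grid could be modeled as a record like the following sketch. The field names are hypothetical and do not correspond to ALM Octane's actual entity fields or REST API.

    from dataclasses import dataclass, field

    # Hypothetical record for one row of the failed test runs grid
    # (field names are illustrative, not ALM Octane's schema).
    @dataclass
    class FailedTestRun:
        test_name: str              # name of the automated test
        class_name: str             # test class
        package: str                # test package
        build_number: int           # build in which the test run failed
        job_id: str                 # ID of the build job that ran the test
        job_name: str               # name of the build job
        consecutive_failures: int   # number of builds the test has been failing in a row
        tags: list[str] = field(default_factory=list)  # tags added in ALM Octane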


Expand your analysis

Here are some additional steps you can take as you investigate failing tests:

  • Select a run in the grid to display additional data about the failure, such as the error message and stack trace.

  • If you are looking at a pipeline's latest run, you can create a defect related to one or more of the failed test runs:

    Select the relevant rows in the grid and click Report Defect.

    The new defect is linked to the runs you selected and contains a link to this pipeline run's Failure Analysis tab.

  • Add additional columns to the grid of failed test runs. For example:

    • Pipeline links. Navigate to the pipeline and to a particular run of the pipeline.

    • Application modules. See which application modules are currently associated with the tests that failed.

    • Problem. See whether the test has a history of unsuccessful runs. For details, see Problematic tests.

    • Linked defects. See defects that are linked to the failed run.


Problematic tests

Problematic tests are tests that are not running successfully on a consistent basis. They may be failing repeatedly or randomly, being continuously skipped, or suddenly failing after a history of successful runs. ALM Octane highlights problematic tests because such behavior indicates a situation you might want to investigate:

  • An automated test run's Problem field indicates the type of problem this test is having. For example, Continuously failing, Oscillating, Continuously skipped, and more.

  • The Problematic tests widget shows a breakdown of test runs that have not been consistently successful, according to the type of problem. Click on the column of a specific problem type to view the relevant test runs.

The widget is available in the dashboard, in a pipeline's overview, and in a pipeline run's Failure Analysis tab.

Note: The widget in the Failure Analysis tab may include fewer tests, as it includes only problematic tests that failed in the selected pipeline run. For example, this widget does not include the Skipped category.

ALM Octane labels the following test run result patterns as problematic (a code sketch of these rules follows the examples below):

  • Continuously failing. The last 8 runs of the test failed.

  • Oscillating. In the last 8 runs of the test, its Pass/Fail status changed 4 times or more. In other words, there were at least 4 times in which a failed run was followed by a successful run, or vice versa.

  • Regression. A test that previously passed at least twice is now failing: looking at the last 4 or more runs, the series ends with at least 2 passed runs followed by one or more failed runs.

  • Continuously skipped. The test was skipped in the last 8 runs of the pipeline.

  • Unstable. A test that is failing randomly: in the last 50 runs of the test, there were at least 5 times where the test passed, failed once, and then passed again.

Note:  

  • If the test run results match more than one problematic pattern, the Problem field contains multiple problem types.

  • In any of these patterns, the test may have been skipped in some of the pipeline runs, as long as it was not skipped in the most recent pipeline run.

Example: P = Passed, F = Failed, S = Skipped.

FSFFFSFFFF: Continuously failing

FFFFFFFFFFS: Not problematic (ends with skipped)

PFPPSPFPP: Oscillating (4 changes)

PFPPFPPSS: Not problematic (ends with skipped)

PPPF: Regression

PPFFFSFFF: Regression

SSSSSSSS: Continuously skipped

SSFSSSSS: Not problematic (not all 8 skipped)
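
To make these definitions concrete, the following is a minimal Python sketch that classifies a run history string according to the rules and examples above. It is an illustration, not ALM Octane's implementation; in particular, the way skipped runs in the middle of a history are handled (ignored when matching the pass/fail patterns) is an assumption inferred from the examples.

    import re

    # Run history is a string of results ordered oldest -> newest:
    # 'P' = Passed, 'F' = Failed, 'S' = Skipped.
    def classify(history: str) -> list[str]:
        problems = []

        # Continuously skipped: the test was skipped in the last 8 runs of the pipeline.
        if len(history) >= 8 and set(history[-8:]) == {"S"}:
            problems.append("Continuously skipped")

        # The remaining patterns do not apply if the most recent run was skipped.
        if history.endswith("S"):
            return problems

        # Assumption: skipped runs in the middle of the history are ignored
        # when matching the pass/fail patterns below.
        effective = history.replace("S", "")
        last8 = effective[-8:]

        # Continuously failing: the last 8 runs of the test failed.
        if len(last8) == 8 and set(last8) == {"F"}:
            problems.append("Continuously failing")

        # Oscillating: Pass/Fail status changed 4 times or more in the last 8 runs.
        if sum(1 for a, b in zip(last8, last8[1:]) if a != b) >= 4:
            problems.append("Oscillating")

        # Regression: at least 4 runs, ending with 2+ passed runs followed by 1+ failed runs.
        if len(effective) >= 4 and re.search(r"PP+F+$", effective):
            problems.append("Regression")

        # Unstable: in the last 50 runs, a passed-failed-passed sequence occurred 5 or more times.
        last50 = effective[-50:]
        if sum(1 for i in range(len(last50) - 2) if last50[i:i + 3] == "PFP") >= 5:
            problems.append("Unstable")

        return problems

    # The examples above, as quick checks:
    assert classify("FSFFFSFFFF") == ["Continuously failing"]
    assert classify("FFFFFFFFFFS") == []                    # ends with skipped
    assert classify("PFPPSPFPP") == ["Oscillating"]
    assert classify("PFPPFPPSS") == []                      # ends with skipped
    assert classify("PPPF") == ["Regression"]
    assert classify("PPFFFSFFF") == ["Regression"]
    assert classify("SSSSSSSS") == ["Continuously skipped"]
    assert classify("SSFSSSSS") == []                       # not all of the last 8 skipped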

See also: