EnricoMi / Publish Unit Test Result Action

License: Apache-2.0
GitHub Action to publish unit test results on GitHub

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives to or similar to Publish Unit Test Result Action

Cfn nag
Linting tool for CloudFormation templates
Stars: ✭ 808 (+1038.03%)
Mutual labels:  continuous-testing, unit-testing
kraken
Kraken CI is a continuous integration and testing system.
Stars: ✭ 87 (+22.54%)
Mutual labels:  reporting, continuous-testing
Stocker
Stocker is a currency monitoring app. It offers instant currency rates of banks.
Stars: ✭ 38 (-46.48%)
Mutual labels:  unit-testing
React Cool Starter
😎 🐣 A starter boilerplate for a universal web app with the best development experience and a focus on performance and best practices.
Stars: ✭ 1,083 (+1425.35%)
Mutual labels:  unit-testing
Thinreports Php
An implementation of the Thinreports Generator in PHP. It provides an easy and simple way to generate PDFs in pure PHP.
Stars: ✭ 51 (-28.17%)
Mutual labels:  reporting
Junit Extensions
JUnit5 extensions library including JUnit5 equivalents of some of the common JUnit4 rules: ExpectedException, TemporaryFolder etc
Stars: ✭ 39 (-45.07%)
Mutual labels:  unit-testing
Mockstar
Demo project on How to be a Mockstar using Mockito and MockWebServer.
Stars: ✭ 53 (-25.35%)
Mutual labels:  unit-testing
Ghpr.nunit
Adapter for NUnit 3 (generates an HTML report for NUnit 3)
Stars: ✭ 33 (-53.52%)
Mutual labels:  reporting
C koans
C Koans
Stars: ✭ 65 (-8.45%)
Mutual labels:  unit-testing
Lode
A universal GUI for unit testing
Stars: ✭ 51 (-28.17%)
Mutual labels:  unit-testing
Data Mocks
Library to mock local data requests using Fetch or XHR
Stars: ✭ 55 (-22.54%)
Mutual labels:  unit-testing
Xmlunit.net
XMLUnit.NET 2.x
Stars: ✭ 50 (-29.58%)
Mutual labels:  unit-testing
Aspnetcore Tests Sample
A project to help demonstrate how to do unit, integration and acceptance tests with a web API project using ASP.NET Core and an Angular 7 front end.
Stars: ✭ 40 (-43.66%)
Mutual labels:  unit-testing
Criterion
A cross-platform C and C++ unit testing framework for the 21st century
Stars: ✭ 1,073 (+1411.27%)
Mutual labels:  unit-testing
Gowitness
🔍 gowitness - a golang, web screenshot utility using Chrome Headless
Stars: ✭ 996 (+1302.82%)
Mutual labels:  reporting
Qtools
QTools is a collection of open source tools for embedded systems development on Windows, Linux and macOS
Stars: ✭ 64 (-9.86%)
Mutual labels:  unit-testing
Persimmon
A unit test framework for F# using computation expressions.
Stars: ✭ 37 (-47.89%)
Mutual labels:  unit-testing
React Todo
ReactJS + CSS Modules + Sass + Blueprint
Stars: ✭ 49 (-30.99%)
Mutual labels:  unit-testing
Awesome Android Kotlin Apps
👓 A curated list of awesome android kotlin apps by open-source contributors.
Stars: ✭ 1,058 (+1390.14%)
Mutual labels:  unit-testing
Simplestubs
*SimpleStubs* is a simple mocking framework that supports Universal Windows Platform (UWP), .NET Core and .NET framework. SimpleStubs is currently developed and maintained by Microsoft BigPark Studios in Vancouver.
Stars: ✭ 66 (-7.04%)
Mutual labels:  unit-testing

GitHub Action to Publish Unit Test Results

This GitHub Action analyses unit test result files and publishes the results on GitHub. It supports the JUnit XML file format.
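For reference, a minimal JUnit XML file looks like this (a made-up example; suite, class and test names are arbitrary):

<?xml version="1.0" encoding="UTF-8"?>
<testsuite name="my.test.Suite" tests="3" failures="1" errors="0" skipped="1" time="0.42">
  <testcase classname="my.test.Suite" name="test_success" time="0.10"/>
  <testcase classname="my.test.Suite" name="test_skipped" time="0.00">
    <skipped/>
  </testcase>
  <testcase classname="my.test.Suite" name="test_failure" time="0.32">
    <failure message="assertion failed">expected 1 but was 2</failure>
  </testcase>
</testsuite>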

Unit test results are published in the GitHub Actions section of the respective commit:

[Screenshot: checks comment example]

Note: This action does not fail if unit tests fail. The action that executes the unit tests should fail on test failure.

Each failing test will produce an annotation with failure details:

[Screenshot: annotations example]

Note: Only the first failure of a test is shown. If you want to see all failures, set report_individual_runs: "true".
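For example:

with:
  report_individual_runs: "true"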

A comment is posted on the pull request of that commit, if one exists. In the presence of failures or errors, the comment links to the respective check page with failure details:

[Screenshot: pull request comment example]

The checks section of the pull request also lists a short summary (here 1 fail, 1 skipped, 17 pass in 12s), and a link to the GitHub Actions section (here Details):

[Screenshot: pull request checks example]

The result distinguishes between tests and runs. In some situations, tests run multiple times, e.g. in different environments. Displaying the number of runs makes unexpected changes in that number easy to spot.

The change statistics (e.g. 5 tests ±0) can sometimes hide test removal. Removed tests are therefore highlighted in pull request comments, to make unintended test removal easy to spot:

[Screenshot: pull request comment example with test changes]

Note: This requires check_run_annotations to be set to all tests, skipped tests.
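For example:

with:
  check_run_annotations: all tests, skipped tests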

The symbols have the following meaning:

Symbol  Meaning
✔️      A successful test or run
💤      A skipped test or run
❌      A failed test or run
🔥      An erroneous test or run
⏱       The duration of all tests or runs

Using this Action

You can add this action to your GitHub workflow as follows:

- name: Publish Unit Test Results
  uses: EnricoMi/publish-unit-test-result-action@v1
  if: always()
  with:
    files: test-results/**/*.xml

The if: always() clause guarantees that this action always runs, even if earlier steps (e.g., the unit test step) in your workflow fail.

Using pre-built Docker images

You can use a pre-built Docker image from GitHub Container Registry (Beta). This way, the action is not built for every run of your workflow, and you are guaranteed to get the exact same action build:

- name: Publish Unit Test Results
  uses: docker://ghcr.io/enricomi/publish-unit-test-result-action:v1
  if: always()
  with:
    github_token: ${{ github.token }}
    files: test-results/**/*.xml

Note: GitHub Container Registry is currently in beta phase. This action may abandon GitHub Container Registry support when GitHub changes its conditions.

Configuration

The action publishes results to the commit that it has been triggered on. Depending on the workflow event, this can be a different kind of commit. See the GitHub Workflow documentation for which commit the GITHUB_SHA environment variable actually refers to.

Pull request related events refer to the merge commit, which is not your pushed commit and is not part of the commit history shown on GitHub. Therefore, the actual pushed commit SHA is used, as provided by the event payload.

If you need the action to use a different commit SHA than those described above, you can set it via the commit option:

with:
  commit: ${{ your-commit-sha }}

The job name in the GitHub Actions section that provides the test results can be configured via the check_name option. It is optional and defaults to "Unit Test Results", as shown in the screenshots above.

Each run of the action creates a new comment on the respective pull request with unit test results. The title of the comment can be configured via the comment_title option. It is optional and defaults to the check_name option.
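For example:

with:
  check_name: Unit Test Results
  comment_title: Unit Test Statistics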

In the rare situation that your workflow builds and tests the actual commit, rather than the merge commit provided by GitHub via GITHUB_SHA, you can configure the action via pull_request_build. With commit, it assumes that the actual commit is being built; with merge, it assumes the merge commit is being built. The default is merge.
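For example, when your workflow builds the actual commit:

with:
  pull_request_build: commit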

The hide_comments option allows hiding earlier comments to reduce the volume of comments. The default is all but latest, which hides all earlier comments of the action. Setting the option to orphaned commits hides comments only for orphaned commits, i.e. commits that no longer belong to the pull request (due to a commit history rewrite). Hiding comments can be disabled altogether with the value off.

To disable comments on pull requests completely, set the option comment_on_pr to false. Pull request comments are enabled by default.
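For example, to hide only comments of orphaned commits while keeping pull request comments enabled:

with:
  hide_comments: orphaned commits
  comment_on_pr: true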

Files can be selected via the files option, which is optional and defaults to the current working directory. It supports wildcards like *, **, ? and []. The ** wildcard matches directories recursively: ./, ./*/, ./*/*/, etc.
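For example, to match Maven Surefire reports anywhere below the working directory (the directory layout here is just an assumption about your project):

with:
  files: "**/surefire-reports/TEST-*.xml"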

If multiple runs exist for a test, only the first failure is reported, unless report_individual_runs is true.

In the rare situation where a project contains test class duplicates with the same name in different files, you may want to set deduplicate_classes_by_file_name to true.
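Both of these options would be set like this:

with:
  report_individual_runs: true
  deduplicate_classes_by_file_name: true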

With check_run_annotations, the check run provides additional information. Separate multiple values with commas:

  • All found tests are displayed with all tests.
  • All skipped tests are listed with skipped tests.

This additional information is only added on the default branch of your repository, e.g. main or master. Use check_run_annotations_branch to enable it for multiple branches (a comma-separated list) or all branches ("*").
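For example, to add both lists on all branches:

with:
  check_run_annotations: all tests, skipped tests
  check_run_annotations_branch: "*"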

Pull request comments highlight removed tests and tests that the pull request moves into skipped state. Those removed or skipped tests are added as a list, whose length is limited by test_changes_limit, which defaults to 5. Setting this limit to 0 disables the list entirely. This feature requires check_run_annotations to contain all tests (to detect test addition and removal) and skipped tests (to detect newly skipped and un-skipped tests), and it requires check_run_annotations_branch to contain your default branch.

See this complete list of configuration options for reference:

  with:
    github_token: ${{ secrets.PAT }}
    commit: ${{ your-commit-sha }}
    check_name: Unit Test Results
    comment_title: Unit Test Statistics
    hide_comments: all but latest
    comment_on_pr: true
    pull_request_build: commit
    test_changes_limit: 5
    files: test-results/**/*.xml
    report_individual_runs: true
    deduplicate_classes_by_file_name: false
    check_run_annotations_branch: main, master, branch_one
    check_run_annotations: all tests, skipped tests

Use with matrix strategy

In a scenario where your unit tests run multiple times in different environments (e.g. a strategy matrix), the action should run only once over all test results. For this, put the action into a separate job that depends on all your test environments. Those need to upload the test results as artifacts, which are then all downloaded by your publish job.

name: CI

on: [push]

jobs:
  build-and-test:
    name: Build and Test (Python ${{ matrix.python-version }})
    runs-on: ubuntu-latest

    strategy:
      fail-fast: false
      matrix:
        python-version: [3.6, 3.7, 3.8]

    steps:
      - name: Checkout
        uses: actions/checkout@v2

      - name: Setup Python ${{ matrix.python-version }}
        uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}

      - name: PyTest
        run: python -m pytest test --junit-xml pytest.xml

      - name: Upload Unit Test Results
        if: always()
        uses: actions/upload-artifact@v2
        with:
          name: Unit Test Results (Python ${{ matrix.python-version }})
          path: pytest.xml

  publish-test-results:
    name: "Publish Unit Tests Results"
    needs: build-and-test
    runs-on: ubuntu-latest
    # the build-and-test job might be skipped, we don't need to run this job then
    if: success() || failure()

    steps:
      - name: Download Artifacts
        uses: actions/download-artifact@v2
        with:
          path: artifacts

      - name: Publish Unit Test Results
        uses: EnricoMi/publish-unit-test-result-action@v1
        with:
          check_name: Unit Test Results
          files: artifacts/**/*.xml

Support fork repositories and dependabot branches

Getting unit test results of pull requests created by Dependabot or by contributors from fork repositories requires some additional setup.

  1. Condition the publish-unit-test-result action in your CI workflow to only publish test results when the action runs in your repository's context.
  2. Your CI workflow has to upload unit test result files.
  3. Set up an additional workflow on workflow_run events, which starts on completion of the CI workflow, downloads the unit test result files and runs this action on them.

Add this condition to your publish test results step in your CI workflow:

if: >
  always() && ! startsWith(github.ref, 'refs/heads/dependabot/') && (
    github.event_name != 'pull_request' || github.event.pull_request.head.repo.full_name == github.repository
  )

Add the following action step to your CI workflow to upload unit test results as artifacts. Adjust the value of path to fit your setup:

- name: Upload Test Results
  if: always()
  uses: actions/upload-artifact@v2
  with:
     name: Unit Test Results
     path: |
        test-results/*.xml

If you run tests in a strategy matrix, make the artifact name unique for each job, e.g. name: Unit Test Results (${{ matrix.python-version }}).
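Such an upload step could look like this (a sketch, assuming the python-version matrix from the example above):

- name: Upload Test Results (${{ matrix.python-version }})
  if: always()
  uses: actions/upload-artifact@v2
  with:
    # the artifact name must be unique per matrix job
    name: Unit Test Results (${{ matrix.python-version }})
    path: test-results/*.xml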

Add the following workflow that publishes unit test results. It downloads and extracts all artifacts into artifacts/ARTIFACT_NAME/, where ARTIFACT_NAME will be Unit Test Results when set up as above, or Unit Test Results (…) when run in a strategy matrix. It then runs the action on files in artifacts/*/. Replace * with the name of your unit test artifacts if * does not work for you. Also adjust the value of workflows (here "CI") to fit your setup:

name: Unit Test Results

on:
   workflow_run:
      workflows: ["CI"]
      types:
         - completed

jobs:
   unit-test-results:
      name: Unit Test Results
      runs-on: ubuntu-latest
      if: >
         github.event.workflow_run.conclusion != 'skipped' && (
           startsWith(github.event.workflow_run.head_branch, 'dependabot/') ||
           github.event.workflow_run.head_repository.full_name != github.repository
         )

      steps:
         - name: Download Artifacts
           uses: actions/github-script@v3
           with:
              script: |
                 var fs = require('fs');
                 var path = require('path');
                 var artifacts_path = path.join('${{github.workspace}}', 'artifacts')
                 fs.mkdirSync(artifacts_path, { recursive: true })

                 var artifacts = await github.actions.listWorkflowRunArtifacts({
                    owner: context.repo.owner,
                    repo: context.repo.repo,
                    run_id: ${{ github.event.workflow_run.id }},
                 });

                 for (const artifact of artifacts.data.artifacts) {
                    var download = await github.actions.downloadArtifact({
                       owner: context.repo.owner,
                       repo: context.repo.repo,
                       artifact_id: artifact.id,
                       archive_format: 'zip',
                    });
                    var artifact_path = path.join(artifacts_path, `${artifact.name}.zip`)
                    fs.writeFileSync(artifact_path, Buffer.from(download.data));
                    console.log(`Downloaded ${artifact_path}`);
                 }
         - name: Extract Artifacts
           run: |
              for file in artifacts/*.zip
              do
                if [ -f "$file" ]
                then
                  dir="${file/%.zip/}"
                  mkdir -p "$dir"
                  unzip -d "$dir" "$file"
                fi
              done

         - name: Publish Unit Test Results
           uses: EnricoMi/publish-unit-test-result-action@v1
           with:
              commit: ${{ github.event.workflow_run.head_sha }}
              files: "artifacts/*/**/*.xml"

Note: Running this action on pull_request_target events is dangerous if combined with code checkout and code execution.
