
pytest-elk-reporter

[Badges: PyPI version · Python versions · CI tests (.github/workflows/tests.yml) · Libraries.io dependency status · code style: Black · Codecov reports]

A plugin to send pytest test results to an ELK stack, along with extra context data

Features

  • Report each test result to Elasticsearch as it finishes
  • Automatically append contextual data to each test:
    • git information such as the branch, the last commit, and more
    • all CI environment variables
      • Jenkins
      • Travis
      • Circle CI
      • GitHub Actions
    • username, if available
  • Report a test summary to Elasticsearch for each session, with all the context data
  • Append any user data to the context sent to Elasticsearch

Requirements

Installation

You can install "pytest-elk-reporter" via pip from PyPI:

pip install pytest-elk-reporter

Elasticsearch configuration

The auto_create_index setting needs to be enabled for the indexes that are going to be used, since the plugin doesn't create the indexes itself. Note that this is the Elasticsearch default:

curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
    "persistent": {
        "action.auto_create_index": "true"
    }
}
'

For more info on this Elasticsearch feature, check the Elasticsearch index documentation.
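
If you'd rather apply the setting from Python than curl, here is a minimal sketch using requests (assuming Elasticsearch is reachable at localhost:9200 without authentication):

import requests

# Enable automatic index creation cluster-wide (same call as the curl example above).
resp = requests.put(
    "http://localhost:9200/_cluster/settings",
    json={"persistent": {"action.auto_create_index": "true"}},
)
resp.raise_for_status()
print(resp.json())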

Usage

Run and configure from command line

pytest --es-address 127.0.0.1:9200
# or if you need user/password to authenticate
pytest --es-address my-elk-server.io:9200 --es-username fruch --es-password 'passwordsarenicetohave'

Configure from code (ideally in conftest.py)

from pytest_elk_reporter import ElkReporter

def pytest_plugin_registered(plugin, manager):
    if isinstance(plugin, ElkReporter):
        # TODO: get the credentials in a more secure fashion programmatically, maybe from AWS Secrets Manager or the like
        # or put them in plain text in the code... what could ever go wrong...
        plugin.es_address = "my-elk-server.io:9200"
        plugin.es_user = "fruch"
        plugin.es_password = "passwordsarenicetohave"
        plugin.es_index_name = "test_data"
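
One way to handle the TODO above is to read the credentials from environment variables (populated, for example, by your CI secret store); a minimal sketch, where the ELK_* variable names are made up for illustration:

import os

from pytest_elk_reporter import ElkReporter

def pytest_plugin_registered(plugin, manager):
    if isinstance(plugin, ElkReporter):
        # The ELK_* variable names are hypothetical - set them in your CI secrets or shell profile.
        plugin.es_address = os.environ.get("ELK_ADDRESS", "my-elk-server.io:9200")
        plugin.es_user = os.environ.get("ELK_USER", "")
        plugin.es_password = os.environ.get("ELK_PASSWORD", "")
        plugin.es_index_name = os.environ.get("ELK_INDEX", "test_data")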

Configure from pytest ini file

# put this in pytest.ini / tox.ini / setup.cfg
[pytest]
es_address = my-elk-server.io:9200
es_user = fruch
es_password = passwordsarenicetohave
es_index_name = test_data

See the pytest docs for more details on how to configure pytest using .ini files.

Collect context data for the whole session

In this example, I'll be able to build a dashboard for each version:

import pytest

@pytest.fixture(scope="session", autouse=True)
def report_formal_version_to_elk(request):
    """
    Append my own specific data, for example which version of the code under test is used
    """
    # TODO: programmatically set to the version of the code under test...
    my_data = {"formal_version": "1.0.0-rc2" }

    elk = request.config.pluginmanager.get_plugin("elk-reporter-runtime")
    elk.session_data.update(**my_data)
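
To avoid hard-coding the version, one option is to read it from the installed package metadata; a minimal sketch of the same fixture, assuming the code under test is installed as a distribution named "my-package" (an illustrative name):

import importlib.metadata

import pytest

@pytest.fixture(scope="session", autouse=True)
def report_formal_version_to_elk(request):
    """Report the installed version of the code under test with every session."""
    # "my-package" is a placeholder for your real distribution name.
    version = importlib.metadata.version("my-package")

    elk = request.config.pluginmanager.get_plugin("elk-reporter-runtime")
    elk.session_data.update(formal_version=version)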

Collect data for specific tests

import requests

def test_my_service_and_collect_timings(request, elk_reporter):
    response = requests.get("http://my-server.io/api/do_something")
    assert response.status_code == 200

    elk_reporter.append_test_data(request, {"do_something_response_time": response.elapsed.total_seconds() })
    # now, a dashboard showing response time by version should be quite easy
    # and yeah, it's not exactly a real usable metric, but it's just one example...

Or via the built-in record_property fixture (which is normally used to collect data into junit.xml reports):

import requests

def test_my_service_and_collect_timings(record_property):
    response = requests.get("http://my-server.io/api/do_something")
    assert response.status_code == 200

    record_property("do_something_response_time", response.elapsed.total_seconds())
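
record_property isn't limited to HTTP timings; any key/value you record is appended to that test's data. A purely illustrative sketch that times an arbitrary computation:

import time

def test_compute_and_record_duration(record_property):
    start = time.monotonic()
    result = sum(range(1_000_000))  # stand-in for the real work under test
    duration = time.monotonic() - start

    assert result > 0
    record_property("compute_duration_seconds", duration)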

Split tests based on their duration histories

One cool thing you can do now that you have a history of the tests is to split a run based on the tests' actual runtime when passing. For long-running integration tests, this is priceless.

In this example, we're going to split the run into slices of at most 4 minutes. Any test that doesn't have history information is assumed to take 60 seconds.

# pytest --collect-only --es-splice --es-max-splice-time=4 --es-default-test-time=60
...

0: 0:04:00 - 3 - ['test_history_slices.py::test_should_pass_1', 'test_history_slices.py::test_should_pass_2', 'test_history_slices.py::test_should_pass_3']
1: 0:04:00 - 2 - ['test_history_slices.py::test_with_history_data', 'test_history_slices.py::test_that_failed']

...

# cat include000.txt
test_history_slices.py::test_should_pass_1
test_history_slices.py::test_should_pass_2
test_history_slices.py::test_should_pass_3

# cat include001.txt
test_history_slices.py::test_with_history_data
test_history_slices.py::test_that_failed

### now we can run each slice on its own machine
### on machine1
# pytest $(cat include000.txt)

### on machine2
# pytest $(cat include001.txt)
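
If you prefer not to rely on shell substitution, the include files can also be fed to pytest programmatically; a small hypothetical helper (not part of the plugin):

# run_slice.py - hypothetical helper, not shipped with pytest-elk-reporter
import sys

import pytest

def main(include_file):
    with open(include_file) as f:
        node_ids = [line.strip() for line in f if line.strip()]
    # Equivalent to: pytest <node-id> <node-id> ...
    return pytest.main(node_ids)

if __name__ == "__main__":
    sys.exit(main(sys.argv[1]))

For example, on machine1 this would be invoked as: python run_slice.py include000.txt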

Contributing

Contributions are very welcome. Tests can be run with tox. Please ensure the coverage at least stays the same before you submit a pull request.

License

Distributed under the terms of the MIT license, "pytest-elk-reporter" is free and open source software.

Issues

If you encounter any problems, please file an issue along with a detailed description.

Thanks

This pytest plugin was generated with Cookiecutter along with @hackebrot's cookiecutter-pytest-plugin template.
