
P403n1x87 / Pytest Austin

Licence: gpl-3.0
Python Performance Testing with Austin

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives to or similar to Pytest Austin

Profilinggo
A quick tour (or reminder) of Go performance tools
Stars: ✭ 219 (+102.78%)
Mutual labels:  performance, profiling
Flamebearer
Blazing fast flame graph tool for V8 and Node 🔥
Stars: ✭ 1,485 (+1275%)
Mutual labels:  performance, profiling
Quickperf
QuickPerf is a testing library for Java to quickly evaluate and improve some performance-related properties
Stars: ✭ 231 (+113.89%)
Mutual labels:  performance, profiling
Tracy
C++ frame profiler
Stars: ✭ 3,115 (+2784.26%)
Mutual labels:  performance, profiling
Tracetools
Tools to process Go trace logs into various profiles. Complement for "go tool trace".
Stars: ✭ 105 (-2.78%)
Mutual labels:  performance, profiling
Myperf4j
High performance Java APM. Powered by ASM. Try it. Test it. If you feel it's better, use it.
Stars: ✭ 2,281 (+2012.04%)
Mutual labels:  performance, profiling
Pyroscope
Continuous Profiling Platform! Debug performance issues down to a single line of code
Stars: ✭ 4,816 (+4359.26%)
Mutual labels:  performance, profiling
Home
Project Glimpse: Node Edition - Spend less time debugging and more time developing.
Stars: ✭ 260 (+140.74%)
Mutual labels:  performance, profiling
Etrace
Emacs Lisp Latency Tracing for the Chromium Catapult Trace Event Format
Stars: ✭ 45 (-58.33%)
Mutual labels:  performance, profiling
Processhacker
A free, powerful, multi-purpose tool that helps you monitor system resources, debug software and detect malware.
Stars: ✭ 6,285 (+5719.44%)
Mutual labels:  performance, profiling
Hotspot
The Linux perf GUI for performance analysis.
Stars: ✭ 2,415 (+2136.11%)
Mutual labels:  performance, profiling
Karma Benchmark
A Karma plugin to run Benchmark.js over multiple browsers with CI compatible output.
Stars: ✭ 88 (-18.52%)
Mutual labels:  performance, profiling
Fgprof
🚀 fgprof is a sampling Go profiler that allows you to analyze On-CPU as well as Off-CPU (e.g. I/O) time together.
Stars: ✭ 1,944 (+1700%)
Mutual labels:  performance, profiling
React Perf Devtool
A browser developer tool extension to inspect performance of React components.
Stars: ✭ 2,277 (+2008.33%)
Mutual labels:  performance, profiling
Stagemonitor
An open source solution to application performance monitoring for Java server applications
Stars: ✭ 1,664 (+1440.74%)
Mutual labels:  performance, profiling
Trace Nodejs
Trace is a visualised distributed tracing platform designed for microservices.
Stars: ✭ 471 (+336.11%)
Mutual labels:  performance, profiling
Observer cli
Visualize Erlang/Elixir Nodes On The Command Line
Stars: ✭ 1,058 (+879.63%)
Mutual labels:  performance, profiling
Phoronix Test Suite
The Phoronix Test Suite open-source, cross-platform automated testing/benchmarking software.
Stars: ✭ 1,339 (+1139.81%)
Mutual labels:  performance, profiling
Performance Handbook
Web Front-End Performance Improvement Handbook
Stars: ✭ 99 (-8.33%)
Mutual labels:  performance
Pg flame
A flamegraph generator for Postgres EXPLAIN ANALYZE output.
Stars: ✭ 1,391 (+1187.96%)
Mutual labels:  performance


pytest-austin

Python Performance Testing with Austin


GitHub Actions: Tests Travis CI Codecov PyPI LICENSE

Synopsis • Installation • Usage • Compatibility • Contribute


Synopsis

The pytest-austin plugin for pytest brings Python performance testing right into your CI pipelines. It uses Austin to profile your test runs without any instrumentation: all you need to do is mark the tests on which you want to run checks that guard against performance regressions.

from datetime import timedelta as td

import pytest


@pytest.mark.total_time(td(milliseconds=50), function="fibonacci")
@pytest.mark.total_time("99%", line=10)
@pytest.mark.total_time("50.3141592653%", line=11)
def test_hello_default():
    fibonacci(27)  # <- line no. 10 in this test script
    fibonacci(25)  # <- line no. 11 in this test script

Any failed checks will be reported by pytest at the end of the session. All the collected statistics are written to a file prefixed with austin_ and followed by a truncated timestamp, inside the pytest rootdir. You can drop this file onto Speedscope for a quick visual representation of your tests.

================================================================ test session starts ================================================================
platform linux -- Python 3.6.9, pytest-6.0.1, py-1.9.0, pluggy-0.13.1 -- /home/gabriele/.cache/pypoetry/virtualenvs/pytest-austin-yu27Ep_e-py3.6/bin/python3.6
cachedir: .pytest_cache
rootdir: /tmp/pytest-of-gabriele/pytest-226/test_austin_time_checks0
plugins: cov-2.10.0, austin-0.1.0
collecting ... collected 3 items

test_austin_time_checks.py::test_lines PASSED
test_austin_time_checks.py::test_check_fails PASSED
test_austin_time_checks.py::test_check_succeeds PASSED

=================================================================== Austin report ===================================================================
austin 2.0.0
Collected stats written on /tmp/pytest-of-gabriele/pytest-226/test_austin_time_checks0/.austin_97148135487643.aprof

🕑 Sampling time (min/avg/max) : 376/3327/18019 μs
🐢 Long sampling rate : 87/87 (100.00 %) samples took longer than the sampling interval
💀 Error rate : 0/87 (0.00 %) invalid samples

test_austin_time_checks.py::test_lines test_lines:19 (test_austin_time_checks.py) -16.0 ms (-78.2% of 20.5 ms)
test_austin_time_checks.py::test_lines test_lines:18 (test_austin_time_checks.py) -4.1 ms (-10.1% of 40.3 ms)
test_austin_time_checks.py::test_lines fibonacci (test_austin_time_checks.py) -9.3 ms (-18.6% of 50.0 ms)
test_austin_time_checks.py::test_check_fails test_check_fails (test_austin_time_checks.py) +99.8 ms (9978.6% of 1000.0 μs)
test_austin_time_checks.py::test_check_succeeds test_check_succeeds (test_austin_time_checks.py) -9.0 ms (-8.1% of 110.0 ms)

================================================================== 1 check failed ===================================================================

================================================================= 3 passed in 0.35s =================================================================

Installation

pytest-austin can be installed directly from PyPI with

pip install pytest-austin --upgrade

NOTE In order for the plugin to work, the Austin binary needs to be on the PATH environment variable. See the Austin installation instructions for how to easily install Austin on your platform.

For platform-specific issues and remarks, refer to the Compatibility section below.

Usage

Once installed, the plugin will try to attach Austin to the pytest process in order to sample it every time you run pytest. If you want to prevent Austin from profiling your tests, you have to steal its mojo. You can do so with the --steal-mojo command line argument.
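
For instance, to run your test suite with profiling disabled:

pytest --steal-mojo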

Time checks

The plugin looks for the total_time marker on collected test items, which takes a mandatory argument time and three optional ones: function, module and line.

If you simply want to check that the duration of a test item doesn't take longer than time, you can mark it with @pytest.mark.total_time(time). Here, time can either be a float (in seconds) or an instance of datetime.timedelta.

from datetime import timedelta as td

import pytest


@pytest.mark.total_time(td(milliseconds=50))
def test_hello_default():
    ...

In some cases, you might want to make sure that a function or method called on a certain line in your test script executes in under a certain amount of time, say 5% of the total test time. You can achieve this like so

import pytest


@pytest.mark.total_time("5%", line=7)
def test_hello_default():
    somefunction()
    fastfunction()  # <- this is line no. 7 in the test script
    someotherfunction()

In many cases, however, one would want to test that a function or a method called either directly or indirectly by a test doesn't take more than a certain overall time to run. This is where the remaining arguments of the total_time marker come into play. Suppose that you want to profile the procedure bar that is called by the method foo of an object of type Snafu. To ensure that bar doesn't take longer than, say, 50% of the overall test duration, you can write

import pytest


@pytest.mark.total_time("50%", function="bar")
def test_snafu():
    ...
    snafu = Snafu()
    ...
    snafu.foo()
    ...

You can use the module argument to resolve function name clashes. For example, if the definition of the function/method bar occurs within the modules somemodule.py and someothermodule.py, but you are only interested in the one defined in somemodule.py, you can change the above into

import pytest


@pytest.mark.total_time("50%", function="bar", module="somemodule.py")
def test_snafu():
    ...
    snafu = Snafu()
    ...
    snafu.foo()
    ...

Whilst you can also specify a line number, this is rarely practical outside of the test scripts themselves, unless the content of the module is stable enough that line numbers don't need to be updated very frequently.

When the plugin runs, it produces output containing lines of the form

test_austin_time_checks.py::test_lines test_lines:19 (test_austin_time_checks.py) -16.0 ms (-78.2% of 20.5 ms)

In this case, a negative number, such as -16.0 ms, indicates that the total time spent on test_lines:19 (test_austin_time_checks.py) was 16.0 ms less than the total allowed time specified with the total_time marker, which in this example is 20.5 ms. Hence, negative numbers indicate a successful check.

Failing tests will have a positive delta reported, e.g.

test_austin_time_checks.py::test_check_fails test_check_fails (test_austin_time_checks.py) +99.8 ms (9978.6% of 1000.0 ΞΌs)

This indicates that the total time spent on test_check_fails (test_austin_time_checks.py) was 99.8 ms more than the required threshold, which was set to 1 ms.
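
To make the arithmetic explicit, here is a sketch of how the reported figures relate; the measured total time is inferred from the sample report above rather than stated in it:

threshold = 0.001                  # 1 ms threshold set with the total_time marker
measured = 0.100786                # total test time inferred from the report above
delta = measured - threshold       # +99.8 ms: a positive delta means a failed check
percent = 100 * delta / threshold  # ~9978.6% of the 1000.0 μs threshold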

Memory checks

One can perform memory allocation checks with the total_memory marker. Its first argument is size, which can be either a percentage of the total memory allocated by the marked test case or an absolute bound on the amount of memory, e.g. "24 MB". The function, module and line arguments work the same way as for the total_time marker. The extra net argument can be set to True to check the total net memory usage instead, that is, the difference between memory allocations and deallocations.

import pytest


@pytest.mark.total_memory("24 MB")
def test_snafu():
    allota_memory()

In order to perform memory checks, you need to specify either the memory or all profile mode via the --profile-mode option.
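
For example, to collect the memory statistics needed by total_memory checks:

pytest --profile-mode memory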

The negative and positive memory deltas reported by the plugin in the report behave like the time deltas described in the previous section. That is, a negative memory delta indicates a successful check, whereas a positive delta indicates a check that has failed.

Mixed checks

When in the all profile mode, you can perform both time and memory checks by stacking total_time and total_memory markers.

import pytest


@pytest.mark.total_time(5.15)
@pytest.mark.total_memory("24 MB")
def test_snafu():
    allota_memory_and_time()

Multi-processing

If your tests spawn other Python processes, you can ask pytest-austin to profile them too with the --minime option. Note that if your tests spawn many non-Python processes, the sampling rate might be affected because of the way Austin discovers Python child processes.
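
For instance:

pytest --minime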

Reporting

This plugin generates a report on the terminal and also dumps the collected profiling statistics to the file system for later analysis and visualisation. The verbosity of the terminal report can be controlled with the --austin-report option. By default, it is set to minimal, which means that only checks that have failed will be reported. Use full to see the results for all the checks that have been detected and executed by the plugin.
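
For example, to see the results of all the checks:

pytest --austin-report full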

Regarding the dump of the profiling statistics, the generated file is in the Austin format by default (this is a generalisation of the collapsed stack format). If you want the plugin to dump the data in either the pprof or speedscope format, you can set the --profile-format option accordingly.
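
For example, to dump the collected statistics in the speedscope format:

pytest --profile-format speedscope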

Compatibility

This plugin has been tested on Linux, macOS and Windows. Given that it relies on Austin to sample the frame stacks of the pytest process, Austin's compatibility considerations apply to pytest-austin as well.

On Linux, the use of sudo is required, unless the CAP_SYS_PTRACE capability is granted to the Austin binary with, e.g.

sudo setcap cap_sys_ptrace+ep `which austin`

After that, sudo is no longer required for Austin to attach to and sample pytest.

On macOS, the use of sudo is also mandatory, unless the user invoking pytest belongs to the procmod group.
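
If you are unsure whether your user is in that group, you can list your group memberships with, e.g.

id -Gn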

Contribute

If you like pytest-austin and you find it useful, there are ways for you to contribute.

If you want to help with the development, have a look at the open issues and read the contributing guidelines before you open a pull request.

You can also contribute to the development of pytest-austin by becoming a sponsor and/or by buying me a coffee on BMC or by chipping in a few pennies on PayPal.Me.

