
ionelmc / Pytest Benchmark

License: BSD-2-Clause
py.test fixture for benchmarking code

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives to, or similar to, Pytest Benchmark

Bench Scripts
A compilation of Linux server benchmarking scripts.
Stars: ✭ 873 (+19.59%)
Mutual labels:  performance, benchmark, benchmarking
Sysbench Docker Hpe
Sysbench Dockerfiles and Scripts for VM and Container benchmarking MySQL
Stars: ✭ 14 (-98.08%)
Mutual labels:  performance, benchmark, benchmarking
Benchmarkdotnet
Powerful .NET library for benchmarking
Stars: ✭ 7,138 (+877.81%)
Mutual labels:  performance, benchmark, benchmarking
Pytest Django Queries
Generate performance reports from your django database performance tests.
Stars: ✭ 54 (-92.6%)
Mutual labels:  benchmark, benchmarking, pytest
Phoronix Test Suite
The Phoronix Test Suite open-source, cross-platform automated testing/benchmarking software.
Stars: ✭ 1,339 (+83.42%)
Mutual labels:  performance, benchmark, benchmarking
Jsperf.com
jsperf.com v2. https://github.com/h5bp/lazyweb-requests/issues/174
Stars: ✭ 1,178 (+61.37%)
Mutual labels:  performance, benchmark, benchmarking
Web Tooling Benchmark
JavaScript benchmark for common web developer workloads
Stars: ✭ 290 (-60.27%)
Mutual labels:  performance, benchmark, benchmarking
Sltbench
C++ benchmark tool. Practical, stable and fast performance testing framework.
Stars: ✭ 137 (-81.23%)
Mutual labels:  performance, benchmark, benchmarking
Ezfio
Simple NVME/SAS/SATA SSD test framework for Linux and Windows
Stars: ✭ 91 (-87.53%)
Mutual labels:  performance, benchmark, benchmarking
Karma Benchmark
A Karma plugin to run Benchmark.js over multiple browsers with CI compatible output.
Stars: ✭ 88 (-87.95%)
Mutual labels:  performance, benchmark, benchmarking
Gatling Dubbo
A gatling plugin for running load tests on Apache Dubbo(https://github.com/apache/incubator-dubbo) and other java ecosystem.
Stars: ✭ 131 (-82.05%)
Mutual labels:  performance, benchmark, benchmarking
Are We Fast Yet
Are We Fast Yet? Comparing Language Implementations with Objects, Closures, and Arrays
Stars: ✭ 161 (-77.95%)
Mutual labels:  performance, benchmark, benchmarking
beapi-bench
Tool for benchmarking apis. Uses ApacheBench(ab) to generate data and gnuplot for graphing. Adding new features almost daily
Stars: ✭ 16 (-97.81%)
Mutual labels:  benchmarking, benchmark
LuaJIT-Benchmarks
LuaJIT Benchmark tests
Stars: ✭ 20 (-97.26%)
Mutual labels:  benchmarking, benchmark
bench
⏱️ Reliable performance measurement for Go programs. All in one design.
Stars: ✭ 33 (-95.48%)
Mutual labels:  benchmarking, benchmark
Unchase.FluentPerformanceMeter
🔨 Make the exact performance measurements of the public methods for public classes using this NuGet Package with fluent interface. Requires .Net Standard 2.0+. It is an Open Source project under Apache-2.0 License.
Stars: ✭ 33 (-95.48%)
Mutual labels:  benchmarking, benchmark
python-pytest-harvest
Store data created during your `pytest` tests execution, and retrieve it at the end of the session, e.g. for applicative benchmarking purposes.
Stars: ✭ 44 (-93.97%)
Mutual labels:  benchmark, pytest
CARLA
CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms
Stars: ✭ 166 (-77.26%)
Mutual labels:  benchmarking, benchmark
Frameworkbenchmarks
Source for the TechEmpower Framework Benchmarks project
Stars: ✭ 6,157 (+743.42%)
Mutual labels:  performance, benchmark
best
🏆 Delightful Benchmarking & Performance Testing
Stars: ✭ 73 (-90%)
Mutual labels:  benchmarking, benchmark

Overview
========

.. start-badges

.. list-table::
    :stub-columns: 1

    * - docs
      - |docs| |gitter|
    * - tests
      - | |travis| |appveyor| |requires|
        | |coveralls| |codecov|
    * - package
      - | |version| |wheel| |supported-versions| |supported-implementations|
        | |commits-since|

.. |docs| image:: https://readthedocs.org/projects/pytest-benchmark/badge/?style=flat
    :target: https://readthedocs.org/projects/pytest-benchmark
    :alt: Documentation Status

.. |gitter| image:: https://badges.gitter.im/ionelmc/pytest-benchmark.svg
    :alt: Join the chat at https://gitter.im/ionelmc/pytest-benchmark
    :target: https://gitter.im/ionelmc/pytest-benchmark

.. |travis| image:: https://api.travis-ci.com/ionelmc/pytest-benchmark.svg?branch=master
    :alt: Travis-CI Build Status
    :target: https://travis-ci.com/github/ionelmc/pytest-benchmark

.. |appveyor| image:: https://ci.appveyor.com/api/projects/status/github/ionelmc/pytest-benchmark?branch=master&svg=true
    :alt: AppVeyor Build Status
    :target: https://ci.appveyor.com/project/ionelmc/pytest-benchmark

.. |requires| image:: https://requires.io/github/ionelmc/pytest-benchmark/requirements.svg?branch=master
    :alt: Requirements Status
    :target: https://requires.io/github/ionelmc/pytest-benchmark/requirements/?branch=master

.. |coveralls| image:: https://coveralls.io/repos/ionelmc/pytest-benchmark/badge.svg?branch=master&service=github
    :alt: Coverage Status
    :target: https://coveralls.io/r/ionelmc/pytest-benchmark

.. |codecov| image:: https://codecov.io/gh/ionelmc/pytest-benchmark/branch/master/graphs/badge.svg?branch=master
    :alt: Coverage Status
    :target: https://codecov.io/github/ionelmc/pytest-benchmark

.. |version| image:: https://img.shields.io/pypi/v/pytest-benchmark.svg
    :alt: PyPI Package latest release
    :target: https://pypi.org/project/pytest-benchmark

.. |wheel| image:: https://img.shields.io/pypi/wheel/pytest-benchmark.svg
    :alt: PyPI Wheel
    :target: https://pypi.org/project/pytest-benchmark

.. |supported-versions| image:: https://img.shields.io/pypi/pyversions/pytest-benchmark.svg
    :alt: Supported versions
    :target: https://pypi.org/project/pytest-benchmark

.. |supported-implementations| image:: https://img.shields.io/pypi/implementation/pytest-benchmark.svg
    :alt: Supported implementations
    :target: https://pypi.org/project/pytest-benchmark

.. |commits-since| image:: https://img.shields.io/github/commits-since/ionelmc/pytest-benchmark/v3.2.3.svg
    :alt: Commits since latest release
    :target: https://github.com/ionelmc/pytest-benchmark/compare/v3.2.3...master

.. end-badges

A pytest fixture for benchmarking code. It will group the tests into rounds that are calibrated to the chosen timer.

See calibration_ and FAQ_.

* Free software: BSD 2-Clause License

Installation
============

::

    pip install pytest-benchmark

Documentation
=============

For the latest release: `pytest-benchmark.readthedocs.org/en/stable <http://pytest-benchmark.readthedocs.org/en/stable/>`_.

For the master branch (may include documentation fixes): `pytest-benchmark.readthedocs.io/en/latest <http://pytest-benchmark.readthedocs.io/en/latest/>`_.

Examples
========

But first, a prologue:

This plugin tightly integrates into pytest. To use this effectively you should know a thing or two about pytest first. 
Take a look at the `introductory material <http://docs.pytest.org/en/latest/getting-started.html>`_ 
or watch `talks <http://docs.pytest.org/en/latest/talks.html>`_.

A few notes:

* This plugin benchmarks functions and only functions. If you want to measure a block of code
  or a whole program, you will need to write a wrapper function.
* In a test you can only benchmark one function. If you want to benchmark many functions, write more tests or
  use `parametrization <http://docs.pytest.org/en/latest/parametrize.html>`_.
* To run the benchmarks, simply use ``pytest`` to run your "tests". The plugin will automatically do the
  benchmarking and generate a result table. Run ``pytest --help`` for more details.

This plugin provides a ``benchmark`` fixture: a callable object that will benchmark any function passed to it.

Example:

.. code-block:: python

    import time


    def something(duration=0.000001):
        """
        Function that needs some serious benchmarking.
        """
        time.sleep(duration)
        # You may return anything you want, like the result of a computation
        return 123


    def test_my_stuff(benchmark):
        # benchmark something
        result = benchmark(something)

        # Extra code, to verify that the run completed correctly.
        # Sometimes you may want to check the result; fast functions
        # are no good if they return incorrect results :-)
        assert result == 123

You can also pass extra arguments:

.. code-block:: python

    def test_my_stuff(benchmark):
        benchmark(time.sleep, 0.02)

Or even keyword arguments (note that ``time.sleep`` itself does not accept
keyword arguments, so here we reuse the ``something`` function from above):

.. code-block:: python

    def test_my_stuff(benchmark):
        benchmark(something, duration=0.02)

Another pattern seen in the wild that is not recommended for micro-benchmarks (very fast code), but may be convenient:

.. code-block:: python

    def test_my_stuff(benchmark):
        @benchmark
        def something():  # unnecessary function call
            time.sleep(0.000001)

A better way is to just benchmark the final function:

.. code-block:: python

    def test_my_stuff(benchmark):
        benchmark(time.sleep, 0.000001)  # way more accurate results!

If you need fine-grained control over how the benchmark is run (like a setup function, or exact control of iterations and rounds), there's a special mode: pedantic_.

.. code-block:: python

    def my_special_setup():
        ...

    def test_with_setup(benchmark):
        benchmark.pedantic(something, setup=my_special_setup,
                           args=(1, 2, 3), kwargs={'foo': 'bar'},
                           iterations=10, rounds=100)

Screenshots
===========

Normal run:

.. image:: https://github.com/ionelmc/pytest-benchmark/raw/master/docs/screenshot.png
    :alt: Screenshot of pytest summary

Compare mode (--benchmark-compare):

.. image:: https://github.com/ionelmc/pytest-benchmark/raw/master/docs/screenshot-compare.png
    :alt: Screenshot of pytest summary in compare mode

Histogram (--benchmark-histogram):

.. image:: https://cdn.rawgit.com/ionelmc/pytest-benchmark/94860cc8f47aed7ba4f9c7e1380c2195342613f6/docs/sample-tests_test_normal.py_test_xfast_parametrized%5B0%5D.svg
    :alt: Histogram sample

..

Also, it has `nice tooltips <https://cdn.rawgit.com/ionelmc/pytest-benchmark/master/docs/sample.svg>`_.

Development
===========

To run all the tests run::

    tox

Credits
=======

.. _FAQ: http://pytest-benchmark.readthedocs.org/en/latest/faq.html
.. _calibration: http://pytest-benchmark.readthedocs.org/en/latest/calibration.html
.. _pedantic: http://pytest-benchmark.readthedocs.org/en/latest/pedantic.html
