smarr / ReBench

License: MIT License
Execute and document benchmarks reproducibly.

Programming Languages

python
shell

Projects that are alternatives of or similar to ReBench

Reprozip
ReproZip is a tool that simplifies the process of creating reproducible experiments from command-line executions, a frequently-used common denominator in computational science.
Stars: ✭ 231 (+381.25%)
Mutual labels:  science, reproducibility
benchmark VAE
Unifying Variational Autoencoder (VAE) implementations in Pytorch (NeurIPS 2022)
Stars: ✭ 1,211 (+2422.92%)
Mutual labels:  benchmarking, reproducibility
emp
🔬 Empirical CLI
Stars: ✭ 42 (-12.5%)
Mutual labels:  science, benchmarking
blas-benchmarks
Timing results for BLAS (Basic Linear Algebra Subprograms) libraries in R
Stars: ✭ 24 (-50%)
Mutual labels:  benchmarking
git-ghost
Synchronize your working directory efficiently to a remote place without committing the changes.
Stars: ✭ 61 (+27.08%)
Mutual labels:  reproducibility
Atomic-Periodic-Table.Android
Atomic - Periodic Table
Stars: ✭ 33 (-31.25%)
Mutual labels:  science
cplot
🌈 Plot complex functions
Stars: ✭ 75 (+56.25%)
Mutual labels:  science
open-retractions
‼️ 📄 🔍 an API and web interface to check if a paper has been retracted
Stars: ✭ 43 (-10.42%)
Mutual labels:  science
reproducible
A set of tools for R that enhance reproducibility beyond package management
Stars: ✭ 33 (-31.25%)
Mutual labels:  reproducibility
theodolite
Theodolite is a framework for benchmarking the horizontal and vertical scalability of cloud-native applications.
Stars: ✭ 20 (-58.33%)
Mutual labels:  benchmarking
vhive
vHive: Open-source framework for serverless experimentation
Stars: ✭ 134 (+179.17%)
Mutual labels:  benchmarking
witty.er
What is true, thank you, ernestly. A large variant benchmarking tool analogous to hap.py for small variants.
Stars: ✭ 22 (-54.17%)
Mutual labels:  benchmarking
scooby
🐶 🕵️ Great Dane turned Python environment detective
Stars: ✭ 36 (-25%)
Mutual labels:  reproducibility
ControverSciences
ControverSciences is a collaborative portal that brings together scientific publications around controversial questions and makes them accessible to everyone.
Stars: ✭ 14 (-70.83%)
Mutual labels:  science
r10e-ds-py
Reproducible Data Science in Python (SciPy 2019 Tutorial)
Stars: ✭ 12 (-75%)
Mutual labels:  reproducibility
perfume
Interactive performance benchmarking in Jupyter
Stars: ✭ 33 (-31.25%)
Mutual labels:  benchmarking
nlp-qrmine
🔦 Qualitative Research support tools in Python
Stars: ✭ 28 (-41.67%)
Mutual labels:  research-tool
kepler orrery
Make a Kepler orrery gif or movie of all the Kepler multi-planet systems
Stars: ✭ 91 (+89.58%)
Mutual labels:  science
Reproducibilty-Challenge-ECANET
Unofficial Implementation of ECANets (CVPR 2020) for the Reproducibility Challenge 2020.
Stars: ✭ 27 (-43.75%)
Mutual labels:  reproducibility
best
🏆 Delightful Benchmarking & Performance Testing
Stars: ✭ 73 (+52.08%)
Mutual labels:  benchmarking

ReBench: Execute and Document Benchmarks Reproducibly


ReBench is a tool to run and document benchmark experiments. Currently, it is mostly used for benchmarking language implementations, but it can be used to monitor the performance of all kinds of other applications and programs, too.

The ReBench configuration format is a text format based on YAML. A configuration file defines how to build and execute a set of experiments, i.e., benchmarks. It describes which executable is used, which parameters are given to the benchmarks, and how many iterations are needed to obtain statistically reliable results.

With this approach, the configuration contains all benchmark-specific information to reproduce a benchmark run. However, it does not capture the whole system.

The data of all benchmark runs is recorded in a data file for later analysis. Importantly for long-running experiments, benchmark runs can be aborted and continued at a later time.

ReBench focuses on the execution aspect and does not provide advanced analysis facilities itself. Instead, the recorded results should be processed by dedicated tools, such as scripts for statistical analysis in R or Python, or Codespeed for continuous performance tracking.
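
For example, a recorded data file such as example.data (see the configuration below) can be summarized with a few lines of Python. This is only a sketch: it assumes a tab-separated layout with '#'-prefixed comment lines and column names such as benchmark and value, which should be checked against the header of the actual file.

import pandas as pd

# Load a ReBench data file; the tab-separated layout and '#' comment lines
# are assumptions and should be verified against the actual file.
data = pd.read_csv('example.data', sep='\t', comment='#')
print(data.columns.tolist())  # inspect the real column names first

# Summarize measurements per benchmark; 'benchmark' and 'value' are assumed
# column names and may need to be adjusted.
summary = data.groupby('benchmark')['value'].agg(['count', 'mean', 'std'])
print(summary)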

The documentation for ReBench is hosted at https://rebench.readthedocs.io/.

Goals and Features

ReBench is designed to

  • enable reproduction of experiments;
  • document all benchmark parameters;
  • provide a flexible execution model, with support for interrupting and continuing benchmarking;
  • enable the definition of complex sets of comparisons and their flexible execution;
  • report results to continuous performance monitoring systems, e.g., Codespeed;
  • provide basic support for building/compiling benchmarks/experiments on demand;
  • be extensible to parse output of custom benchmark harnesses.
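
The last point means ReBench does not dictate a measurement format: a gauge adapter turns whatever a harness prints into data points. As a rough, standalone illustration of that kind of parsing (this is not ReBench's actual adapter API; the documentation describes how to provide a custom gauge adapter), consider a hypothetical harness that prints lines such as "Bench1 completed in 3.25 ms":

import re

# Hypothetical harness output format; the 'completed in ... ms' wording is
# made up purely for illustration.
RESULT_LINE = re.compile(r'^(\w+) completed in ([0-9.]+) ms$')

def parse_harness_output(output):
    """Extract (benchmark name, runtime in ms) pairs from harness output."""
    results = []
    for line in output.splitlines():
        match = RESULT_LINE.match(line.strip())
        if match:
            results.append((match.group(1), float(match.group(2))))
    return results

print(parse_harness_output('Bench1 completed in 3.25 ms\nBench2 completed in 7.50 ms'))
# [('Bench1', 3.25), ('Bench2', 7.5)]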

ReBench Denoise

Denoise configures a Linux system for benchmarking. It adapts parameters of the CPU frequency management and task scheduling to reduce some of the variability that can cause widely different benchmark results for the same experiment.

Denoise is inspired by Krun, which has many more features to carefully minimize possible interference. Krun is the tool of choice if the most reliable results are required. ReBench adapts only a subset of these parameters, while staying self-contained and minimizing external dependencies.

Non-Goals

ReBench isn't

  • a framework for (micro)benchmarks. Instead, it relies on existing harnesses and can be extended to parse their output.
  • a performance analysis tool. It is meant to execute experiments and record the corresponding measurements.
  • a data analysis tool. It provides only a bare minimum of statistics, but has an easily parseable data format that can be processed, e.g., with R.

Installation

ReBench is implemented in Python and can be installed via pip:

pip install rebench

To reduce noise generated by the system, rebench-denoise depends on:

  • sudo rights. rebench will attempt to determine suitable configuration parameters and suggest them. This includes allowing the execution of rebench-denoise via sudo without a password.
  • cpuset to reserve cores for benchmarking. On Ubuntu: apt install cpuset

Please note that rebench-denoise is only tested on Ubuntu. It is designed to degrade gracefully and report the expected implications when it cannot adapt system settings. See the docs for details.

Usage

A minimal configuration file looks like this:

# this run definition will be chosen if no parameters are given to rebench
default_experiment: all
default_data_file: 'example.data'

# a set of suites with different benchmarks and possibly different settings
benchmark_suites:
    ExampleSuite:
        gauge_adapter: RebenchLog
        command: Harness %(benchmark)s %(input)s %(variable)s
        input_sizes: [2, 10]
        variable_values:
            - val1
        benchmarks:
            - Bench1
            - Bench2

# a set of executables for the benchmark execution
executors:
    MyBin1:
        path: bin
        executable: test-vm1.py %(cores)s
        cores: [1]
    MyBin2:
        path: bin
        executable: test-vm2.py

# combining benchmark suites and executions
experiments:
    Example:
        suites:
          - ExampleSuite
        executions:
            - MyBin1
            - MyBin2

Saved as test.conf, this configuration could be executed with ReBench as follows:

rebench test.conf
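
The configuration above assumes that bin/test-vm1.py and bin/test-vm2.py run a Harness program which prints its measurements in a form the RebenchLog gauge adapter understands. As a hypothetical sketch (the exact output format expected by RebenchLog is specified in the ReBench documentation and should be checked there), such a harness might look roughly like this:

#!/usr/bin/env python3
# Hypothetical harness sketch matching the command in the configuration above:
#   Harness %(benchmark)s %(input)s %(variable)s
# It is not part of ReBench; the result line follows the commonly used
# "<benchmark>: iterations=<n> runtime: <time>us" style, which should be
# verified against the RebenchLog adapter documentation.
import sys
import time

def run_once(benchmark, input_size, variable):
    # Placeholder workload; a real harness would execute the actual benchmark.
    start = time.perf_counter()
    sum(i * i for i in range(int(input_size) * 100000))
    return (time.perf_counter() - start) * 1_000_000  # microseconds

if __name__ == '__main__':
    benchmark, input_size, variable = sys.argv[1], sys.argv[2], sys.argv[3]
    runtime_us = run_once(benchmark, input_size, variable)
    print('%s: iterations=1 runtime: %dus' % (benchmark, runtime_us))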

See the documentation for details: https://rebench.readthedocs.io/.

Support and Contributions

In case you encounter issues, please feel free to open an issue so that we can help.

For contributions, we use the normal GitHub flow of pull requests, discussion, and revisions. For larger contributions, it is usually best to discuss them in an issue first.

Use in Academia

If you use ReBench for research and in academic publications, please consider citing it.

The preferred citation is:

@misc{ReBench:2018,
  author = {Marr, Stefan},
  doi = {10.5281/zenodo.1311762},
  month = {August},
  note = {Version 1.0},
  publisher = {GitHub},
  title = {ReBench: Execute and Document Benchmarks Reproducibly},
  year = 2018
}

Some publications that have been using ReBench include:
