
composewell / bench-show

Licence: other
Show, plot and compare benchmark results

Programming language: Haskell

Projects that are alternatives to, or similar to, bench-show

Benchmarknet
Benchmark for testing the reliable UDP networking solutions
Stars: ✭ 206 (+1371.43%)
Mutual labels:  benchmarking
anybench
CPU Benchmarks Set
Stars: ✭ 54 (+285.71%)
Mutual labels:  benchmarking
chest xray 14
Benchmarks on NIH Chest X-ray 14 dataset
Stars: ✭ 67 (+378.57%)
Mutual labels:  benchmarking
Ali
Generate HTTP load and plot the results in real-time
Stars: ✭ 3,055 (+21721.43%)
Mutual labels:  benchmarking
go-recipes
🦩 Tools for Go projects
Stars: ✭ 2,490 (+17685.71%)
Mutual labels:  benchmarking
nitroml
NitroML is a modular, portable, and scalable model-quality benchmarking framework for Machine Learning and Automated Machine Learning (AutoML) pipelines.
Stars: ✭ 40 (+185.71%)
Mutual labels:  benchmarking
Mangohud
A Vulkan and OpenGL overlay for monitoring FPS, temperatures, CPU/GPU load and more. Discord: https://discordapp.com/invite/Gj5YmBb
Stars: ✭ 2,994 (+21285.71%)
Mutual labels:  benchmarking
plf nanotimer
A simple C++ 03/11/etc timer class for ~microsecond-precision cross-platform benchmarking. The implementation is as limited and as simple as possible to create the lowest amount of overhead.
Stars: ✭ 108 (+671.43%)
Mutual labels:  benchmarking
esperf
ElasticSearch Performance Testing tool
Stars: ✭ 50 (+257.14%)
Mutual labels:  benchmarking
lein-jmh
Leiningen plugin for jmh-clojure
Stars: ✭ 17 (+21.43%)
Mutual labels:  benchmarking
Pyperform
An easy and convenient way to performance-test Python code.
Stars: ✭ 221 (+1478.57%)
Mutual labels:  benchmarking
fahbench
Folding@home GPU benchmark
Stars: ✭ 32 (+128.57%)
Mutual labels:  benchmarking
dyngen
Simulating single-cell data using gene regulatory networks 📠
Stars: ✭ 59 (+321.43%)
Mutual labels:  benchmarking
Profilinggo
A quick tour (or reminder) of Go performance tools
Stars: ✭ 219 (+1464.29%)
Mutual labels:  benchmarking
ldbc snb docs
Specification of the LDBC Social Network Benchmark suite
Stars: ✭ 39 (+178.57%)
Mutual labels:  benchmarking
Node Ab
A command tool to test the performance of HTTP services.
Stars: ✭ 200 (+1328.57%)
Mutual labels:  benchmarking
cb-tumblebug
Cloud-Barista Multi-Cloud Infra Management Framework
Stars: ✭ 33 (+135.71%)
Mutual labels:  benchmarking
benchmark-thrift
An open source application designed to load test Thrift applications
Stars: ✭ 41 (+192.86%)
Mutual labels:  benchmarking
ILAMB
Python software used in the International Land Model Benchmarking (ILAMB) project
Stars: ✭ 28 (+100%)
Mutual labels:  benchmarking
benchmark VAE
Unifying Variational Autoencoder (VAE) implementations in Pytorch (NeurIPS 2022)
Stars: ✭ 1,211 (+8550%)
Mutual labels:  benchmarking

bench-show


Generate text reports and graphical charts from the benchmark results generated by gauge or criterion and stored in a CSV file. This tool is especially useful when you have many benchmarks or if you want to compare benchmarks across multiple packages. You can generate many interesting reports including:

  • Show individual reports for all the measured fields, e.g. time taken, peak memory usage, allocations, and many other fields measured by gauge
  • Sort benchmark results on a specified criterion, e.g. you may want to see the biggest CPU hoggers or the biggest memory hoggers on top
  • Across two benchmark runs (e.g. before and after a change), show all the operations that regressed by more than x%, in descending order, so that we can quickly identify and fix performance problems in our application
  • Across two (or more) packages providing similar functionality, show all the operations where the performance differs by more than 10%, so that we can critically analyze the packages and choose the right one

Quick Start

Use gauge or criterion to generate a results.csv file, and then use either the bench-show executable or the library APIs to generate textual or graphical reports.
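
For example, if your benchmark executable built with gauge or criterion is named bench-prog (a hypothetical name used here for illustration), both tools can typically write their results to a CSV file via the --csv flag:

$ bench-prog --csv results.csv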

Executable

Use bench-show executable with report and graph sub-commands:

$ bench-show report results.csv
$ bench-show graph results.csv output

For advanced usage, control the generated report using the CLI flags.

Library

Use report and graph library functions:

report "results.csv"  Nothing defaultConfig
graph  "results.csv" "output" defaultConfig

For advanced usage, control the generated report by modifying the defaultConfig.
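
As a minimal sketch (assuming the BenchShow module name and the Config fields shown later in this document), a customized report could look like:

import BenchShow

main :: IO ()
main =
    report "results.csv" Nothing
        defaultConfig { presentation = Fields }  -- one column per measured field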

Reports and Charts

report with Fields presentation style generates a multi-column report. We can select many fields from a gauge raw report. Units of the fields are automatically determined based on the range of values:

$ bench-show --presentation Fields report results.csv
report "results.csv" Nothing defaultConfig { presentation = Fields }
Benchmark     time(μs) maxrss(MiB)
------------- -------- -----------
vector/fold     641.62        2.75
streamly/fold   639.96        2.75
vector/map      638.89        2.72
streamly/map    653.36        2.66
vector/zip      651.42        2.58
streamly/zip    644.33        2.59

graph generates one bar chart per field:

$ bench-show --presentation Fields graph results.csv
graph "results.csv" "output" defaultConfig

When the input file contains results from a single benchmark run, by default all the benchmarks are placed in a single benchmark group named "default".

[Chart: Median Time Grouped]

Grouping

Let's write a benchmark classifier to put the streamly and vector benchmarks in their own groups:

   -- needs: import Data.List.Split (splitOn)
   classifier name =
       case splitOn "/" name of
           grp : bench -> Just (grp, concat bench)
           _           -> Nothing
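
For instance, this classifier maps the benchmark name "streamly/fold" to the group "streamly" and the benchmark "fold":

   >>> classifier "streamly/fold"
   Just ("streamly","fold")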

Now we can show the two benchmark groups as separate columns. We can generate reports comparing different benchmark fields (e.g. time and maxrss) for all the groups:

   report "results.csv" Nothing
     defaultConfig { classifyBenchmark = classifier }
(time)(Median)
Benchmark streamly(μs) vector(μs)
--------- ------------ ----------
fold            639.96     641.62
map             653.36     638.89
zip             644.33     651.42

We can do the same graphically as well, just replace report with graph in the code above. Each group is placed as a cluster on the graph. Multiple clusters are placed side by side (i.e. on the same scale) for easy comparison. For example:

[Chart: Median Time Grouped]
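
A sketch of the call that produces this grouped chart, using the same config as the report above with graph in place of report (the "output" argument is as in the Quick Start):

   graph "results.csv" "output"
     defaultConfig { classifyBenchmark = classifier }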

Regression, Percentage Difference and Sorting

We can append benchmark results from multiple runs to the same file and then compare those runs. For example, we can run the benchmarks before and after a change and report the regressions, sorted by percentage change:
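
For example (a sketch using the hypothetical bench-prog executable from the Quick Start, and assuming your benchmarking tool appends to an existing CSV file rather than overwriting it; otherwise concatenate the result files yourself), two runs could be recorded as:

$ bench-prog --csv results.csv    # baseline run
  ... make the change and rebuild ...
$ bench-prog --csv results.csv    # second run, appended to the same file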

Given a results file with two runs, this code generates the report that follows:

   report "results.csv" Nothing
     defaultConfig
         { classifyBenchmark = classifier
         , presentation = Groups PercentDiff
         , selectBenchmarks = \f ->
              reverse
              $ map fst
              $ sortBy (comparing snd)
              $ either error id $ f (ColumnIndex 1) Nothing
         }
(time)(Median)(Diff using min estimator)
Benchmark streamly(0)(μs)(base) streamly(1)(%)(-base)
--------- --------------------- ---------------------
zip                      644.33                +23.28
map                      653.36                 +7.65
fold                     639.96                -15.63

This tells us that in the second run the worst affected benchmark is zip, which took 23.28% more time than the baseline.

Graphically:

[Chart: Median Time Regression]

Full Documentation and examples

Contributions and Feedback

Contributions are welcome! Please see the TODO.md file or the existing issues if you want to pick up something to work on.

Any feedback on improvements or the direction of the package is welcome. You can always send an email to the maintainer or raise an issue for anything you want to suggest or discuss, or send a PR for any change that you would like to make.
