
disruptek / criterion

Licence: other
statistics-driven micro-benchmarking framework

Programming Languages

nim
578 projects

Projects that are alternatives of or similar to criterion

Node Frameworks Benchmark
Simple HTTP benchmark for different nodejs frameworks using wrk
Stars: ✭ 117 (+588.24%)
Mutual labels:  benchmark, micro
criterion-compare-action
⚡️📊 Compare the performance of Rust project branches
Stars: ✭ 16 (-5.88%)
Mutual labels:  benchmark, criterion
criterion-compare-action
⚡️📊 Compare the performance of Rust project branches
Stars: ✭ 37 (+117.65%)
Mutual labels:  benchmark, criterion
Web Frameworks
Which is the fastest web framework?
Stars: ✭ 6,125 (+35929.41%)
Mutual labels:  benchmark, measurement
Criterion.rs
Statistics-driven benchmarking library for Rust
Stars: ✭ 2,153 (+12564.71%)
Mutual labels:  benchmark, criterion
react-native-startup-time
measure startup time of your react-native app
Stars: ✭ 88 (+417.65%)
Mutual labels:  benchmark, measurement
graphql-benchmarks
GraphQL benchmarks using the-benchmarker framework.
Stars: ✭ 54 (+217.65%)
Mutual labels:  benchmark, measurement
rpc-bench
RPC Benchmark of gRPC, Aeron and KryoNet
Stars: ✭ 59 (+247.06%)
Mutual labels:  benchmark
benchmark-kit
phpbenchmarks.com kit to add your benchmark.
Stars: ✭ 31 (+82.35%)
Mutual labels:  benchmark
react-benchmark
A tool for benchmarking the render performance of React components
Stars: ✭ 99 (+482.35%)
Mutual labels:  benchmark
sets
Benchmarks for set data structures: hash sets, dawg's, bloom filters, etc.
Stars: ✭ 20 (+17.65%)
Mutual labels:  benchmark
map benchmark
Comprehensive benchmarks of C++ maps
Stars: ✭ 132 (+676.47%)
Mutual labels:  benchmark
snowman
Welcome to Snowman App – a Data Matching Benchmark Platform.
Stars: ✭ 25 (+47.06%)
Mutual labels:  benchmark
micro-plugins
go-micro plugins: auth (JWT + Casbin), and adding go-micro services to an Istio service mesh
Stars: ✭ 27 (+58.82%)
Mutual labels:  micro
CBLUE
CBLUE: A Chinese Biomedical Language Understanding Evaluation Benchmark
Stars: ✭ 379 (+2129.41%)
Mutual labels:  benchmark
httpit
A rapid http(s) benchmark tool written in Go
Stars: ✭ 156 (+817.65%)
Mutual labels:  benchmark
Java-Logging-Framework-Benchmark
Suite for benchmarking Java logging frameworks.
Stars: ✭ 16 (-5.88%)
Mutual labels:  benchmark
LFattNet
Attention-based View Selection Networks for Light-field Disparity Estimation
Stars: ✭ 41 (+141.18%)
Mutual labels:  benchmark
ftsb
Full Text Search Benchmark, a tool for comparing and evaluating full-text search engines.
Stars: ✭ 12 (-29.41%)
Mutual labels:  benchmark
php-simple-benchmark-script
Very simple script for testing the speed of PHP operations (rusoft repo mirror)
Stars: ✭ 50 (+194.12%)
Mutual labels:  benchmark

criterion

(badges: Test Matrix, latest GitHub release, minimum supported Nim version, license, buy me a coffee)

A statistics-driven micro-benchmarking framework heavily inspired by the wonderful criterion library for Haskell; originally created by LemonBoy.

Status

Works; the API is not yet 100% stable.

Example

import criterion

var cfg = newDefaultConfig()

benchmark cfg:
  func fib(n: int): int =
    case n
    of 0: 1
    of 1: 1
    else: fib(n-1) + fib(n-2)

  # on nim-1.0 you have to use {.measure: [].} instead
  proc fib5() {.measure.} =
    var n = 5
    blackBox fib(n)

  # ... equivalent to ...

  iterator argFactory(): int =
    for x in [5]:
      yield x

  proc fibN(x: int) {.measure: argFactory.} =
    blackBox fib(x)

  # ... equivalent to ...

  proc fibN1(x: int) {.measure: [5].} =
    blackBox fib(x)

Gives the following output (see the `fib` screenshot in the repository).

A bit too much info? Just set cfg.brief = true and the results will be output in a condensed format:

(see the `brief` screenshot in the repository)

Much easier to parse, isn't it?
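Enabling it is a one-line change to the configuration from the first example:

```nim
cfg.brief = true  # emit the condensed report instead of the full statistics
```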

If you need to pass more than a single argument to your benchmark fixture, just use a tuple: its elements are automagically unpacked at compile-time.

import criterion

let cfg = newDefaultConfig()

benchmark cfg:
  proc foo(x: int, y: float) {.measure: [(1,1.0),(2,2.0)].} =
    discard x.float + y

Export the measurements

If you need the measurement data in order to compare different benchmarks, to plot the results or to post-process them you can do so by adding a single line to your benchmark setup:

var cfg = newDefaultConfig()

# Your usual config goes here...
cfg.outputPath = "my_benchmark.json"

benchmark(cfg):
  # Your benchmark code goes here...

The data will be dumped once the block has been completed into a json file that's ready for consumption by other tools.
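If you want a quick look at the dump from Nim itself, a minimal sketch using the standard `std/json` module works; the exact schema of the file is whatever criterion emits, so this just parses and pretty-prints it without assuming any field names:

```nim
import std/json

# Load the exported measurements and print them in readable form.
let data = parseJson(readFile("my_benchmark.json"))
echo data.pretty()
```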

Documentation

See the documentation for the criterion module as generated directly from the source.

More Test Output

The source of the `many` test produces the output shown in the `many` screenshot in the repository.

License

MIT
