RafaelGSS / autobench

Licence: other
Benchmark your application on CI

Programming Languages

javascript
184084 projects - #8 most used programming language

Projects that are alternatives of or similar to autobench

S3 Benchmark
Measure Amazon S3's performance from any location.
Stars: ✭ 525 (+3181.25%)
Mutual labels:  benchmark, performance-analysis
best
🏆 Delightful Benchmarking & Performance Testing
Stars: ✭ 73 (+356.25%)
Mutual labels:  benchmark, performance-analysis
serializer-benchmark
A PHP benchmark application to compare PHP serializer libraries
Stars: ✭ 14 (-12.5%)
Mutual labels:  benchmark, performance-analysis
IGUANA
IGUANA is a benchmark execution framework for querying HTTP endpoints and CLI Applications such as Triple Stores. Contact: [email protected]
Stars: ✭ 22 (+37.5%)
Mutual labels:  benchmark, performance-analysis
Crossplatformdisktest
Windows, macOS and Android storage (HDD, SSD, RAM) speed testing/performance benchmarking app
Stars: ✭ 123 (+668.75%)
Mutual labels:  benchmark, performance-analysis
Dbtester
Distributed database benchmark tester
Stars: ✭ 214 (+1237.5%)
Mutual labels:  benchmark, performance-analysis
benchmark-malloc
Trace memory allocations and collect stats
Stars: ✭ 18 (+12.5%)
Mutual labels:  benchmark, performance-analysis
KLUE
📖 Korean NLU Benchmark
Stars: ✭ 420 (+2525%)
Mutual labels:  benchmark
benchbox
🧀 The Benchmark Testing Box
Stars: ✭ 19 (+18.75%)
Mutual labels:  benchmark
2020a SSH mapping NATL60
A challenge on the mapping of satellite altimeter sea surface height data organised by MEOM@IGE, Ocean-Next and CLS.
Stars: ✭ 17 (+6.25%)
Mutual labels:  benchmark
kubernetes-iperf3
Simple wrapper around iperf3 to measure network bandwidth from all nodes of a Kubernetes cluster
Stars: ✭ 80 (+400%)
Mutual labels:  benchmark
BinKit
Binary Code Similarity Analysis (BCSA) Benchmark
Stars: ✭ 54 (+237.5%)
Mutual labels:  benchmark
goanalyzer
improved go tool trace goroutine analysis
Stars: ✭ 30 (+87.5%)
Mutual labels:  performance-analysis
MVP Benchmark
MVP Benchmark for Multi-View Partial Point Cloud Completion and Registration
Stars: ✭ 74 (+362.5%)
Mutual labels:  benchmark
mini-nbody
A simple gravitational N-body simulation in less than 100 lines of C code, with CUDA optimizations.
Stars: ✭ 73 (+356.25%)
Mutual labels:  benchmark
Filipino-Text-Benchmarks
Open-source benchmark datasets and pretrained transformer models in the Filipino language.
Stars: ✭ 22 (+37.5%)
Mutual labels:  benchmark
hyperspectral-soilmoisture-dataset
Hyperspectral and soil-moisture data from a field campaign based on a soil sample. Karlsruhe (Germany), 2017.
Stars: ✭ 23 (+43.75%)
Mutual labels:  benchmark
micro bench
⏰ Dead simple, non-intrusive, real-time benchmarks
Stars: ✭ 13 (-18.75%)
Mutual labels:  benchmark
dgraph-bench
A benchmark program for dgraph.
Stars: ✭ 27 (+68.75%)
Mutual labels:  benchmark
react-native-css-in-js-benchmarks
CSS in JS Benchmarks for React Native
Stars: ✭ 46 (+187.5%)
Mutual labels:  benchmark

autobench


Automated benchmarking to catch performance regressions in HTTP applications.

Wraps autocannon and autocannon-compare to automate the benchmarking and monitoring of HTTP routes.

Installation

This is a Node.js module available through the npm registry. It can be installed using the npm or yarn command line tools.

npm i autobench

or globally

npm i -g autobench

Usage

autobench
# or directly
npx autobench

Set the DEBUG=autobench:* environment variable to see the application logs. Example:

DEBUG=autobench:debug autobench compare
DEBUG=autobench:info autobench compare
DEBUG=autobench:* autobench compare

Config file

To use autobench, the project must have an autobench.yml config file.

The config file parameters are described below:

# Name of the project [OPTIONAL]
name: 'Autobench Example'
# Benchmark folder used to store and retrieve benchmarks. [REQUIRED]
benchFolder: 'bench'
# Root URL to benchmark against. [REQUIRED] It can also be set through the `AUTOBENCH_URL` environment variable.
url: 'http://localhost:3000'
# Number of connections. See https://github.com/mcollina/autocannon for further explanation. [OPTIONAL]
connections: 10
# Number of pipelined requests. See https://github.com/mcollina/autocannon for further explanation. [OPTIONAL]
pipelining: 1
# Duration of the benchmark. See https://github.com/mcollina/autocannon for further explanation. [OPTIONAL]
duration: 30
# Group of routes to benchmark. [REQUIRED]
benchmarks:
  # Benchmark route name. [REQUIRED]
  - name: 'request 1'
    # Route path. [REQUIRED]
    path: '/'
    # Method [OPTIONAL] - Default `GET`
    method: 'POST'
    # Request headers [OPTIONAL]
    headers:
      Content-type: 'application/json'
    # Request body [OPTIONAL] - It's automatically parsed to a JSON object.
    body:
      example: 'true'
      email: 'hey-[<id>]@example.com'
    # [OPTIONAL] When this field is set to `true`, `[<id>]` is replaced with a generated HyperID at runtime.
    idReplacement: true

  - name: 'request 2'
    path: '/slow'

See the autobench.yml file for examples.
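
For a quick local test, any HTTP server listening at the configured url will do. Below is a minimal sketch (illustrative only, not shipped with autobench) of a Node.js server exposing the two routes from the config above:

// Minimal server matching the example config: a POST-friendly root route
// and an artificially slow route. Hypothetical example, not part of autobench.
const http = require('http')

const server = http.createServer((req, res) => {
  if (req.url === '/slow') {
    // Simulates a slow endpoint for the 'request 2' benchmark.
    setTimeout(() => res.end('slow'), 100)
    return
  }
  // Handles the 'request 1' benchmark (POST / with a JSON body).
  res.end('ok')
})

// Matches url: 'http://localhost:3000' in autobench.yml
server.listen(3000)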

Compare

Runs a benchmark and compares the results with the stored benchmark. A previous benchmark must already be stored in the benchFolder; see Create below for how to generate one. A typical sequence is shown after this paragraph.
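
A sketch of the typical sequence, assuming a server is already listening at the configured URL:

# Store a baseline once, then compare later runs against it.
AUTOBENCH_URL=http://localhost:3000 autobench create
AUTOBENCH_URL=http://localhost:3000 autobench compare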

Options:

| Option | Description | Full command |
| ------ | ----------- | ------------ |
| -s | When a performance regression is identified, an autobench-review.md file is created with the summary | autobench compare -s |

autobench compare [-s]

The autobench-review.md looks like:

## Performance Regression ⚠️

---
The previous benchmark for request-1 was significantly more performant than the one from this PR.

- **Router**: request-1
- **Requests Diff**: 10%
- **Throughput Diff**: 10%
- **Latency Diff**: 10%

---
The previous benchmark for request-2 was significantly more performant than the one from this PR.

- **Router**: request-2
- **Requests Diff**: 20%
- **Throughput Diff**: 20%
- **Latency Diff**: 20%

Create

Stores or overwrites the results in the benchFolder. Usually it should be used to update the stored benchmark to the latest result, for instance after each merged PR.

autobench create
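
Since autobench is meant to run on CI, a common setup is to compare on pull requests and refresh the stored baseline on merges. The workflow below is a sketch for GitHub Actions (illustrative only; the `start` script and the handling of the refreshed baseline are assumptions, not part of autobench):

# Hypothetical GitHub Actions workflow.
name: autobench
on:
  pull_request:
  push:
    branches: [main]
jobs:
  bench:
    runs-on: ubuntu-latest
    env:
      AUTOBENCH_URL: 'http://localhost:3000'
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
      - run: npm ci
      # Assumes a `start` script that boots the HTTP server; give it a moment to come up.
      - run: npm start & sleep 3
      # On pull requests, write autobench-review.md if a regression is detected.
      - run: npx autobench compare -s
        if: github.event_name == 'pull_request'
      # On merges to main, refresh the stored baseline (committing the
      # benchFolder back to the repository is left out of this sketch).
      - run: npx autobench create
        if: github.event_name == 'push'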

Examples

See autobench-example for further details.
