martinus / Nanobench
License: MIT
Simple, fast, accurate single-header microbenchmarking functionality for C++11/14/17/20
Stars: ✭ 436
Projects that are alternatives to or similar to Nanobench
Face Landmarks Detection Benchmark
Face landmarks (fiducial points) detection benchmark
Stars: ✭ 348 (-20.18%)
Mutual labels: benchmark
Sysbench
Scriptable database and system performance benchmark
Stars: ✭ 4,268 (+878.9%)
Mutual labels: benchmark
Human Learn
Natural Intelligence is still a pretty good idea.
Stars: ✭ 323 (-25.92%)
Mutual labels: benchmark
Pcam
The PatchCamelyon (PCam) deep learning classification benchmark.
Stars: ✭ 340 (-22.02%)
Mutual labels: benchmark
Modclean
Remove unwanted files and directories from your node_modules folder
Stars: ✭ 309 (-29.13%)
Mutual labels: benchmark
Tf to trt image classification
Image classification with NVIDIA TensorRT from TensorFlow models.
Stars: ✭ 427 (-2.06%)
Mutual labels: benchmark
Medmnist
[ISBI'21] MedMNIST Classification Decathlon: A Lightweight AutoML Benchmark for Medical Image Analysis
Stars: ✭ 338 (-22.48%)
Mutual labels: benchmark
Benchmarks Of Javascript Package Managers
Benchmarks of JavaScript Package Managers
Stars: ✭ 388 (-11.01%)
Mutual labels: benchmark
Deeperforensics 1.0
[CVPR 2020] A Large-Scale Dataset for Real-World Face Forgery Detection
Stars: ✭ 338 (-22.48%)
Mutual labels: benchmark
Yet Another Bench Script
YABS - a simple bash script to estimate Linux server performance using fio, iperf3, & Geekbench
Stars: ✭ 348 (-20.18%)
Mutual labels: benchmark
Blurtestandroid
This is a simple App to test some blur algorithms on their visual quality and performance.
Stars: ✭ 396 (-9.17%)
Mutual labels: benchmark
Layoutframeworkbenchmark
Benchmark the performances of various Swift layout frameworks (autolayout, UIStackView, PinLayout, LayoutKit, FlexLayout, Yoga, ...)
Stars: ✭ 316 (-27.52%)
Mutual labels: benchmark
Ffi Overhead
Comparing the C FFI (foreign function interface) overhead of various programming languages
Stars: ✭ 387 (-11.24%)
Mutual labels: benchmark
ankerl::nanobench
ankerl::nanobench is a platform-independent microbenchmarking library for C++11/14/17/20.
```cpp
#define ANKERL_NANOBENCH_IMPLEMENT
#include <nanobench.h>

int main() {
    double d = 1.0;
    ankerl::nanobench::Bench().run("some double ops", [&] {
        d += 1.0 / d;
        if (d > 5.0) {
            d -= 5.0;
        }
        ankerl::nanobench::doNotOptimizeAway(d);
    });
}
```
The whole executable runs for ~60ms and prints
```
|               ns/op |                op/s |    err% |          ins/op |          cyc/op |    IPC |         bra/op |   miss% |     total | benchmark
|--------------------:|--------------------:|--------:|----------------:|----------------:|-------:|---------------:|--------:|----------:|:----------
|                7.52 |      132,948,239.79 |    1.1% |            6.65 |           24.07 |  0.276 |           1.00 |    8.9% |      0.00 | `some double ops`
```

which GitHub renders as:

| ns/op | op/s | err% | ins/op | cyc/op | IPC | bra/op | miss% | total | benchmark |
|---|---|---|---|---|---|---|---|---|---|
| 7.52 | 132,948,239.79 | 1.1% | 6.65 | 24.07 | 0.276 | 1.00 | 8.9% | 0.00 | some double ops |
The benchmarked code takes 7.52 nanoseconds to run, so it executes ~133 million times per second. Measurements fluctuate by
1.1%. On average, 6.65 instructions are executed in 24.07 CPU cycles, resulting in 0.276 instructions per
cycle (IPC). The code contains a single branch, which branch prediction missed in 8.9% of the cases. The total runtime of
the benchmark named some double ops
is 0.00, i.e. just a few milliseconds.
Design Goals
- Ease of use: Simple & powerful API, fast compile times, easy to integrate anywhere.
- Fast: Get accurate results as fast as possible. nanobench is ~80 times faster than Google Benchmark.
- Accurate: Get deterministic, repeatable, and accurate results that you can make sound decisions on.
- Robust: Be robust against outliers, warn if results are not reliable.
Documentation
Extensive documentation is available.
More
- Code of Conduct - Contributor Covenant Code of Conduct
- I need a better logo. Currently I use a small bench. Nanobench. Ha ha.