p-ranav / Criterion

License: MIT
Microbenchmarking for Modern C++

Programming Languages

cpp17

Projects that are alternatives of or similar to Criterion

Jsonexport
{} → 📄 it's easy to convert JSON to CSV
Stars: ✭ 208 (+48.57%)
Mutual labels:  json, csv, export
Json table
Flutter package: Json Table Widget to create table from json array
Stars: ✭ 178 (+27.14%)
Mutual labels:  json, library, table
Zui
⬢ Zsh User Interface library – CGI+DHTML-like rapid application development with Zsh
Stars: ✭ 95 (-32.14%)
Mutual labels:  console-application, console, library
Sheetjs
📗 SheetJS Community Edition -- Spreadsheet Data Toolkit
Stars: ✭ 28,479 (+20242.14%)
Mutual labels:  json, csv, table
Pytablewriter
pytablewriter is a Python library to write a table in various formats: CSV / Elasticsearch / HTML / JavaScript / JSON / LaTeX / LDJSON / LTSV / Markdown / MediaWiki / NumPy / Excel / Pandas / Python / reStructuredText / SQLite / TOML / TSV.
Stars: ✭ 422 (+201.43%)
Mutual labels:  json, csv, table
Towel
Throw in the towel.
Stars: ✭ 333 (+137.86%)
Mutual labels:  console, library, measurements
Csvreader
csvreader library / gem - read tabular data in the comma-separated values (csv) format the right way (uses best practices out-of-the-box with zero-configuration)
Stars: ✭ 169 (+20.71%)
Mutual labels:  json, csv, export
Leopotamgrouplibraryunity
Tools library for unity 3d game engine: animator graph helpers, serialization (json), localization, event routing (eventbus, ui actions), embedded scripting, uGui xml markup, threading, tweening, in-memory protection and other helpers (pure C#)
Stars: ✭ 373 (+166.43%)
Mutual labels:  json, csv, mit
Terminaltables
Generate simple tables in terminals from a nested list of strings.
Stars: ✭ 685 (+389.29%)
Mutual labels:  console, library, table
Libcon.ahk
LibCon - AutoHotkey Library For Console Support
Stars: ✭ 50 (-64.29%)
Mutual labels:  console-application, console, library
Cloudcmd
✨☁️📁✨ Cloud Commander file manager for the web with console and editor.
Stars: ✭ 1,332 (+851.43%)
Mutual labels:  console, mit
Ssl Checker
Python script that collects SSL/TLS information from hosts
Stars: ✭ 94 (-32.86%)
Mutual labels:  json, csv
Iso 3166 Countries With Regional Codes
ISO 3166-1 country lists merged with their UN Geoscheme regional codes in ready-to-use JSON, XML, CSV data sets
Stars: ✭ 1,372 (+880%)
Mutual labels:  json, csv
Hledger
A reliable, user-friendly Plain Text Accounting tool with command line, terminal and web interfaces.
Stars: ✭ 1,887 (+1247.86%)
Mutual labels:  console-application, library
Just Dashboard
📊 📋 Dashboards using YAML or JSON files
Stars: ✭ 1,511 (+979.29%)
Mutual labels:  json, csv
Filecontextcore
FileContextCore is a "Database"-Provider for Entity Framework Core and adds the ability to store information in files instead of being limited to databases.
Stars: ✭ 91 (-35%)
Mutual labels:  json, csv
Administrative Divisions Of China
Administrative divisions of the People's Republic of China: provincial level (provinces, municipalities, autonomous regions), prefectural level (cities), county level (districts and counties), township level (townships, towns, and sub-districts), and village level (village and neighborhood committees); cascading province/city/district/town/village address data with two- to five-level linkage.
Stars: ✭ 11,727 (+8276.43%)
Mutual labels:  json, csv
Nload
Real-time network traffic monitor
Stars: ✭ 121 (-13.57%)
Mutual labels:  console-application, console
Tabtoy
High-performance table data exporter.
Stars: ✭ 1,302 (+830%)
Mutual labels:  json, csv
Kafka Connect Spooldir
Kafka Connect connector for reading CSV files into Kafka.
Stars: ✭ 116 (-17.14%)
Mutual labels:  json, csv

Highlights

Criterion is a micro-benchmarking library for modern C++.

  • Convenient static registration macros for setting up benchmarks
  • Parameterized benchmarks (e.g., vary input size)
  • Statistical analysis across multiple runs
  • Requires a compiler with support for C++17 or a newer standard
  • Header-only library; a single-header version is available at single_include/
  • MIT License

Table of Contents

  • Getting Started
  • Simple Benchmark
  • Passing Arguments
  • Passing Arguments (Part 2)
  • CRITERION_BENCHMARK_MAIN and Command-line Options
  • Exporting Results (csv, json, etc.)
  • Building Library and Samples
  • Generating Single Header
  • Contributing
  • License

Getting Started

Let's say we have this merge sort implementation that needs to be benchmarked.

#include <algorithm> // std::inplace_merge
#include <cstddef>   // std::size_t

template<typename RandomAccessIterator, typename Compare>
void merge_sort(RandomAccessIterator first, RandomAccessIterator last,
                Compare compare, std::size_t size) {
  if (size < 2) return;
  auto middle = first + size / 2;
  merge_sort(first, middle, compare, size / 2);
  merge_sort(middle, last, compare, size - size / 2);
  std::inplace_merge(first, middle, last, compare);
}
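
Before benchmarking, it's worth checking that the implementation actually sorts. A minimal sanity-check sketch (reusing merge_sort as defined above):

#include <cassert>    // assert
#include <functional> // std::less
#include <vector>

int main() {
  std::vector<int> v{5, 3, 1, 4, 2};
  merge_sort(v.begin(), v.end(), std::less<int>(), v.size());
  assert(std::is_sorted(v.begin(), v.end())); // std::is_sorted from <algorithm>
}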

Simple Benchmark

Include <criterion/criterion.hpp> and you're good to go.

  • Use the BENCHMARK macro to declare a benchmark
  • Use SETUP_BENCHMARK and TEARDOWN_BENCHMARK to perform setup and teardown tasks
    • These tasks are not part of the measurement
#include <criterion/criterion.hpp>
#include <vector> // std::vector

BENCHMARK(MergeSort)
{
  SETUP_BENCHMARK(
    const auto size = 100;
    std::vector<int> vec(size, 0); // vector of size 100
  )
 
  // Code to be benchmarked
  merge_sort(vec.begin(), vec.end(), std::less<int>(), size);
  
  TEARDOWN_BENCHMARK(
    vec.clear();
  )
}

CRITERION_BENCHMARK_MAIN()
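
To compile and run this benchmark, something like the following should work. This is a sketch, not the project's official build instructions: it assumes the single-header layout mentioned in the Highlights, merge_sort_bench.cpp is a hypothetical file name, and -pthread may be unnecessary on some toolchains.

$ g++ -std=c++17 -O3 -I single_include -pthread merge_sort_bench.cpp -o benchmarks
$ ./benchmarks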

What if we want to run this benchmark on a variety of sizes?

Passing Arguments

  • The BENCHMARK macro can take typed parameters
  • Use GET_ARGUMENT(n) to get the nth argument passed to the benchmark
  • For benchmarks that require arguments, use INVOKE_BENCHMARK_FOR_EACH, providing a name suffix and the arguments for each invocation (e.g., the "/1K" entry below runs as MergeSort/1K)
#include <criterion/criterion.hpp>
#include <vector> // std::vector

BENCHMARK(MergeSort, std::size_t) // <- one parameter to be passed to the benchmark
{
  SETUP_BENCHMARK(
    const auto size = GET_ARGUMENT(0); // <- get the argument passed to the benchmark
    std::vector<int> vec(size, 0);
  )
 
  // Code to be benchmarked
  merge_sort(vec.begin(), vec.end(), std::less<int>(), size);
  
  TEARDOWN_BENCHMARK(
    vec.clear();
  )
}

// Run the above benchmark for a number of inputs:

INVOKE_BENCHMARK_FOR_EACH(MergeSort,
  ("/10", 10),
  ("/100", 100),
  ("/1K", 1000),
  ("/10K", 10000),
  ("/100K", 100000)
)

CRITERION_BENCHMARK_MAIN()
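
Since BENCHMARK accepts a list of typed parameters, a benchmark can also take more than one argument. Here is a minimal sketch written by analogy with the example above; the FillVector benchmark, its parameter types, and its argument tuples are hypothetical:

#include <algorithm> // std::fill
#include <criterion/criterion.hpp>
#include <vector>

BENCHMARK(FillVector, std::size_t, int) // <- two parameters
{
  SETUP_BENCHMARK(
    const auto size = GET_ARGUMENT(0);  // first argument: vector size
    const auto value = GET_ARGUMENT(1); // second argument: fill value
    std::vector<int> vec(size);
  )

  // Code to be benchmarked
  std::fill(vec.begin(), vec.end(), value);

  TEARDOWN_BENCHMARK(
    vec.clear();
  )
}

INVOKE_BENCHMARK_FOR_EACH(FillVector,
  ("/100/zero", 100, 0),
  ("/100/one", 100, 1)
)

CRITERION_BENCHMARK_MAIN()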

Passing Arguments (Part 2)

Let's say we have the following struct and we need to create a std::shared_ptr to it.

#include <string>

struct Song {
  std::string artist;
  std::string title;
  Song(const std::string& artist_, const std::string& title_) :
    artist{ artist_ }, title{ title_ } {}
};

Here are two implementations for constructing the std::shared_ptr:

#include <memory> // std::shared_ptr, std::make_shared

// Functions to be tested
auto Create_With_New() { 
  return std::shared_ptr<Song>(new Song("Black Sabbath", "Paranoid")); 
}

auto Create_With_MakeShared() { 
  return std::make_shared<Song>("Black Sabbath", "Paranoid"); 
}

We can set up a single benchmark that takes a std::function<> and measures the performance of each implementation, as shown below. The comparison is worth making because std::make_shared typically performs one allocation for both the object and its control block, whereas std::shared_ptr(new ...) requires two.

#include <criterion/criterion.hpp>
#include <functional> // std::function

BENCHMARK(ConstructSharedPtr, std::function<std::shared_ptr<Song>()>)
{
  SETUP_BENCHMARK(
    auto test_function = GET_ARGUMENT(0);
  )

  // Code to be benchmarked
  auto song_ptr = test_function();
}

INVOKE_BENCHMARK_FOR_EACH(ConstructSharedPtr, 
  ("/new", Create_With_New),
  ("/make_shared", Create_With_MakeShared)
)

CRITERION_BENCHMARK_MAIN()

CRITERION_BENCHMARK_MAIN and Command-line Options

CRITERION_BENCHMARK_MAIN() provides a main function that:

  1. Handles command-line arguments,
  2. Runs the registered benchmarks, and
  3. Exports results to a file if requested by the user.

Here's the help text generated by this main function:

$ ./benchmarks -h

NAME
     ./benchmarks -- Run Criterion benchmarks

SYNOPSIS
     ./benchmarks
           [-w,--warmup <number>]
           [-l,--list] [--list_filtered <regex>] [-r,--run_filtered <regex>]
           [-e,--export_results {csv,json,md,asciidoc} <filename>]
           [-q,--quiet] [-h,--help]
DESCRIPTION
     This microbenchmarking utility repeatedly executes a list of benchmarks,
     statistically analyzing and reporting on the temporal behavior of the executed code.

     The options are as follows:

     -w,--warmup number
          Number of warmup runs (at least 1) to execute before the benchmark (default=3)

     -l,--list
          Print the list of available benchmarks

     --list_filtered regex
          Print a filtered list of available benchmarks (based on user-provided regex)

     -r,--run_filtered regex
          Run a filtered list of available benchmarks (based on user-provided regex)

     -e,--export_results format filename
          Export benchmark results to file. The following are the supported formats.

          csv       Comma separated values (CSV) delimited text file
          json      JavaScript Object Notation (JSON) text file
          md        Markdown (md) text file
          asciidoc  AsciiDoc (asciidoc) text file

     -q,--quiet
          Run benchmarks quietly, suppressing activity indicators

     -h,--help
          Print this help message
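
For example (sample invocations assembled from the options above; the benchmark names and regex are illustrative):

$ ./benchmarks --list                # list all registered benchmarks
$ ./benchmarks -r "MergeSort.*"      # run only benchmarks whose names match the regex
$ ./benchmarks -w 5                  # use 5 warmup runs instead of the default 3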

Exporting Results (csv, json, etc.)

Benchmark results can be exported in a number of formats: .csv, .json, .md, and .asciidoc.

Use --export_results (or -e) to export results to one of the supported formats.

$ ./vector_sort -e json results.json -q # run quietly and export to JSON

$ cat results.json
{
  "benchmarks": [
    {
      "name": "VectorSort/100",
      "warmup_runs": 2,
      "iterations": 2857140,
      "mean_execution_time": 168.70,
      "fastest_execution_time": 73.00,
      "slowest_execution_time": 88809.00,
      "lowest_rsd_execution_time": 84.05,
      "lowest_rsd_percentage": 3.29,
      "lowest_rsd_index": 57278,
      "average_iteration_performance": 5927600.84,
      "fastest_iteration_performance": 13698630.14,
      "slowest_iteration_performance": 11260.12
    },
    {
      "name": "VectorSort/1000",
      "warmup_runs": 2,
      "iterations": 2254280,
      "mean_execution_time": 1007.70,
      "fastest_execution_time": 640.00,
      "slowest_execution_time": 102530.00,
      "lowest_rsd_execution_time": 647.45,
      "lowest_rsd_percentage": 0.83,
      "lowest_rsd_index": 14098,
      "average_iteration_performance": 992355.48,
      "fastest_iteration_performance": 1562500.00,
      "slowest_iteration_performance": 9753.24
    },
    {
      "name": "VectorSort/10000",
      "warmup_runs": 2,
      "iterations": 259320,
      "mean_execution_time": 8833.26,
      "fastest_execution_time": 6276.00,
      "slowest_execution_time": 114548.00,
      "lowest_rsd_execution_time": 8374.15,
      "lowest_rsd_percentage": 0.11,
      "lowest_rsd_index": 7905,
      "average_iteration_performance": 113208.45,
      "fastest_iteration_performance": 159337.16,
      "slowest_iteration_performance": 8729.96
    }
  ]
}
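
The exported JSON is easy to post-process with standard tooling. For instance, with jq (assuming jq is installed; the field names match the output above):

$ jq '.benchmarks[] | {name, mean_execution_time}' results.json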

Building Library and Samples

cmake -Hall -Bbuild
cmake --build build

# run `merge_sort` sample
./build/samples/merge_sort/merge_sort
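
Note: -H is CMake's older, undocumented shorthand for the source directory. With CMake 3.13 or newer, the documented equivalent is:

cmake -S all -B build
cmake --build build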

Generating Single Header

python3 utils/amalgamate/amalgamate.py -c single_include.json -s .

Contributing

Contributions are welcome; have a look at the CONTRIBUTING.md document for more information.

License

The project is available under the MIT license.
