lightbend / benchdb

License: Apache-2.0
A database and query tool for JMH benchmark results

Programming Languages

Scala, HTML

Projects that are alternatives of or similar to benchdb

lein-jmh
Leiningen plugin for jmh-clojure
Stars: ✭ 17 (-70.69%)
Mutual labels:  benchmarking, jmh
perceptron-benchmark
Robustness benchmark for DNN models.
Stars: ✭ 61 (+5.17%)
Mutual labels:  benchmarking
dyngen
Simulating single-cell data using gene regulatory networks 📠
Stars: ✭ 59 (+1.72%)
Mutual labels:  benchmarking
plf_nanotimer
A simple C++ 03/11/etc timer class for ~microsecond-precision cross-platform benchmarking. The implementation is as limited and as simple as possible to create the lowest amount of overhead.
Stars: ✭ 108 (+86.21%)
Mutual labels:  benchmarking
sbt-graphql
SBT plugin to generate and validate graphql schemas written with Sangria
Stars: ✭ 94 (+62.07%)
Mutual labels:  sbt-plugin
benchmark-thrift
An open source application designed to load test Thrift applications
Stars: ✭ 41 (-29.31%)
Mutual labels:  benchmarking
nitroml
NitroML is a modular, portable, and scalable model-quality benchmarking framework for Machine Learning and Automated Machine Learning (AutoML) pipelines.
Stars: ✭ 40 (-31.03%)
Mutual labels:  benchmarking
sbt-swagger-2
sbt plugin for generating Swagger JSON schemas during build
Stars: ✭ 13 (-77.59%)
Mutual labels:  sbt-plugin
betsy
betsy (BPEL/BPMN Engine Test System) - A BPEL/BPMN Conformance Test Suite and Tool
Stars: ✭ 20 (-65.52%)
Mutual labels:  benchmarking
ILAMB
Python software used in the International Land Model Benchmarking (ILAMB) project
Stars: ✭ 28 (-51.72%)
Mutual labels:  benchmarking
sbt-findbugs
FindBugs static analysis plugin for sbt.
Stars: ✭ 47 (-18.97%)
Mutual labels:  sbt-plugin
ldbc_snb_docs
Specification of the LDBC Social Network Benchmark suite
Stars: ✭ 39 (-32.76%)
Mutual labels:  benchmarking
bench-show
Show, plot and compare benchmark results
Stars: ✭ 14 (-75.86%)
Mutual labels:  benchmarking
benchmark_VAE
Unifying Variational Autoencoder (VAE) implementations in PyTorch (NeurIPS 2022)
Stars: ✭ 1,211 (+1987.93%)
Mutual labels:  benchmarking
benchkit
A developer-centric toolkit module for Android to facilitate in-depth profiling and benchmarking.
Stars: ✭ 48 (-17.24%)
Mutual labels:  benchmarking
sbt-sass
A fork of the sbt-sass repository which seems to be abandoned.
Stars: ✭ 32 (-44.83%)
Mutual labels:  sbt-plugin
gatling-sbt-plugin
Gatling Plugin for SBT
Stars: ✭ 105 (+81.03%)
Mutual labels:  sbt-plugin
sbt-rewarn
Make sbt always display compilation warnings, even for unchanged files.
Stars: ✭ 42 (-27.59%)
Mutual labels:  sbt-plugin
gardenia
GARDENIA: Graph Analytics Repository for Designing Efficient Next-generation Accelerators
Stars: ✭ 22 (-62.07%)
Mutual labels:  benchmarking
php-bench
⏰ Tools for benchmarking PHP algorithms.
Stars: ✭ 32 (-44.83%)
Mutual labels:  benchmarking

benchdb - A database and query tool for JMH results

When you run benchmarks with JMH you usually look at the results table printed after the run, or perhaps generate a JSON file and feed it into JMH Visualizer for immediate consumption. This approach does not scale well when you benchmark lots of different changes, want to compare historical data, or want to visualize more complex benchmark results graphically.

benchdb takes a JMH result file plus some captured environment data (platform, Java environment, git data for git-based projects) and stores it in a relational database of your choice. You can later list and retrieve these results and run queries over a single result or across multiple results combined. Thanks to the stored environment data you always know what you ran, and when and where you ran it.
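
For reference, here is an abbreviated sketch of the kind of JMH JSON result file (produced with JMH's -rf json option) that benchdb ingests. Real files contain many more fields; the values below are taken from the results table shown later in this document:

    [
      {
        "benchmark": "scala.collection.immutable.VectorBenchmark2.nvFilter100p",
        "mode": "avgt",
        "params": { "size": "100" },
        "primaryMetric": {
          "score": 503.222,
          "scoreError": 11.211,
          "scoreUnit": "ns/op"
        }
      }
    ]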

Installation (PRELIMINARY)

  • Clone benchdb and run sbt core/publishLocal.

  • Create the command line app with Coursier, e.g.:

    > cs bootstrap com.lightbend.benchdb::benchdb-core:latest.release com.h2database:h2:1.4.200 -o benchdb
    

    Note that the second dependency is the database JDBC driver, in this case the embedded H2 database, which is recommended for local use on a single machine. Replace it with the database driver of your choice (e.g. mysql:mysql-connector-java:8.0.19 for MySQL/MariaDB). benchdb doesn't use any advanced database features, so any database supported by Slick should work.

  • In order to use an embedded H2 database you can run benchdb create-config to create a default configuration file .benchdb.conf in your home directory (a sketch of what such a configuration might look like follows this list). benchdb config files use Typesafe Config's HOCON syntax. Command line options let you specify paths to additional configuration files or ignore the user config file entirely.

    For a multi-machine setup with a database server, you need to create the configuration manually. It needs to contain at least a DatabaseConfig for Slick under the name db. Here is an example for connecting to a MySQL/MariaDB server:

    db {
      profile = "slick.jdbc.MySQLProfile$"
      db {
        connectionPool = disabled
        dataSourceClass = "com.mysql.cj.jdbc.MysqlDataSource"
        properties = {
          serverName = "hostname"
          portNumber = "3307"
          databaseName = "benchdb"
          user = "benchdb"
          password = "password"
          serverTimezone = "UTC"
        }
      }
    }
    
  • Run benchdb init-db --force to initialize the database schema.
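
For reference, a default configuration for the embedded H2 database could look roughly like the following. This is only a sketch; the file written by benchdb create-config is authoritative, and the database file location shown here is an assumption:

    db {
      profile = "slick.jdbc.H2Profile$"
      db {
        connectionPool = disabled
        url = "jdbc:h2:file:~/benchdb"  # hypothetical database file location
        driver = "org.h2.Driver"
      }
    }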

Usage

  • benchdb --help shows the list of supported commands. Specifying a command name followed by --help shows further options and parameters for that command.

  • First you need to insert some benchmark results into the database, e.g.:

    > benchdb insert-run --project-dir ../scala --msg "Test 1" --jmh-args "-wm bulk" ../scala/test/benchmarks/jmh-result.json
    

    The specified project directory (default: current directory) is used to determine git environment data.

  • benchdb list lists the benchmark runs in the database, e.g.:

    > benchdb list --git-data
    /----+---------------------+----------+---------+---------------------+----------------------------------+------------------------------------\
    | ID | Timestamp           | Msg      | Git SHA | Git Timestamp       | Git Origin                       | Git Upstream                       |
    |----+---------------------+----------+---------+---------------------+----------------------------------+------------------------------------|
    |  4 | 2020-03-24 14:15:15 | Test 3   | d335189 | 2020-03-09 16:06:23 | git@github.com:szeiger/scala.git | https://github.com/scala/scala.git |
    |  3 | 2020-03-24 14:14:57 | Test 3   | d335189 | 2020-03-09 16:06:23 | git@github.com:szeiger/scala.git | https://github.com/scala/scala.git |
    |  2 | 2020-03-24 13:18:38 | Test 2   | d335189 | 2020-03-09 16:06:23 | git@github.com:szeiger/scala.git | https://github.com/scala/scala.git |
    |  1 | 2020-03-24 13:07:30 | Test job | d335189 | 2020-03-09 16:06:23 | git@github.com:szeiger/scala.git | https://github.com/scala/scala.git |
    \----+---------------------+----------+---------+---------------------+----------------------------------+------------------------------------/
    4 test runs found.
    
  • benchdb results generates a table similar to the one produced by JMH itself. You can specify one or more run IDs to show (defaulting to the latest run if no ID is given) and also filter benchmark names with glob patterns:

    > benchdb results -r1 -b*100p*
    /----------------------------------------------------------+--------+------+-----+------------+----------+-------\
    | Benchmark                                                | (size) | Mode | Cnt |      Score |    Error | Units |
    |----------------------------------------------------------+--------+------+-----+------------+----------+-------|
    | scala.collection.immutable.VectorBenchmark2.nvFilter100p |      1 | avgt |  20 |      9.046 |    0.073 | ns/op |
    | scala.collection.immutable.VectorBenchmark2.nvFilter100p |     10 | avgt |  20 |     11.558 |    0.099 | ns/op |
    | scala.collection.immutable.VectorBenchmark2.nvFilter100p |    100 | avgt |  20 |    503.222 |   11.211 | ns/op |
    | scala.collection.immutable.VectorBenchmark2.nvFilter100p |   1000 | avgt |  20 |   6163.309 |  278.645 | ns/op |
    | scala.collection.immutable.VectorBenchmark2.nvFilter100p |  10000 | avgt |  20 |  41181.090 | 1407.833 | ns/op |
    | scala.collection.immutable.VectorBenchmark2.nvFilter100p |  50000 | avgt |  20 | 195477.388 | 4077.424 | ns/op |
    \----------------------------------------------------------+--------+------+-----+------------+----------+-------/
    

    Secondary metrics can be used in place of the primary metric with -metric (for instance, -metric ·gc.alloc.rate.norm shows the per-operation allocations recorded by JMH's -prof gc profiler).
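
    For example, combining the name filter from the results example above with a secondary metric (a sketch; it assumes the run was recorded with -prof gc enabled):

    > benchdb results -r1 -b*100p* -metric ·gc.alloc.rate.norm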

  • Extractor patterns can be used to extract additional parameters from benchmark names. They are glob patterns with regular expression-like capture groups. Unnamed groups are discarded; named groups are extracted into parameters. For example:

    > benchdb results -r1 --extract "(*2.)nvFilter(percent=*)p"
    /--------------------+--------+-----------+------+-----+------------+----------+-------\
    | Benchmark          | (size) | (percent) | Mode | Cnt |      Score |    Error | Units |
    |--------------------+--------+-----------+------+-----+------------+----------+-------|
    | nvFilter(percent)p |      1 |         0 | avgt |  20 |      8.824 |    0.381 | ns/op |
    | nvFilter(percent)p |     10 |         0 | avgt |  20 |      8.902 |    0.075 | ns/op |
    | nvFilter(percent)p |    100 |         0 | avgt |  20 |     40.800 |    0.544 | ns/op |
    | nvFilter(percent)p |   1000 |         0 | avgt |  20 |    134.629 |    1.996 | ns/op |
    | nvFilter(percent)p |  10000 |         0 | avgt |  20 |   1712.683 |   31.730 | ns/op |
    | nvFilter(percent)p |  50000 |         0 | avgt |  20 |   8186.502 |  130.088 | ns/op |
    | nvFilter(percent)p |      1 |       100 | avgt |  20 |      9.046 |    0.073 | ns/op |
    | nvFilter(percent)p |     10 |       100 | avgt |  20 |     11.558 |    0.099 | ns/op |
    | nvFilter(percent)p |    100 |       100 | avgt |  20 |    503.222 |   11.211 | ns/op |
    | nvFilter(percent)p |   1000 |       100 | avgt |  20 |   6163.309 |  278.645 | ns/op |
    | nvFilter(percent)p |  10000 |       100 | avgt |  20 |  41181.090 | 1407.833 | ns/op |
    | nvFilter(percent)p |  50000 |       100 | avgt |  20 | 195477.388 | 4077.424 | ns/op |
    | nvFilter(percent)p |      1 |        50 | avgt |  20 |     13.321 |    0.302 | ns/op |
    | nvFilter(percent)p |     10 |        50 | avgt |  20 |     62.017 |    1.439 | ns/op |
    | nvFilter(percent)p |    100 |        50 | avgt |  20 |    595.161 |   34.352 | ns/op |
    | nvFilter(percent)p |   1000 |        50 | avgt |  20 |   5951.751 |   56.474 | ns/op |
    | nvFilter(percent)p |  10000 |        50 | avgt |  20 |  43305.200 | 1221.533 | ns/op |
    | nvFilter(percent)p |  50000 |        50 | avgt |  20 | 196567.912 | 8389.588 | ns/op |
    \--------------------+--------+-----------+------+-----+------------+----------+-------/
    
  • You can then pivot one or more parameters to compare their results side by side:

    > benchdb results -r1 --extract "(*2.)nvFilter(percent=*)p" --pivot percent
    /--------------------+--------+------+-----+----------+---------+------------+----------+------------+----------+-------\
    |          (percent) |        |      |     |         0          |          50           |          100          |       |
    | Benchmark          | (size) | Mode | Cnt |    Score |   Error |      Score |    Error |      Score |    Error | Units |
    |--------------------+--------+------+-----+----------+---------+------------+----------+------------+----------+-------|
    | nvFilter(percent)p |      1 | avgt |  20 |    8.824 |   0.381 |     13.321 |    0.302 |      9.046 |    0.073 | ns/op |
    | nvFilter(percent)p |     10 | avgt |  20 |    8.902 |   0.075 |     62.017 |    1.439 |     11.558 |    0.099 | ns/op |
    | nvFilter(percent)p |    100 | avgt |  20 |   40.800 |   0.544 |    595.161 |   34.352 |    503.222 |   11.211 | ns/op |
    | nvFilter(percent)p |   1000 | avgt |  20 |  134.629 |   1.996 |   5951.751 |   56.474 |   6163.309 |  278.645 | ns/op |
    | nvFilter(percent)p |  10000 | avgt |  20 | 1712.683 |  31.730 |  43305.200 | 1221.533 |  41181.090 | 1407.833 | ns/op |
    | nvFilter(percent)p |  50000 | avgt |  20 | 8186.502 | 130.088 | 196567.912 | 8389.588 | 195477.388 | 4077.424 | ns/op |
    \--------------------+--------+------+-----+----------+---------+------------+----------+------------+----------+-------/
    
  • benchdb chart generates line charts (using the Google Charts library). The parameters are the same as for results. Charts require a single free parameter which must be Long-valued (i.e. every instance can be parsed as a Long; JMH does not preserve the actual types of benchmark parameters and stores everything as a string), like size in the examples above. In the case of pivoted results, all pivoted columns are rendered together as individual series in a single chart. The result of benchdb chart is a single, self-contained HTML file. If no output file is specified, it is written to a temporary file and opened in the default browser.
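
    For example, reusing the extractor and pivot options from the results examples above produces a chart with one series per percent value plotted over size (a sketch; with no output file specified, the chart is written to a temporary file and opened in the default browser):

    > benchdb chart -r1 --extract "(*2.)nvFilter(percent=*)p" --pivot percent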

Maintenance notes

benchdb is NOT supported under the Lightbend subscription.

Contributions to this project are very welcome!

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].