BrianHicks / elm-benchmark

License: BSD-3-Clause
Benchmarking for Elm

Programming Languages

  • elm (856 projects)
  • javascript (184,084 projects; #8 most used programming language)
  • Makefile (30,231 projects)

Projects that are alternatives to, or similar to, elm-benchmark

php-orm-benchmark
The benchmark to compare performance of PHP ORM solutions.
Stars: ✭ 82 (+70.83%)
Mutual labels:  benchmarking
graphsim
R package: Simulate Expression data from igraph network using mvtnorm (CRAN; JOSS)
Stars: ✭ 16 (-66.67%)
Mutual labels:  benchmarking
load-testing-toolkit
Collection of open-source tools for debugging, benchmarking, load and stress testing your code or services.
Stars: ✭ 65 (+35.42%)
Mutual labels:  benchmarking
language-benchmarks
A simple benchmark system for compiled and interpreted languages.
Stars: ✭ 21 (-56.25%)
Mutual labels:  benchmarking
ezab
A suite of tools for benchmarking (load testing) web servers and databases
Stars: ✭ 16 (-66.67%)
Mutual labels:  benchmarking
ake-datasets
Large, curated set of benchmark datasets for evaluating automatic keyphrase extraction algorithms.
Stars: ✭ 125 (+160.42%)
Mutual labels:  benchmarking
awesome-locust
A collection of resources covering different aspects of Locust load testing tool usage.
Stars: ✭ 40 (-16.67%)
Mutual labels:  benchmarking
blockchain-load-testing
Code for load testing the Stellar network.
Stars: ✭ 36 (-25%)
Mutual labels:  benchmarking
mzbench
Distributed Benchmarking
Stars: ✭ 39 (-18.75%)
Mutual labels:  benchmarking
benchmark-trend
Measure performance trends of Ruby code
Stars: ✭ 60 (+25%)
Mutual labels:  benchmarking
PiBenchmarks
Raspberry Pi benchmarking scripts featuring a storage benchmark with score
Stars: ✭ 69 (+43.75%)
Mutual labels:  benchmarking
forest-benchmarking
A library for quantum characterization, verification, validation (QCVV), and benchmarking using pyQuil.
Stars: ✭ 41 (-14.58%)
Mutual labels:  benchmarking
beapi-bench
Tool for benchmarking apis. Uses ApacheBench(ab) to generate data and gnuplot for graphing. Adding new features almost daily
Stars: ✭ 16 (-66.67%)
Mutual labels:  benchmarking
mrs testbed
Multi-robot Exploration Testbed
Stars: ✭ 26 (-45.83%)
Mutual labels:  benchmarking
reframe
A powerful Python framework for writing and running portable regression tests and benchmarks for HPC systems.
Stars: ✭ 154 (+220.83%)
Mutual labels:  benchmarking
neurtu
Interactive parametric benchmarks in Python
Stars: ✭ 15 (-68.75%)
Mutual labels:  benchmarking
LuaJIT-Benchmarks
LuaJIT Benchmark tests
Stars: ✭ 20 (-58.33%)
Mutual labels:  benchmarking
immuneML
immuneML is a platform for machine learning analysis of adaptive immune receptor repertoire data.
Stars: ✭ 41 (-14.58%)
Mutual labels:  benchmarking
grandma
👵 fully programmable stress testing framework
Stars: ✭ 20 (-58.33%)
Mutual labels:  benchmarking
perf check
PERRRFFF CHERRRRK!
Stars: ✭ 16 (-66.67%)
Mutual labels:  benchmarking

Elm Benchmark

This repo is moving to elm-explorations/benchmark. Open new issues and send PRs there, please!

Run microbenchmarks in Elm.

Table of Contents

  • Quick Start
  • Installing
  • Running Benchmarks in the Browser
  • Writing Effective Benchmarks
  • FAQ
  • Inspirations and Thanks
  • License

Quick Start

Here's a sample benchmarking Array.Hamt.

import Array
import Array.Hamt as Hamt
import Benchmark exposing (..)


suite : Benchmark
suite =
    let
        sampleArray =
            Hamt.initialize 100 identity
    in
    describe "Array.Hamt"
        [ -- nest as many descriptions as you like
          describe "slice"
            [ benchmark "from the beginning" <|
                \_ -> Hamt.slice 50 100 sampleArray
            , benchmark "from the end" <|
                \_ -> Hamt.slice 0 50 sampleArray
            ]

        -- compare the results of two benchmarks
        , Benchmark.compare "initialize"
            "HAMT"
            (\_ -> Hamt.initialize 100 identity)
            "core"
            (\_ -> Array.initialize 100 identity)
        ]

This code uses a few common functions:

  • describe to organize benchmarks
  • benchmark to run benchmarks
  • compare to compare the results of two benchmarks

For a more thorough overview, I've written an introduction to elm-benchmark.

Installing

You should keep your benchmarks separate from your source code, since you don't want elm-benchmark in your production artifacts. This separation is necessary because of how elm-package works; it may change in the future. Here are the commands (with explanations) to get started:

mkdir benchmarks                             # create a benchmarks directory
cd benchmarks                                # go into that directory
elm package install BrianHicks/elm-benchmark # get this project, including the browser runner

You'll also need to add your main source directory (probably ../ or ../src) to the source-directories list in benchmarks/elm-package.json. If you don't do this, you won't be able to import the code you're benchmarking!
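For example, if your library code lives in ../src, benchmarks/elm-package.json might end up looking roughly like the sketch below. The repository URL and version constraints are only illustrative placeholders, not exact values to copy:

{
    "version": "1.0.0",
    "summary": "benchmarks for my project",
    "repository": "https://github.com/user/project.git",
    "license": "BSD-3-Clause",
    "source-directories": [
        ".",
        "../src"
    ],
    "exposed-modules": [],
    "dependencies": {
        "BrianHicks/elm-benchmark": "2.0.0 <= v < 3.0.0",
        "elm-lang/core": "5.0.0 <= v < 6.0.0",
        "elm-lang/html": "2.0.0 <= v < 3.0.0"
    },
    "elm-version": "0.18.0 <= v < 0.19.0"
}

The important part is the "../src" entry in source-directories; the rest is whatever elm-package generated for you.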

Running Benchmarks in the Browser

Benchmark.Runner provides program, which takes a Benchmark and runs it in the browser. To run the sample above, you would do:

import Benchmark.Runner exposing (BenchmarkProgram, program)


main : BenchmarkProgram
main =
    program suite

Compile this program and open the output in your browser to start the benchmarking run.
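Assuming the module above lives in Benchmarks.elm, the compile step with the same Elm 0.18 tooling used in the install commands looks roughly like this (the file names are just examples):

elm make Benchmarks.elm --output benchmarks.html   # then open benchmarks.html in your browser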

Writing Effective Benchmarks

Some general principles:

  • Don't compare raw values from different machines.
  • When you're working on speeding up a function, keep the old implementation around and use compare to measure your progress.
  • "As always, if you see numbers that look wildly out of whack, you shouldn’t rejoice that you have magically achieved fast performance—be skeptical and investigate!" – Bryan O'Sullivan

FAQ

What Does Goodness of Fit Mean?

Goodness of fit measures how well our prediction fits the measurements we have collected. You want it to be as close to 100% as possible. In elm-benchmark:

  • 99% is great
  • 95% is okay
  • 90% is not great; consider closing other programs on your computer and re-running
  • 80% and below: the result has been highly affected by outliers. Please do not trust the results when goodness of fit is this low.

elm-benchmark will eventually incorporate this advice into the reporting interface. See Issue #13.

For more, see Wikipedia: Goodness of Fit.
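For a concrete sense of the statistic, here is a small sketch of the coefficient of determination (R²), the standard goodness-of-fit measure for a fitted line. Treat it as an illustration of the idea rather than elm-benchmark's exact internals; goodnessOfFit here is a hypothetical helper, not part of the library's API:

-- Sketch only: the usual R² definition, where `actuals` are measured
-- runtimes and `predictions` come from the fitted trend line. This
-- illustrates the concept; it is not elm-benchmark's implementation.
goodnessOfFit : List Float -> List Float -> Float
goodnessOfFit actuals predictions =
    let
        mean xs =
            List.sum xs / toFloat (List.length xs)

        actualMean =
            mean actuals

        residualSumOfSquares =
            List.map2 (\actual predicted -> (actual - predicted) ^ 2) actuals predictions
                |> List.sum

        totalSumOfSquares =
            List.map (\actual -> (actual - actualMean) ^ 2) actuals
                |> List.sum
    in
    1 - residualSumOfSquares / totalSumOfSquares

A value of 1 (100%) means the trend line explains the measurements perfectly; outliers drag the value down.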

Why Are My Benchmarks Running In Parallel?

They're not, but they look like it because we interleave runs and only update the UI after collecting one of each. Keep reading for more on why we do this!

How Are My Benchmarks Measured?

When we measure the speed of your code, we take the following steps:

  1. We warm up the JIT so that we can get a good measurement.
  2. We measure how many runs of the function will fit into a small but measurable timespan.
  3. We collect multiples of this amount until we have enough samples to establish a trend. We can do this because running a benchmark twice should take twice as long as running it once, so splitting the sample sizes among a number of buckets lets us build a reliable prediction (there is a sketch of this idea just after this list).
  4. Once we have enough, we present our prediction of runs per second on your computer, in this configuration, now. We try to be as consistent as possible, but be aware that the environment matters a lot!
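
To make step 3 concrete, here is a rough sketch of the idea: fit a straight line through (sample size, total runtime) points, read the per-run cost off the slope, and invert it to get runs per second. The function names are illustrative, not elm-benchmark's API, and the real library is considerably more careful than this:

-- Sketch only: an ordinary least-squares slope through
-- (runs per sample, total runtime in milliseconds) points.
slope : List ( Float, Float ) -> Float
slope points =
    let
        n =
            toFloat (List.length points)

        sumX =
            List.sum (List.map Tuple.first points)

        sumY =
            List.sum (List.map Tuple.second points)

        sumXY =
            List.sum (List.map (\( x, y ) -> x * y) points)

        sumXX =
            List.sum (List.map (\( x, y ) -> x * x) points)
    in
    (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX)


-- The slope is milliseconds per run, so runs per second is its
-- reciprocal scaled back up to a full second.
runsPerSecond : List ( Float, Float ) -> Float
runsPerSecond points =
    1000 / slope points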

If the run contains multiple benchmarks, we interleave sampling between them. This means that given three benchmarks we would take one sample of each and continue in that pattern until they were complete.

We do this because the system might be busy with other work while the first benchmark runs, but give its full attention to the second and third. That would make one artificially slower than the others, so we would get misleading data!

By interleaving samples, we spread this effect across all of the benchmarks. That levels the playing field and gives us better data.
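
Conceptually, interleaving just turns per-benchmark lists of pending samples into a round-robin order. A toy sketch (an illustration, not the library's actual scheduler):

-- Sketch only: [ [ a1, a2 ], [ b1, b2 ], [ c1, c2 ] ] becomes
-- [ a1, b1, c1, a2, b2, c2 ], so no benchmark gets a quiet or busy
-- stretch of the machine all to itself.
interleave : List (List a) -> List a
interleave lists =
    case List.filterMap List.head lists of
        [] ->
            []

        heads ->
            heads ++ interleave (List.filterMap List.tail lists)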

Inspirations and Thanks

License

elm-benchmark is licensed under a 3-Clause BSD License.
