
symerio / neurtu

License: BSD-3-Clause
Interactive parametric benchmarks in Python

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to neurtu

Likwid
Performance monitoring and benchmarking suite
Stars: ✭ 957 (+6280%)
Mutual labels:  benchmarking, performance-analysis
best
🏆 Delightful Benchmarking & Performance Testing
Stars: ✭ 73 (+386.67%)
Mutual labels:  benchmarking, performance-analysis
benchmark-trend
Measure performance trends of Ruby code
Stars: ✭ 60 (+300%)
Mutual labels:  benchmarking, performance-analysis
Reproducible Image Denoising State Of The Art
Collection of popular and reproducible image denoising works.
Stars: ✭ 1,776 (+11740%)
Mutual labels:  benchmarking, performance-analysis
desim
A discrete-time event simulation framework, written in Rust, using the experimental generator feature
Stars: ✭ 27 (+80%)
Mutual labels:  time
time machine
A date and time API for Dart
Stars: ✭ 120 (+700%)
Mutual labels:  time
gardenia
GARDENIA: Graph Analytics Repository for Designing Efficient Next-generation Accelerators
Stars: ✭ 22 (+46.67%)
Mutual labels:  benchmarking
benchkit
A developer-centric toolkit module for Android to facilitate in-depth profiling and benchmarking.
Stars: ✭ 48 (+220%)
Mutual labels:  benchmarking
PartialFunctions.jl
A small package to simplify partial function application
Stars: ✭ 34 (+126.67%)
Mutual labels:  lazy-evaluation
PerfAvore
Rule based performance analysis and monitoring tool for dotnet written in F#.
Stars: ✭ 12 (-20%)
Mutual labels:  performance-analysis
timedmap
A thread safe map which has expiring key-value pairs.
Stars: ✭ 49 (+226.67%)
Mutual labels:  time
scala-java-time
Implementation of the `java.time` API in scala. Especially useful for scala.js
Stars: ✭ 111 (+640%)
Mutual labels:  time
benchdb
A database and query tool for JMH benchmark results
Stars: ✭ 58 (+286.67%)
Mutual labels:  benchmarking
TimeContinuum
No description or website provided.
Stars: ✭ 28 (+86.67%)
Mutual labels:  time
island-time
A Kotlin Multiplatform library for working with dates and times
Stars: ✭ 69 (+360%)
Mutual labels:  time
snippet-timekeeper
An android library to measure code execution time. No need to remove the measurement code, automatically becomes no-op in the release variants. Does not compromise with the code readability and comes with features that enhance the developer experience.
Stars: ✭ 70 (+366.67%)
Mutual labels:  performance-analysis
TimesDates.jl
Nanosecond resolution for Time and Date, TimeZones
Stars: ✭ 28 (+86.67%)
Mutual labels:  time
datetime
A Go (golang) library for parsing most ISO8601 timestamps
Stars: ✭ 24 (+60%)
Mutual labels:  time
react-native-console-time-polyfill
console.time and console.timeEnd polyfill for react-native
Stars: ✭ 92 (+513.33%)
Mutual labels:  time
perf counter
A dedicated performance counter for the Cortex-M SysTick. It shares the SysTick with users' original SysTick function without interfering with it. This library brings new functionality, such as a performance counter, delay_us, and the clock() service defined in time.h
Stars: ✭ 197 (+1213.33%)
Mutual labels:  performance-analysis

neurtu


Simple performance measurement tool

neurtu is a Python package providing a common interface for multi-metric benchmarks (including time and memory measurements). It can be used to estimate the time and space complexity of algorithms, while pandas integration allows quick analysis and visualization of the results.

neurtu means "to measure / evaluate" in the Basque language.

See the documentation for more details.

Installation

neurtu requires Python 3.5+ and can be installed with,

pip install neurtu

pandas >=0.20 is an optional (but highly recommended) dependency.

Quickstart

To illustrate neurtu usage, we will benchmark array sorting in NumPy. First, we define a generator of cases,

import numpy as np
import neurtu

def cases():
    rng = np.random.RandomState(42)

    for N in [1000, 10000, 100000]:
        X = rng.rand(N)
        tags = {'N' : N}
        yield neurtu.delayed(X, tags=tags).sort()

that yields a sequence of delayed calculations, each tagged with the parameters defining individual runs.
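The delayed-evaluation idea behind neurtu.delayed can be illustrated with a minimal, self-contained sketch. Note this is a simplified stand-in for clarity (the DelayedCall class below is hypothetical, not neurtu's actual implementation): an object records a method call on the wrapped value instead of running it, and executes it only when explicitly asked.

```python
import numpy as np

class DelayedCall:
    """Minimal illustration of delayed evaluation: record a method
    call on a wrapped object and run it only when compute() is called.
    This is NOT neurtu's implementation, just a sketch of the idea."""

    def __init__(self, obj, tags=None):
        self._obj = obj
        self.tags = tags or {}
        self._call = None

    def __getattr__(self, name):
        def record(*args, **kwargs):
            # Store the call instead of executing it immediately.
            self._call = (name, args, kwargs)
            return self
        return record

    def compute(self):
        # Replay the recorded call on the wrapped object.
        name, args, kwargs = self._call
        return getattr(self._obj, name)(*args, **kwargs)

X = np.array([3.0, 1.0, 2.0])
d = DelayedCall(X, tags={'N': 3}).sort()  # nothing runs yet
d.compute()                               # X is sorted in place here
```

A benchmarking tool can then time the compute() step for each tagged case without paying the setup cost inside the measured region.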

We can evaluate the run time with,

>>> df = neurtu.timeit(cases())
>>> print(df)
        wall_time
N
1000     0.000014
10000    0.000134
100000   0.001474

which will internally use the timeit module with a sufficient number of evaluations to work around timer precision limitations (similarly to IPython's %timeit). It will also display a progress bar for long-running benchmarks, and return the results as a pandas.DataFrame (if pandas is installed).
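The complexity-estimation use case mentioned in the introduction can be carried out directly on such a frame with pandas and NumPy. As a sketch, using synthetic timings copied from the example run above in place of a live neurtu.timeit result, the scaling exponent can be fitted on a log-log scale:

```python
import numpy as np
import pandas as pd

# Synthetic timings standing in for a neurtu.timeit result
# (values copied from the example output above).
df = pd.DataFrame(
    {'wall_time': [0.000014, 0.000134, 0.001474]},
    index=pd.Index([1000, 10000, 100000], name='N'),
)

# Fit log(t) = a * log(N) + b; the slope a approximates the
# polynomial order of the algorithm (~1 here, consistent with
# the N log N cost of sorting at these sizes).
slope, intercept = np.polyfit(np.log(df.index.values),
                              np.log(df['wall_time'].values), 1)
print(round(slope, 2))
```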

By default, all evaluations are run with repeat=1. If more statistical confidence is required, this value can be increased,

>>> neurtu.timeit(cases(), repeat=3)
       wall_time
            mean       max       std
N
1000    0.000012  0.000014  0.000002
10000   0.000116  0.000149  0.000029
100000  0.001323  0.001714  0.000339

In this case we get a frame with a pandas.MultiIndex for columns, where the first level represents the metric name (wall_time) and the second the aggregation method. By default neurtu.timeit is called with aggregate=['mean', 'max', 'std'], methods supported by the pandas aggregation API. To disable aggregation and obtain timings for individual runs, use aggregate=False. See the neurtu.timeit documentation for more details.
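Selecting from such a MultiIndex frame is standard pandas. The sketch below rebuilds a frame mirroring the aggregated output shown above (values copied from that run) and pulls out a single metric/aggregation pair:

```python
import pandas as pd

# Rebuild a frame shaped like the aggregated timeit output above.
cols = pd.MultiIndex.from_product([['wall_time'], ['mean', 'max', 'std']])
df = pd.DataFrame(
    [[0.000012, 0.000014, 0.000002],
     [0.000116, 0.000149, 0.000029],
     [0.001323, 0.001714, 0.000339]],
    index=pd.Index([1000, 10000, 100000], name='N'),
    columns=cols,
)

# A tuple indexes through both column levels at once.
means = df[('wall_time', 'mean')]
print(means.loc[100000])
```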

To evaluate the peak memory usage, one can use the neurtu.memit function with the same API,

>>> neurtu.memit(cases(), repeat=3)
        peak_memory
               mean  max  std
N
10000           0.0  0.0  0.0
100000          0.0  0.0  0.0
1000000         0.0  0.0  0.0

More generally, neurtu.Benchmark supports a wide range of evaluation metrics,

>>> bench = neurtu.Benchmark(wall_time=True, cpu_time=True, peak_memory=True)
>>> bench(cases())
         cpu_time  peak_memory  wall_time
N
10000    0.000100          0.0   0.000142
100000   0.001149          0.0   0.001680
1000000  0.013677          0.0   0.018347

including psutil process metrics.
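The difference between the wall_time and cpu_time metrics above can be demonstrated with the standard library alone (a sketch of what these metrics measure, not of neurtu's internals): time.perf_counter tracks elapsed real time, while time.process_time counts only CPU time, so time spent sleeping shows up in the former but not the latter.

```python
import time

start_wall = time.perf_counter()
start_cpu = time.process_time()

time.sleep(0.1)  # advances wall time but consumes almost no CPU

wall = time.perf_counter() - start_wall
cpu = time.process_time() - start_cpu

# Wall time includes the sleep; CPU time does not.
print(wall > cpu)
```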

For more information see the documentation and examples.

License

neurtu is released under the 3-clause BSD license.
