benchopt / benchopt

License: BSD-3-Clause
Making your benchmark of optimization algorithms simple and open


Benchmark repository for optimization

Python 3.6+ is required.

Benchopt is a benchmarking suite for optimization algorithms, built for simplicity, transparency, and reproducibility.

Benchopt is implemented in Python and can run algorithms written in many other programming languages (example). So far, it has been tested with Python, R, Julia, and C/C++ (compiled binaries with a command-line interface). Programs available via conda should be compatible.

Benchopt is run through a command-line interface, as described in the API documentation. Replicating an optimization benchmark should be as simple as:

conda create -n benchopt python
conda activate benchopt
pip install benchopt
git clone https://github.com/benchopt/benchmark_logreg_l2
cd benchmark_logreg_l2
benchopt install -e . -s lightning -s sklearn
benchopt run -e . --config ./config_example.yml

Running these commands produces a benchmark plot for l2-regularized logistic regression:

https://benchopt.github.io/_images/sphx_glr_plot_run_benchmark_001.png
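The objective minimized in this benchmark is the l2-regularized logistic regression loss. As a minimal numpy sketch (the helper name is illustrative, not part of Benchopt's API):

```python
import numpy as np

# Hypothetical helper (not part of Benchopt) computing the objective value:
# f(beta) = sum_i log(1 + exp(-y_i x_i^T beta)) + (lmbd / 2) * ||beta||^2
def logreg_l2_objective(X, y, beta, lmbd):
    return np.log1p(np.exp(-y * (X @ beta))).sum() + 0.5 * lmbd * beta @ beta

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 3))
y = np.sign(rng.standard_normal(10))
beta = np.zeros(3)
# At beta = 0 every sample contributes log(2), so the value is n * log(2).
print(logreg_l2_objective(X, y, beta, lmbd=1.0))  # ~6.931 for n = 10
```

Solvers in the benchmark are compared on how fast they drive this objective down over time.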

See the Available optimization problems below.

Learn how to create a new benchmark using the benchmark template.
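A benchmark's solvers are Python classes exposing a small set of hooks that Benchopt calls with a growing iteration budget. The sketch below mimics that shape on a toy least-squares problem; the stand-in `BaseSolver`, the solver name, and the exact hook signatures are illustrative — follow the benchmark template for the real API.

```python
import numpy as np

# Stand-in for benchopt's BaseSolver so this sketch runs on its own;
# inside a real benchmark you would write `from benchopt import BaseSolver`.
class BaseSolver:
    pass

class Solver(BaseSolver):
    """Toy gradient-descent solver for a least-squares objective."""
    name = "GD-sketch"  # hypothetical name, shown in benchmark results

    def set_objective(self, X, y):
        # The benchmark passes the problem data to the solver through this hook.
        self.X, self.y = X, y

    def run(self, n_iter):
        # Benchopt calls run() with increasing budgets and times each call.
        X, y = self.X, self.y
        step = 1.0 / np.linalg.norm(X, ord=2) ** 2  # 1 / Lipschitz constant
        beta = np.zeros(X.shape[1])
        for _ in range(n_iter):
            beta -= step * (X.T @ (X @ beta - y))
        self.beta = beta

    def get_result(self):
        # The final iterate is read back through this hook.
        return dict(beta=self.beta)

# Quick check on a noiseless synthetic problem.
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 5))
beta_true = rng.standard_normal(5)
solver = Solver()
solver.set_objective(X, X @ beta_true)
solver.run(n_iter=2000)
```

On this noiseless problem the iterate recovers `beta_true` to high accuracy, which is exactly the kind of convergence curve a benchmark records.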

Install

The command-line tool to run the benchmarks can be installed through pip. To allow Benchopt to automatically install solver dependencies, the install needs to be done in a conda environment:

conda create -n benchopt python
conda activate benchopt

To get the latest release, use:

pip install benchopt

To get the latest development version, use:

pip install -U -i https://test.pypi.org/simple/ benchopt

Then, existing benchmarks can be retrieved from git or created locally. For instance, the benchmark for Lasso can be retrieved with:

git clone https://github.com/benchopt/benchmark_lasso

Command line interface

The preferred way to run the benchmarks is through the command line interface. To run the Lasso benchmark on all datasets and with all solvers, run:

benchopt run --env ./benchmark_lasso

To get more details about the different options, run:

benchopt run -h

or read the CLI documentation.
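A run can also be narrowed to specific solvers and datasets. The flag names below are taken from the `benchopt run` options; verify them against `benchopt run -h` for your installed version:

```shell
# Restrict the Lasso benchmark to one solver and one dataset,
# capping the number of runs and repetitions per configuration.
benchopt run ./benchmark_lasso -s sklearn -d simulated --max-runs 100 --n-repetitions 3
```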

Benchopt also provides a Python API described in the API documentation.

Available optimization problems

Ordinary Least Squares (OLS)
Non-Negative Least Squares (NNLS)
LASSO: L1-Regularized Least Squares
LASSO Path
Elastic Net
MCP
L2-Regularized Logistic Regression
L1-Regularized Logistic Regression
L2-Regularized Huber Regression
L1-Regularized Quantile Regression
Linear SVM for Binary Classification
Linear ICA
Approximate Joint Diagonalization (AJD)
1D Total Variation Denoising
2D Total Variation Denoising
ResNet Classification

Citing Benchopt

If you use Benchopt in a scientific publication, please cite the following paper:

@article{benchopt,
   author = {Moreau, Thomas and Massias, Mathurin and Gramfort, Alexandre and Ablin, Pierre
             and Bannier, Pierre-Antoine and Charlier, Benjamin and Dagréou, Mathieu and Dupré la Tour, Tom
             and Durif, Ghislain and F. Dantas, Cassio and Klopfenstein, Quentin
             and Larsson, Johan and Lai, En and Lefort, Tanguy and Malézieux, Benoit
             and Moufad, Badr and T. Nguyen, Binh and Rakotomamonjy, Alain and Ramzi, Zaccharie
             and Salmon, Joseph and Vaiter, Samuel},
   title  = {Benchopt: Reproducible, efficient and collaborative optimization benchmarks},
   year   = {2022},
   url    = {https://arxiv.org/abs/2206.13424}
}