AdrienTaylor / Performance-Estimation-Toolbox

License: MIT License
Code of the Performance Estimation Toolbox (PESTO), whose aim is to ease access to the PEP methodology for performing worst-case analyses of first-order methods in convex and nonconvex optimization. The numerical worst-case analyses from PEP can be performed simply by writing the algorithms just as you would implement them.

Programming Languages

MATLAB (3953 projects)
TeX (3793 projects)

Projects that are alternatives of or similar to Performance-Estimation-Toolbox

PEPit
PEPit is a package enabling computer-assisted worst-case analyses of first-order optimization methods.
Stars: ✭ 41 (+7.89%)
Mutual labels:  semidefinite-programming, worst-case-analyses, first-order-methods
FrankWolfe.jl
Julia implementation for various Frank-Wolfe and Conditional Gradient variants
Stars: ✭ 47 (+23.68%)
Mutual labels:  optimization-algorithms, first-order-methods
irace
Iterated Racing for Automatic Algorithm Configuration
Stars: ✭ 26 (-31.58%)
Mutual labels:  optimization-algorithms
ML-Optimizers-JAX
Toy implementations of some popular ML optimizers using Python/JAX
Stars: ✭ 37 (-2.63%)
Mutual labels:  optimization-algorithms
zoofs
zoofs is a Python library for performing feature selection using a variety of nature-inspired wrapper algorithms, ranging from swarm-intelligence to physics-based to evolutionary. It is an easy-to-use, flexible and powerful tool to reduce your feature size.
Stars: ✭ 142 (+273.68%)
Mutual labels:  optimization-algorithms
csmath-2021
This mathematics course is taught for the first year Ph.D. students of computer science and related areas @zju
Stars: ✭ 30 (-21.05%)
Mutual labels:  optimization-algorithms
adaboost
An implementation of the paper "A Short Introduction to Boosting"
Stars: ✭ 20 (-47.37%)
Mutual labels:  optimization-algorithms
VariationalNeuralAnnealing
A variational implementation of classical and quantum annealing using recurrent neural networks for the purpose of solving optimization problems.
Stars: ✭ 21 (-44.74%)
Mutual labels:  optimization-algorithms
FirstOrderSolvers.jl
Large scale convex optimization solvers in julia
Stars: ✭ 20 (-47.37%)
Mutual labels:  optimization-algorithms
geneal
A genetic algorithm implementation in python
Stars: ✭ 47 (+23.68%)
Mutual labels:  optimization-algorithms
car-racing
A toolkit for testing control and planning algorithms for car racing.
Stars: ✭ 30 (-21.05%)
Mutual labels:  optimization-algorithms
ProxSDP.jl
Semidefinite programming optimization solver
Stars: ✭ 69 (+81.58%)
Mutual labels:  semidefinite-programming
Ascension
A metaheuristic optimization framework
Stars: ✭ 24 (-36.84%)
Mutual labels:  optimization-algorithms
nuxt-prune-html
🔌⚡ Nuxt module to prune HTML before sending it to the browser (it removes elements matching CSS selector(s)); useful for boosting performance by serving different HTML to bots/audits, removing all the scripts with dynamic rendering
Stars: ✭ 69 (+81.58%)
Mutual labels:  optimization-algorithms
HashedExpression
Type-safe modelling DSL, symbolic transformation, and code generation for solving optimization problems.
Stars: ✭ 40 (+5.26%)
Mutual labels:  optimization-algorithms
hyper-engine
Python library for Bayesian hyper-parameters optimization
Stars: ✭ 80 (+110.53%)
Mutual labels:  optimization-algorithms
SGDLibrary
MATLAB/Octave library for stochastic optimization algorithms: Version 1.0.20
Stars: ✭ 165 (+334.21%)
Mutual labels:  optimization-algorithms
LaplacianOpt.jl
A Julia/JuMP Package for Maximizing Algebraic Connectivity of Undirected Weighted Graphs
Stars: ✭ 16 (-57.89%)
Mutual labels:  semidefinite-programming
Learning-Lab-C-Library
This library provides a set of basic functions for different types of deep learning (and other) algorithms in C. This deep learning library will be constantly updated.
Stars: ✭ 20 (-47.37%)
Mutual labels:  optimization-algorithms
a-tour-of-pytorch-optimizers
A tour of different optimization algorithms in PyTorch.
Stars: ✭ 46 (+21.05%)
Mutual labels:  optimization-algorithms

Performance-Estimation-Toolbox (PESTO)

This code comes jointly with the following reference.

[1] Taylor, Adrien B., Julien M. Hendrickx, and François Glineur. "Performance Estimation Toolbox (PESTO): automated worst-case analysis of first-order optimization methods." Proceedings of the 56th IEEE Conference on Decision and Control (CDC 2017).

Date: May 2021

Version: May 2021

Authors & Contributors

Performance Estimation Framework

The toolbox implements the performance estimation approach as developed in the following works:

[2] Taylor, Adrien B., Julien M. Hendrickx, and François Glineur. "Smooth strongly convex interpolation and exact worst-case performance of first-order methods." Mathematical Programming 161.1-2 (2017): 307-345.

[3] Taylor, Adrien B., Julien M. Hendrickx, and François Glineur. "Exact worst-case performance of first-order methods for composite convex optimization." SIAM Journal on Optimization 27.3 (2017): 1283-1313.

Note that the approach of using semidefinite programming for obtaining worst-case guarantees was originally introduced in

[4] Drori, Yoel, and Marc Teboulle. "Performance of first-order methods for smooth convex minimization: a novel approach." Mathematical Programming 145.1-2 (2014): 451-482.
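As a brief sketch of the core idea from [2]-[4] (notation simplified; the exact formulation depends on the problem class and performance criterion): a performance estimation problem searches for the worst function in a given class. For example, for N steps of gradient descent with step size h on an L-smooth convex function,

```latex
\begin{aligned}
\max_{F,\; x_0,\dots,x_N,\; x_\star}\quad & F(x_N) - F(x_\star) \\
\text{s.t.}\quad & F \text{ is } L\text{-smooth and convex},\quad x_\star \in \operatorname*{argmin}_x F(x), \\
& x_{k+1} = x_k - h\,\nabla F(x_k), \qquad k = 0,\dots,N-1, \\
& \|x_0 - x_\star\|^2 \le 1.
\end{aligned}
```

Interpolation conditions [2] make this infinite-dimensional problem equivalent to a finite-dimensional semidefinite program, which the toolbox builds and solves for you.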

Introduction to the toolbox

The document PESTO_CDC2017_FINAL contains the reference paper for the toolbox. It provides a brief, simplified introduction to the theory underlying the toolbox and to its use.

The document UserGuide contains more detailed information and examples about the use of the toolbox.

The general purpose of the toolbox is to help researchers produce worst-case guarantees for their favorite first-order methods.

Setup

Note: This code requires YALMIP along with a suitable SDP solver (e.g., SeDuMi, SDPT3, MOSEK).

Setup with dependencies: guidelines can be found on the wiki

Once YALMIP and an SDP solver are installed (type 'yalmiptest' to check that the installation went fine), the toolbox can be installed by typing

Install_PESTO

in the MATLAB prompt (this only adds the required folders to your MATLAB path). You can now execute the demo files for a step-by-step introduction to the toolbox.

Numerical efficiency

Note that it is in general not reasonable to solve 'pure' PEPs (not involving any relaxation) with more than 300 iterations (in the base case: unconstrained minimization); see our numerical tests on the wiki.
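To get a feel for why, a rough count (a sketch for illustration only, not part of PESTO; exact sizes depend on the problem class): for N gradient steps on a smooth convex function, the PEP reformulation carries a Gram matrix over the N+2 vectors {x0 - xs, g0, ..., gN}, and one interpolation inequality per ordered pair of the N+2 points {x0, ..., xN, xs}. The constraint count therefore grows quadratically in N.

```python
def pep_sdp_size(N):
    """Rough size of the PEP semidefinite program for N gradient steps on a
    smooth convex function (base case: unconstrained minimization).
    Hypothetical helper for illustration only, not part of PESTO."""
    n_points = N + 2                            # iterates x0..xN plus the optimum xs
    gram_dim = N + 2                            # Gram matrix over {x0 - xs, g0, ..., gN}
    n_constraints = n_points * (n_points - 1)   # one interpolation inequality per ordered pair
    return gram_dim, n_constraints

# Quadratic growth of the constraint count is why 'pure' PEPs become
# expensive beyond a few hundred iterations:
print(pep_sdp_size(10))    # (12, 132)
print(pep_sdp_size(300))   # (302, 90902)
```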

Run all examples

You can run all examples by typing

test_install

in the MATLAB prompt. Alternatively, the command

test_install(1)

goes through them a bit more quietly.

Example

The folder Examples contains numerous introductory examples to the toolbox.

The file demo summarizes the currently available step-by-step demonstration files and their purposes. As an example, the first demonstration file can be executed by typing

demo1

in the prompt.

Among the other examples, the following code (see GradientMethod) generates a worst-case scenario for the gradient method applied to a smooth (possibly strongly) convex function.

% In this example, we use a fixed-step gradient method for
% solving the L-smooth convex minimization problem
%   min_x F(x); for notational convenience we denote xs=argmin_x F(x).
%
% We show how to compute the worst-case value of F(xN)-F(xs) when xN is
% obtained by doing N steps of the gradient method starting with an initial
% iterate satisfying ||x0-xs||<=1.


% (0) Initialize an empty PEP
P=pep();

% (1) Set up the objective function
param.mu=0;	% Strong convexity parameter
param.L=1;      % Smoothness parameter

F=P.DeclareFunction('SmoothStronglyConvex',param); % F is the objective function

% (2) Set up the starting point and initial condition
x0=P.StartingPoint();		  % x0 is some starting point
[xs,fs]=F.OptimalPoint(); 	  % xs is an optimal point, and fs=F(xs)
P.InitialCondition((x0-xs)^2<=1); % Add an initial condition ||x0-xs||^2<= 1

% (3) Algorithm
h=1/param.L;		% step size
N=10;		% number of iterations

x=x0;
for i=1:N
    x=x-h*F.gradient(x);
end
xN=x;

% (4) Set up the performance measure
fN=F.value(xN);             % fN = F(xN)
P.PerformanceMetric(fN-fs); % Worst-case evaluated as F(xN)-F(xs)

% (5) Solve the PEP
P.solve()

% (6) Evaluate the output
double(fN-fs)   % worst-case objective function accuracy

% The result should be
% param.L/2/(2*N+1)
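The analytical value mentioned in the final comment can be checked independently of the toolbox (a small sketch; gradient_bound is a hypothetical helper name introduced here for illustration):

```python
def gradient_bound(L, N, R=1.0):
    """Known tight worst-case bound F(xN) - F(xs) <= L*R^2 / (2*(2*N + 1))
    for N steps of gradient descent with step size 1/L on an L-smooth
    convex function, starting with ||x0 - xs|| <= R (see [2], [4])."""
    return L * R**2 / (2 * (2 * N + 1))

# For the example above (L = 1, N = 10, ||x0 - xs|| <= 1), the PEP
# worst-case value returned by the solver should match:
print(gradient_bound(1.0, 10))  # 1/42, i.e. about 0.0238
```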

Acknowledgments

The authors would like to thank Francois Gonze from UCLouvain and Yoel Drori from Google Inc. for their feedback on preliminary versions of the toolbox.

Additional material was incorporated thanks to:

Last but not least, we thank Loic Estève (Inria) for technical support.
