
SimonBlanke / Gradient Free Optimizers

License: MIT
Simple and reliable optimization with local, global, population-based and sequential techniques in numerical discrete search spaces.

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to Gradient Free Optimizers

Cornell Moe
A Python library for the state-of-the-art Bayesian optimization algorithms, with the core implemented in C++.
Stars: ✭ 198 (-72.15%)
Mutual labels:  hyperparameter-optimization, bayesian-optimization, optimization
Hyperopt.jl
Hyperparameter optimization in Julia.
Stars: ✭ 144 (-79.75%)
Mutual labels:  optimization, hyperparameter-optimization, bayesian-optimization
ultraopt
Distributed Asynchronous Hyperparameter Optimization better than HyperOpt.
Stars: ✭ 93 (-86.92%)
Mutual labels:  optimization, hyperparameter-optimization, bayesian-optimization
Chocolate
A fully decentralized hyperparameter optimization framework
Stars: ✭ 112 (-84.25%)
Mutual labels:  hyperparameter-optimization, bayesian-optimization, optimization
Simple
Experimental Global Optimization Algorithm
Stars: ✭ 450 (-36.71%)
Mutual labels:  hyperparameter-optimization, bayesian-optimization, optimization
Mlrmbo
Toolbox for Bayesian Optimization and Model-Based Optimization in R
Stars: ✭ 173 (-75.67%)
Mutual labels:  hyperparameter-optimization, bayesian-optimization, optimization
Scikit Optimize
Sequential model-based optimization with a `scipy.optimize` interface
Stars: ✭ 2,258 (+217.58%)
Mutual labels:  bayesian-optimization, optimization, hyperparameter-optimization
Hyperactive
A hyperparameter optimization and data collection toolbox for convenient and fast prototyping of machine-learning models.
Stars: ✭ 182 (-74.4%)
Mutual labels:  hyperparameter-optimization, bayesian-optimization, optimization
Hyperparameter Optimization Of Machine Learning Algorithms
Implementation of hyperparameter optimization/tuning methods for machine learning & deep learning models (easy&clear)
Stars: ✭ 516 (-27.43%)
Mutual labels:  hyperparameter-optimization, bayesian-optimization, optimization
osprey
🦅Hyperparameter optimization for machine learning pipelines 🦅
Stars: ✭ 71 (-90.01%)
Mutual labels:  optimization, hyperparameter-optimization
syne-tune
Large scale and asynchronous Hyperparameter Optimization at your fingertip.
Stars: ✭ 105 (-85.23%)
Mutual labels:  hyperparameter-optimization, bayesian-optimization
hyper-engine
Python library for Bayesian hyper-parameters optimization
Stars: ✭ 80 (-88.75%)
Mutual labels:  hyperparameter-optimization, bayesian-optimization
differential-privacy-bayesian-optimization
This repo contains the underlying code for all the experiments from the paper: "Automatic Discovery of Privacy-Utility Pareto Fronts"
Stars: ✭ 22 (-96.91%)
Mutual labels:  hyperparameter-optimization, bayesian-optimization
mindware
An efficient open-source AutoML system for automating machine learning lifecycle, including feature engineering, neural architecture search, and hyper-parameter tuning.
Stars: ✭ 34 (-95.22%)
Mutual labels:  hyperparameter-optimization, bayesian-optimization
mango
Parallel Hyperparameter Tuning in Python
Stars: ✭ 241 (-66.1%)
Mutual labels:  hyperparameter-optimization, bayesian-optimization
mlrHyperopt
Easy Hyper Parameter Optimization with mlr and mlrMBO.
Stars: ✭ 30 (-95.78%)
Mutual labels:  optimization, hyperparameter-optimization
Ray
An open source framework that provides a simple, universal API for building distributed applications. Ray is packaged with RLlib, a scalable reinforcement learning library, and Tune, a scalable hyperparameter tuning library.
Stars: ✭ 18,547 (+2508.58%)
Mutual labels:  hyperparameter-optimization, optimization
CamelsOptimizer
Yes, it's a camel case.
Stars: ✭ 17 (-97.61%)
Mutual labels:  hyperparameter-optimization, bayesian-optimization
Auto Sklearn
Automated Machine Learning with scikit-learn
Stars: ✭ 5,916 (+732.07%)
Mutual labels:  hyperparameter-optimization, bayesian-optimization
Hpbandster
a distributed Hyperband implementation on Steroids
Stars: ✭ 456 (-35.86%)
Mutual labels:  hyperparameter-optimization, bayesian-optimization





Simple and reliable optimization with local, global, population-based and sequential techniques in numerical discrete search spaces.



Introduction

Gradient-Free-Optimizers provides a collection of easy-to-use optimization techniques whose objective function only requires an arbitrary score to be maximized. This makes gradient-free methods capable of solving various optimization problems, including:

  • Optimizing arbitrary mathematical functions.
  • Fitting multiple Gaussian distributions to data.
  • Hyperparameter-optimization of machine-learning methods.

Gradient-Free-Optimizers is the optimization backend of Hyperactive (in v3.0.0 and higher), but it can also be used by itself as a leaner and simpler optimization toolkit.
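
As an illustration of the hyperparameter-optimization use case, the objective function can simply wrap a cross-validated model score. The scikit-learn model, dataset, and parameter ranges below are illustrative assumptions, not part of Gradient-Free-Optimizers; only the optimizer API shown later in this README is taken from the library.

    import numpy as np
    from sklearn.datasets import load_diabetes
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeRegressor

    from gradient_free_optimizers import RandomSearchOptimizer

    X, y = load_diabetes(return_X_y=True)

    def objective_function(para):
        # higher cross-validation score = better hyperparameters
        model = DecisionTreeRegressor(
            max_depth=int(para["max_depth"]),
            min_samples_split=int(para["min_samples_split"]),
        )
        return cross_val_score(model, X, y, cv=3).mean()

    # discrete numerical search space, as required by the library
    search_space = {
        "max_depth": np.arange(2, 15, 1),
        "min_samples_split": np.arange(2, 20, 1),
    }

    opt = RandomSearchOptimizer(search_space)
    opt.search(objective_function, n_iter=30)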





Main features

  • Easy to use:

    Simple API-design

    You can optimize anything that can be defined in a Python function. For example, a simple parabola function:

    def objective_function(para):
        score = para["x1"] * para["x1"]
        return -score
    

    Define where to search via numpy ranges:

    import numpy as np

    search_space = {
        "x1": np.arange(0, 5, 0.1),
    }
    

    That's all the information the algorithm needs to search for the maximum of the objective function:

    from gradient_free_optimizers import RandomSearchOptimizer
    
    opt = RandomSearchOptimizer(search_space)
    opt.search(objective_function, n_iter=100000)
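
    After the search has finished, the best score and its position in the search space can be read from the optimizer object. The attribute names below follow the library's documented results interface, but treat them as assumptions if your installed version differs:

    print(opt.best_score)   # best objective-function value found
    print(opt.best_para)    # search-space position of that value
    print(opt.search_data)  # per-iteration results (a pandas DataFrame)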
    
    Receive prepared information about ongoing and finished optimization runs

    During the optimization you will receive ongoing information in a progress bar:

    • current best score
    • the position in the search space of the current best score
    • the iteration when the current best score was found
    • other progress information native to tqdm
  • High performance:

    Modern optimization techniques

    Gradient-Free-Optimizers provides not just meta-heuristic optimization methods but also sequential model-based optimizers like Bayesian optimization, which delivers good results for expensive objective functions such as deep-learning models.

    Lightweight backend

    Even for the very simple parabola function, the optimization time is about 60% of the entire iteration time when optimizing with random search. This shows that (despite all its features) Gradient-Free-Optimizers has an efficient optimization backend without any unnecessary slowdown.

    Save time with memory dictionary

    By default, Gradient-Free-Optimizers will look up the current position in a memory dictionary before evaluating the objective function.

    • If the position is not in the dictionary, the objective function will be evaluated and the position and score are saved in the dictionary.

    • If a position is already saved in the dictionary, Gradient-Free-Optimizers will simply extract the score from it instead of evaluating the objective function. This avoids re-evaluating computationally expensive objective functions (machine- or deep-learning models) and therefore saves time, as sketched below.
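
    The snippet below is a conceptual sketch of this lookup using a plain Python dictionary; it illustrates the idea only and is not the library's internal implementation:

    memory = {}

    def evaluate_with_memory(objective_function, position):
        # use a hashable representation of the search-space position as key
        key = tuple(sorted(position.items()))
        if key in memory:
            return memory[key]          # skip the expensive evaluation
        score = objective_function(position)
        memory[key] = score             # remember the result for later lookups
        return score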

  • High reliability:

    Extensive testing

    Gradient-Free-Optimizers is extensively tested with more than 400 tests in 2500 lines of test code. This includes testing of:

    • Each optimization algorithm
    • Each optimization parameter
    • All attributes that are part of the public API

    Performance tests for each optimizer

    Each optimization algorithm must perform above a certain threshold to be included. Poorly performing algorithms are reworked or scrapped.
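
    A minimal version of such a performance test could look like the following sketch; the optimizer class, iteration count, and score threshold are illustrative assumptions, not the project's actual test suite:

    import numpy as np
    from gradient_free_optimizers import HillClimbingOptimizer

    def test_hill_climbing_finds_parabola_maximum():
        # the maximum of -x^2 is at x = 0 with a score of 0
        def objective_function(para):
            return -para["x"] * para["x"]

        search_space = {"x": np.arange(-10, 10, 0.1)}

        opt = HillClimbingOptimizer(search_space)
        opt.search(objective_function, n_iter=300)

        # best_score is assumed from the library's results interface
        assert opt.best_score > -0.5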


Optimization strategies:

Gradient-Free-Optimizers supports a variety of optimization algorithms, which can make choosing the right algorithm a tedious endeavor. The GIFs in this section give a visual representation of how the different optimization algorithms explore the search space and exploit the collected information about it, for a convex and a non-convex objective function.

• Hill Climbing: Evaluates the score of n neighbours in an epsilon environment and moves to the best one.
• Repulsing Hill Climbing: Hill climbing iteration + increases epsilon by a factor if no better neighbour was found.
• Simulated Annealing: Hill climbing iteration + accepts moving to worse positions with decreasing probability over time (transition probability).
• Random Search: Moves to random positions in each iteration.
• Random Restart Hill Climbing: Hill climbing + moves to a random position after n iterations.
• Random Annealing: Hill climbing + large epsilon that decreases over time.
• Parallel Tempering: Population of n simulated annealers, which occasionally swap transition probabilities.
• Particle Swarm Optimization: Population of n particles attracting each other and moving towards the best particle.
• Evolution Strategy: Population of n hill climbers occasionally mixing positional information.
• Bayesian Optimization: Gaussian process fitting to explored positions and predicting promising new positions.
• Tree of Parzen Estimators: Kernel density estimators fitting to good and bad explored positions and predicting promising new positions.
• Decision Tree Optimizer: Ensemble of decision trees fitting to explored positions and predicting promising new positions.
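
All of these strategies share the same interface, so switching the algorithm is usually a one-line change. The class name below follows the library's naming scheme for the strategies listed above; verify it against the installed version:

    import numpy as np
    from gradient_free_optimizers import BayesianOptimizer

    def objective_function(para):
        return -para["x"] * para["x"]

    search_space = {"x": np.arange(-10, 10, 0.1)}

    # same search interface as RandomSearchOptimizer, only the class changes
    opt = BayesianOptimizer(search_space)
    opt.search(objective_function, n_iter=50)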