SimonBlanke / Gradient Free Optimizers
Simple and reliable optimization with local, global, population-based and sequential techniques in numerical discrete search spaces.
Introduction
Gradient-Free-Optimizers provides a collection of easy-to-use optimization techniques whose objective function only needs to return a score that gets maximized. This makes gradient-free methods capable of solving various optimization problems, including:
- Optimizing arbitrary mathematical functions.
- Fitting multiple Gaussian distributions to data.
- Hyperparameter optimization of machine-learning models.
Gradient-Free-Optimizers is the optimization backend of Hyperactive (in v3.0.0 and higher) but it can also be used by itself as a leaner and simpler optimization toolkit.
Main features
- Easy to use:
Simple API design
You can optimize anything that can be defined in a Python function. For example, a simple parabola function:
```python
def objective_function(para):
    score = para["x1"] * para["x1"]
    return -score
```
Define where to search via NumPy ranges:
```python
import numpy as np

search_space = {
    "x1": np.arange(0, 5, 0.1),
}
```
That's all the information the algorithm needs to search for the maximum of the objective function:
```python
from gradient_free_optimizers import RandomSearchOptimizer

opt = RandomSearchOptimizer(search_space)
opt.search(objective_function, n_iter=100000)
```
Receive prepared information about ongoing and finished optimization runs
During the optimization you will receive ongoing information in a progress bar:
- current best score
- the position in the search space of the current best score
- the iteration when the current best score was found
- other progress information native to tqdm
- High performance:
Modern optimization techniques
Gradient-Free-Optimizers provides not just meta-heuristic optimization methods but also sequential model-based optimizers like Bayesian optimization, which delivers good results for expensive objective functions like deep-learning models.
Lightweight backend
Even for the very simple parabola function, the optimization time is about 60% of the entire iteration time when optimizing with random search. This shows that (despite all its features) Gradient-Free-Optimizers has an efficient optimization backend without any unnecessary slowdown.
Save time with memory dictionary
By default, Gradient-Free-Optimizers looks up the current position in a memory dictionary before evaluating the objective function.
- If the position is not in the dictionary, the objective function is evaluated and the position and score are saved in the dictionary.
- If the position is already in the dictionary, Gradient-Free-Optimizers extracts the saved score instead of evaluating the objective function again. This avoids reevaluating computationally expensive objective functions (e.g. machine- or deep-learning models) and therefore saves time.
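The idea behind the memory dictionary can be sketched in plain Python (a concept illustration, not the library's actual internals):

```python
# positions act as dictionary keys, so a position that was already
# evaluated skips the expensive objective-function call
evaluations = 0  # counts real objective-function calls

def expensive_objective(x):
    global evaluations
    evaluations += 1
    return -x * x  # stand-in for an expensive model evaluation

memory = {}

def evaluate(position):
    if position not in memory:
        # unknown position -> evaluate and store the position/score pair
        memory[position] = expensive_objective(position)
    # known position -> reuse the stored score, no re-evaluation
    return memory[position]

for pos in [1.0, 2.0, 1.0, 3.0, 2.0]:
    evaluate(pos)

print(evaluations)  # 3 -- only the unique positions were actually evaluated
```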
- High reliability:
Extensive testing
Gradient-Free-Optimizers is extensively tested with more than 400 tests in 2500 lines of test code. This includes the testing of:
- Each optimization algorithm
- Each optimization parameter
- All attributes that are part of the public API
Performance test for each optimizer
Each optimization algorithm must perform above a certain threshold to be included. Poorly performing algorithms are reworked or scrapped.
Optimization strategies:
Gradient-Free-Optimizers supports a variety of optimization algorithms, which can make choosing the right algorithm a tedious endeavor. The GIFs in this section give a visual representation of how the different optimization algorithms explore the search space and exploit the collected information about it, for convex and non-convex objective functions.