
CyberAgentAILab / cmaes

License: MIT
A Python library for the CMA Evolution Strategy.

Programming Languages

  • python: 139,335 projects (#7 most used programming language)
  • shell: 77,523 projects

Projects that are alternatives to or similar to cmaes

tuneta
Intelligently optimizes technical indicators and optionally selects the least intercorrelated for use in machine learning models
Stars: ✭ 77 (-55.75%)
Mutual labels:  hyperparameter-optimization, optuna
allennlp-optuna
⚡️ AllenNLP plugin for adding subcommands to use Optuna, making hyperparameter optimization easy
Stars: ✭ 33 (-81.03%)
Mutual labels:  hyperparameter-optimization, optuna
optkeras
OptKeras: wrapper around Keras and Optuna for hyperparameter optimization
Stars: ✭ 29 (-83.33%)
Mutual labels:  hyperparameter-optimization, optuna
optuna-dashboard
Real-time Web Dashboard for Optuna.
Stars: ✭ 240 (+37.93%)
Mutual labels:  hyperparameter-optimization, optuna
NaiveNASflux.jl
Your local Flux surgeon
Stars: ✭ 20 (-88.51%)
Mutual labels:  hyperparameter-optimization
naturalselection
A general-purpose pythonic genetic algorithm.
Stars: ✭ 17 (-90.23%)
Mutual labels:  hyperparameter-optimization
neptune-client
📒 Experiment tracking tool and model registry
Stars: ✭ 348 (+100%)
Mutual labels:  optuna
keras-hypetune
A friendly Python package for Keras hyperparameter tuning, based only on NumPy and hyperopt.
Stars: ✭ 47 (-72.99%)
Mutual labels:  hyperparameter-optimization
cli
Polyaxon Core Client & CLI to streamline MLOps
Stars: ✭ 18 (-89.66%)
Mutual labels:  hyperparameter-optimization
textlearnR
A simple collection of well working NLP models (Keras, H2O, StarSpace) tuned and benchmarked on a variety of datasets.
Stars: ✭ 16 (-90.8%)
Mutual labels:  hyperparameter-optimization
ProxGradPytorch
PyTorch implementation of Proximal Gradient Algorithms a la Parikh and Boyd (2014). Useful for Auto-Sizing (Murray and Chiang 2015, Murray et al. 2019).
Stars: ✭ 28 (-83.91%)
Mutual labels:  hyperparameter-optimization
differential-privacy-bayesian-optimization
This repo contains the underlying code for all the experiments from the paper: "Automatic Discovery of Privacy-Utility Pareto Fronts"
Stars: ✭ 22 (-87.36%)
Mutual labels:  hyperparameter-optimization
mltb
Machine Learning Tool Box
Stars: ✭ 25 (-85.63%)
Mutual labels:  hyperparameter-optimization
bbai
Set model hyperparameters using deterministic, exact algorithms.
Stars: ✭ 19 (-89.08%)
Mutual labels:  hyperparameter-optimization
autotune
Autonomous Performance Tuning for Kubernetes!
Stars: ✭ 84 (-51.72%)
Mutual labels:  hyperparameter-optimization
ultraopt
Distributed asynchronous hyperparameter optimization, better than HyperOpt.
Stars: ✭ 93 (-46.55%)
Mutual labels:  hyperparameter-optimization
hone
A shell-friendly hyperparameter search tool inspired by Optuna
Stars: ✭ 17 (-90.23%)
Mutual labels:  hyperparameter-optimization
randopt
Streamlined machine learning experiment management.
Stars: ✭ 108 (-37.93%)
Mutual labels:  hyperparameter-optimization
ml-pipeline
Using Kafka-Python to illustrate a ML production pipeline
Stars: ✭ 90 (-48.28%)
Mutual labels:  hyperparameter-optimization
miraiml
MiraiML: asynchronous, autonomous and continuous Machine Learning in Python
Stars: ✭ 23 (-86.78%)
Mutual labels:  hyperparameter-optimization

CMA-ES


A lightweight implementation of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) [1].
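
In each generation, CMA-ES samples a population of candidate solutions from a multivariate normal distribution and adapts the distribution's parameters based on the ranked objective values (standard CMA-ES; see the tutorial in [1]):

x_k \sim m + \sigma \, \mathcal{N}(0, C), \qquad k = 1, \ldots, \lambda

where m is the mean vector, \sigma the step-size, C the covariance matrix, and \lambda the population size.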

[Figure: visualization of CMA-ES on the Six-Hump Camel function]


Installation

Python 3.6 or later is supported. Install from PyPI:

$ pip install cmaes

Or install via conda-forge:

$ conda install -c conda-forge cmaes
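
To confirm the installation, import the package and print its version (a minimal sanity check; this assumes the package exposes `__version__`, which recent releases do):

$ python -c "import cmaes; print(cmaes.__version__)"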

Usage

This library provides an "ask-and-tell" style interface.

import numpy as np
from cmaes import CMA

def quadratic(x1, x2):
    return (x1 - 3) ** 2 + (10 * (x2 + 2)) ** 2

if __name__ == "__main__":
    optimizer = CMA(mean=np.zeros(2), sigma=1.3)

    for generation in range(50):
        solutions = []
        for _ in range(optimizer.population_size):
            x = optimizer.ask()
            value = quadratic(x[0], x[1])
            solutions.append((x, value))
            print(f"#{generation} {value} (x1={x[0]}, x2 = {x[1]})")
        optimizer.tell(solutions)
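
Note that nothing in the loop above records the best solution found; if you need it after optimization, a little manual bookkeeping is enough. Here is a sketch using only the `ask`/`tell` interface shown above:

import numpy as np
from cmaes import CMA

def quadratic(x1, x2):
    return (x1 - 3) ** 2 + (10 * (x2 + 2)) ** 2

if __name__ == "__main__":
    optimizer = CMA(mean=np.zeros(2), sigma=1.3)
    best_value, best_x = float("inf"), None

    for generation in range(50):
        solutions = []
        for _ in range(optimizer.population_size):
            x = optimizer.ask()
            value = quadratic(x[0], x[1])
            solutions.append((x, value))
            # Track the incumbent manually; the loop above does not store it.
            if value < best_value:
                best_value, best_x = value, x
        optimizer.tell(solutions)

    print(f"best value: {best_value} (x1={best_x[0]}, x2={best_x[1]})")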

You can also use this library via Optuna [2], an automatic hyperparameter optimization framework. Optuna's built-in CMA-ES sampler, which uses this library under the hood, has been available since v1.3.0 and became stable in v2.0.0. See the documentation or the v2.0 release blog for more details.

import optuna

def objective(trial: optuna.Trial):
    x1 = trial.suggest_uniform("x1", -4, 4)
    x2 = trial.suggest_uniform("x2", -4, 4)
    return (x1 - 3) ** 2 + (10 * (x2 + 2)) ** 2

if __name__ == "__main__":
    sampler = optuna.samplers.CmaEsSampler()
    study = optuna.create_study(sampler=sampler)
    study.optimize(objective, n_trials=250)
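
After `study.optimize()` returns, the best parameters and objective value found so far are available on the study object; this is standard Optuna API, so the following lines can be appended inside the `__main__` block above:

    print(study.best_params)  # best (x1, x2) found by the sampler
    print(study.best_value)   # corresponding objective value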

CMA-ES variants

Warm Starting CMA-ES [3]

Warm Starting CMA-ES is a method that transfers prior knowledge from similar HPO tasks through the initialization of CMA-ES. Below is the result of an experiment tuning LightGBM on Kaggle's Toxic Comment Classification Challenge data, a multilabel classification dataset. In this benchmark, 10% of the full dataset is used as the source task and the full dataset as the target task. Please refer to the paper and/or https://github.com/c-bata/benchmark-warm-starting-cmaes for details of the experiment settings.

[Figure: benchmark result of WS-CMA-ES tuning LightGBM on the Toxic Comment data]

Source code
import numpy as np
from cmaes import CMA, get_warm_start_mgd

def source_task(x1: float, x2: float) -> float:
    b = 0.4
    return (x1 - b) ** 2 + (x2 - b) ** 2

def target_task(x1: float, x2: float) -> float:
    b = 0.6
    return (x1 - b) ** 2 + (x2 - b) ** 2

if __name__ == "__main__":
    # Generate solutions from a source task
    source_solutions = []
    for _ in range(1000):
        x = np.random.random(2)
        value = source_task(x[0], x[1])
        source_solutions.append((x, value))

    # Estimate a promising distribution of the source task,
    # then generate parameters of the multivariate Gaussian distribution.
    ws_mean, ws_sigma, ws_cov = get_warm_start_mgd(
        source_solutions, gamma=0.1, alpha=0.1
    )
    optimizer = CMA(mean=ws_mean, sigma=ws_sigma, cov=ws_cov)

    # Run WS-CMA-ES
    print(" g    f(x1,x2)     x1      x2  ")
    print("===  ==========  ======  ======")
    while True:
        solutions = []
        for _ in range(optimizer.population_size):
            x = optimizer.ask()
            value = target_task(x[0], x[1])
            solutions.append((x, value))
            print(
                f"{optimizer.generation:3d}  {value:10.5f}"
                f"  {x[0]:6.2f}  {x[1]:6.2f}"
            )
        optimizer.tell(solutions)

        if optimizer.should_stop():
            break

The full source code is available here.

Separable CMA-ES [4]

sep-CMA-ES is a variant of CMA-ES that constrains the covariance matrix to be diagonal. Because this reduces the number of parameters, the learning rate for the covariance matrix can be increased. Consequently, it outperforms standard CMA-ES on separable functions.

Source code
import numpy as np
from cmaes import SepCMA

def ellipsoid(x):
    n = len(x)
    if n < 2:
        raise ValueError("dimension must be greater than one")
    return sum([(1000 ** (i / (n - 1)) * x[i]) ** 2 for i in range(n)])

if __name__ == "__main__":
    dim = 40
    optimizer = SepCMA(mean=3 * np.ones(dim), sigma=2.0)
    print(" evals    f(x)")
    print("======  ==========")

    evals = 0
    while True:
        solutions = []
        for _ in range(optimizer.population_size):
            x = optimizer.ask()
            value = ellipsoid(x)
            evals += 1
            solutions.append((x, value))
            if evals % 3000 == 0:
                print(f"{evals:5d}  {value:10.5f}")
        optimizer.tell(solutions)

        if optimizer.should_stop():
            break

Full source code is available here.

IPOP-CMA-ES [5]

IPOP-CMA-ES is a restart strategy that re-runs CMA-ES with an increased population size after each stop, as shown below.

[Figure: visualization of IPOP-CMA-ES on the Himmelblau function]

Source code
import math
import numpy as np
from cmaes import CMA

def ackley(x1, x2):
    # https://www.sfu.ca/~ssurjano/ackley.html
    return (
        -20 * math.exp(-0.2 * math.sqrt(0.5 * (x1 ** 2 + x2 ** 2)))
        - math.exp(0.5 * (math.cos(2 * math.pi * x1) + math.cos(2 * math.pi * x2)))
        + math.e + 20
    )

if __name__ == "__main__":
    bounds = np.array([[-32.768, 32.768], [-32.768, 32.768]])
    lower_bounds, upper_bounds = bounds[:, 0], bounds[:, 1]

    mean = lower_bounds + (np.random.rand(2) * (upper_bounds - lower_bounds))
    sigma = 32.768 * 2 / 5  # 1/5 of the domain width
    optimizer = CMA(mean=mean, sigma=sigma, bounds=bounds, seed=0)

    for generation in range(200):
        solutions = []
        for _ in range(optimizer.population_size):
            x = optimizer.ask()
            value = ackley(x[0], x[1])
            solutions.append((x, value))
            print(f"#{generation} {value} (x1={x[0]}, x2 = {x[1]})")
        optimizer.tell(solutions)

        if optimizer.should_stop():
            # popsize multiplied by 2 (or 3) before each restart.
            popsize = optimizer.population_size * 2
            mean = lower_bounds + (np.random.rand(2) * (upper_bounds - lower_bounds))
            optimizer = CMA(mean=mean, sigma=sigma, bounds=bounds, population_size=popsize)
            print(f"Restart CMA-ES with popsize={popsize}")

Full source code is available here.

BIPOP-CMA-ES [6]

BIPOP-CMA-ES applies two interlaced restart strategies, one with an increasing population size and one with varying small population sizes.

[Figure: visualization of BIPOP-CMA-ES on the Himmelblau function]

Source code
import math
import numpy as np
from cmaes import CMA

def ackley(x1, x2):
    # https://www.sfu.ca/~ssurjano/ackley.html
    return (
        -20 * math.exp(-0.2 * math.sqrt(0.5 * (x1 ** 2 + x2 ** 2)))
        - math.exp(0.5 * (math.cos(2 * math.pi * x1) + math.cos(2 * math.pi * x2)))
        + math.e + 20
    )

if __name__ == "__main__":
    bounds = np.array([[-32.768, 32.768], [-32.768, 32.768]])
    lower_bounds, upper_bounds = bounds[:, 0], bounds[:, 1]

    mean = lower_bounds + (np.random.rand(2) * (upper_bounds - lower_bounds))
    sigma = 32.768 * 2 / 5  # 1/5 of the domain width
    optimizer = CMA(mean=mean, sigma=sigma, bounds=bounds, seed=0)

    n_restarts = 0  # Small restarts do not count toward n_restarts.
    small_n_eval, large_n_eval = 0, 0
    popsize0 = optimizer.population_size
    inc_popsize = 2

    # The initial run uses the "normal" population size; it is
    # the large population before the first doubling, but its
    # budget is accounted for as if it were a small population.
    poptype = "small"

    for generation in range(200):
        solutions = []
        for _ in range(optimizer.population_size):
            x = optimizer.ask()
            value = ackley(x[0], x[1])
            solutions.append((x, value))
            print(f"#{generation} {value} (x1={x[0]}, x2 = {x[1]})")
        optimizer.tell(solutions)

        if optimizer.should_stop():
            n_eval = optimizer.population_size * optimizer.generation
            if poptype == "small":
                small_n_eval += n_eval
            else:  # poptype == "large"
                large_n_eval += n_eval

            if small_n_eval < large_n_eval:
                poptype = "small"
                popsize_multiplier = inc_popsize ** n_restarts
                popsize = math.floor(
                    popsize0 * popsize_multiplier ** (np.random.uniform() ** 2)
                )
            else:
                poptype = "large"
                n_restarts += 1
                popsize = popsize0 * (inc_popsize ** n_restarts)

            mean = lower_bounds + (np.random.rand(2) * (upper_bounds - lower_bounds))
            optimizer = CMA(
                mean=mean,
                sigma=sigma,
                bounds=bounds,
                population_size=popsize,
            )
            print("Restart CMA-ES with popsize={} ({})".format(popsize, poptype))

Full source code is available here.

Benchmark results

[Figures: benchmark results on the Rosenbrock function and the Six-Hump Camel function]

This implementation (green) is comparable to pycma (blue). See the benchmark for details.

Links

Other libraries:

I respect all libraries involved in CMA-ES.

  • pycma: The most famous CMA-ES implementation, by Nikolaus Hansen.
  • pymoo: Multi-objective optimization in Python.

References:

[1] N. Hansen, "The CMA Evolution Strategy: A Tutorial", arXiv:1604.00772, 2016.
[2] T. Akiba, S. Sano, T. Yanase, T. Ohta, M. Koyama, "Optuna: A Next-generation Hyperparameter Optimization Framework", KDD, 2019.
[3] M. Nomura, S. Watanabe, Y. Akimoto, Y. Ozaki, M. Onishi, "Warm Starting CMA-ES for Hyperparameter Optimization", AAAI, 2021.
[4] R. Ros, N. Hansen, "A Simple Modification in CMA-ES Achieving Linear Time and Space Complexity", PPSN, 2008.
[5] A. Auger, N. Hansen, "A Restart CMA Evolution Strategy with Increasing Population Size", CEC, 2005.
[6] N. Hansen, "Benchmarking a BI-population CMA-ES on the BBOB-2009 Function Testbed", GECCO Workshop, 2009.
