openmole / Mgo

Purely functional genetic algorithms for multi-objective optimisation

Programming Languages

scala

Projects that are alternatives of or similar to Mgo

Hyperparameter Optimization Of Machine Learning Algorithms
Implementation of hyperparameter optimization/tuning methods for machine learning & deep learning models (easy&clear)
Stars: ✭ 516 (+719.05%)
Mutual labels:  genetic-algorithm, hyperparameter-optimization, hyperparameter-tuning
naturalselection
A general-purpose pythonic genetic algorithm.
Stars: ✭ 17 (-73.02%)
Mutual labels:  genetic-algorithm, hyperparameter-optimization, hyperparameter-tuning
syne-tune
Large scale and asynchronous Hyperparameter Optimization at your fingertip.
Stars: ✭ 105 (+66.67%)
Mutual labels:  hyperparameter-optimization, hyperparameter-tuning
bbopt
Black box hyperparameter optimization made easy.
Stars: ✭ 66 (+4.76%)
Mutual labels:  hyperparameter-optimization, hyperparameter-tuning
Sherpa
Hyperparameter optimization that enables researchers to experiment, visualize, and scale quickly.
Stars: ✭ 289 (+358.73%)
Mutual labels:  hyperparameter-optimization, hyperparameter-tuning
differential-privacy-bayesian-optimization
This repo contains the underlying code for all the experiments from the paper: "Automatic Discovery of Privacy-Utility Pareto Fronts"
Stars: ✭ 22 (-65.08%)
Mutual labels:  hyperparameter-optimization, hyperparameter-tuning
maggy
Distribution transparent Machine Learning experiments on Apache Spark
Stars: ✭ 83 (+31.75%)
Mutual labels:  hyperparameter-optimization, hyperparameter-tuning
mindware
An efficient open-source AutoML system for automating machine learning lifecycle, including feature engineering, neural architecture search, and hyper-parameter tuning.
Stars: ✭ 34 (-46.03%)
Mutual labels:  hyperparameter-optimization, hyperparameter-tuning
scikit-hyperband
A scikit-learn compatible implementation of hyperband
Stars: ✭ 68 (+7.94%)
Mutual labels:  hyperparameter-optimization, hyperparameter-tuning
Hyperband
Tuning hyperparams fast with Hyperband
Stars: ✭ 555 (+780.95%)
Mutual labels:  hyperparameter-optimization, hyperparameter-tuning
Smac3
Sequential Model-based Algorithm Configuration
Stars: ✭ 564 (+795.24%)
Mutual labels:  hyperparameter-optimization, hyperparameter-tuning
Hyperparameter hunter
Easy hyperparameter optimization and automatic result saving across machine learning algorithms and libraries
Stars: ✭ 648 (+928.57%)
Mutual labels:  hyperparameter-optimization, hyperparameter-tuning
mango
Parallel Hyperparameter Tuning in Python
Stars: ✭ 241 (+282.54%)
Mutual labels:  hyperparameter-optimization, hyperparameter-tuning
mltb
Machine Learning Tool Box
Stars: ✭ 25 (-60.32%)
Mutual labels:  hyperparameter-optimization, hyperparameter-tuning
mlr3tuning
Hyperparameter optimization package of the mlr3 ecosystem
Stars: ✭ 44 (-30.16%)
Mutual labels:  hyperparameter-optimization, hyperparameter-tuning
polystores
A library for performing hyperparameter optimization
Stars: ✭ 48 (-23.81%)
Mutual labels:  hyperparameter-optimization, hyperparameter-tuning
Lale
Library for Semi-Automated Data Science
Stars: ✭ 198 (+214.29%)
Mutual labels:  hyperparameter-optimization, hyperparameter-tuning
Hypernets
A General Automated Machine Learning framework to simplify the development of End-to-end AutoML toolkits in specific domains.
Stars: ✭ 221 (+250.79%)
Mutual labels:  hyperparameter-optimization, hyperparameter-tuning
Neuraxle
A Sklearn-like Framework for Hyperparameter Tuning and AutoML in Deep Learning projects. Finally have the right abstractions and design patterns to properly do AutoML. Let your pipeline steps have hyperparameter spaces. Enable checkpoints to cut duplicate calculations. Go from research to production environment easily.
Stars: ✭ 377 (+498.41%)
Mutual labels:  hyperparameter-optimization, hyperparameter-tuning
Auto Sklearn
Automated Machine Learning with scikit-learn
Stars: ✭ 5,916 (+9290.48%)
Mutual labels:  hyperparameter-optimization, hyperparameter-tuning

MGO

MGO is a purely functional Scala library for evolutionary / genetic algorithms:

  • enforces immutability,
  • exposes a modular and extensible architecture,
  • implements state-of-the-art algorithms,
  • handles noisy (stochastic) fitness functions,
  • implements auto-adaptative algorithms,
  • is designed with distributed computing in mind, for integration with OpenMOLE.

MGO implements NSGAII, NSGA3, CP (Calibration Profile), PSE (Pattern Search Experiment), OSE (Antecedent research), Niched Evolution, and ABC (Bayesian Calibration).

Licence

MGO is licensed under the GNU GPLv3 software licence.

Example

Define a problem, for instance the multi-modal multi-objective ZDT4 benchmark:

  import mgo.evolution._
  import scala.math._

  object zdt4 {

    // Continuous part of the genome: size variables, each bounded in [0.0, 5.0]
    def continuous(size: Int) = Vector.fill(size)(C(0.0, 5.0))

    // The two ZDT4 objectives; the discrete part of the genome (d) is unused
    def compute(genome: Vector[Double], d: Vector[Int]): Vector[Double] = {
      val genomeSize = genome.size

      def g(x: Seq[Double]) = 1 + 10 * (genomeSize - 1) + x.map { i => pow(i, 2) - 10 * cos(4 * Pi * i) }.sum

      def f(x: Seq[Double]) = {
        val gx = g(x)
        gx * (1 - sqrt(genome(0) / gx))
      }

      Vector(genome(0), f(genome.tail))
    }

  }
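
As a quick sanity check, the objective function can be evaluated directly on a genome of the right size (a usage sketch; the discrete part of the genome is unused, so an empty vector is passed):

  // Evaluate the two ZDT4 objectives for an arbitrary 10-dimensional genome
  val objectives = zdt4.compute(Vector.fill(10)(1.0), Vector.empty)
  println(objectives)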

Define the optimisation algorithm, for instance NSGAII:

  import mgo.evolution._
  import mgo.evolution.algorithm._
  
  // For zdt4
  import mgo.test._

  val nsga2 =
    NSGA2(
      mu = 100,
      lambda = 100,
      fitness = zdt4.compute,
      continuous = zdt4.continuous(10))

Run the optimisation:

  def evolution =
    nsga2.
      until(afterGeneration(1000)).
      trace((s, is) => println(s.generation))

  val (finalState, finalPopulation) = evolution.eval(new util.Random(42))

  println(NSGA2.result(nsga2, finalPopulation).mkString("\n"))
  

Noisy fitness functions

All algorithms in MGO have a version that works with noisy fitness functions. MGO handles noisy fitness functions by resampling only the most promising individuals, and uses an aggregation function to combine the multiple samples when needed.

For instance, a version of NSGA2 for noisy fitness functions may be used as follows:

  import mgo._
  import algorithm.noisynsga2._
  import context.implicits._

  object sphere {
    def scale(s: Vector[Double]): Vector[Double] = s.map(_.scale(-2, 2))
    def compute(i: Vector[Double]): Double = i.map(x => x * x).sum
  }

  object noisySphere {
    def scale(s: Vector[Double]): Vector[Double] = sphere.scale(s)
    def compute(rng: util.Random, v: Vector[Double]) =
      sphere.compute(v) + rng.nextGaussian() * 0.5 * math.sqrt(sphere.compute(v))
  }

  def aggregation(history: Vector[Vector[Double]]) = history.transpose.map { o => o.sum / o.size }

  val nsga2 =
    NoisyNSGA2(
      mu = 100,
      lambda = 100,
      fitness = (rng, v) => Vector(noisySphere.compute(rng, v)),
      aggregation = aggregation,
      genomeSize = 2)

  val (finalState, finalPopulation) =
    run(nsga2).
      until(afterGeneration(1000)).
      trace((s, is) => println(s.generation)).
      eval(new util.Random(42))

  println(result(finalPopulation, aggregation, noisySphere.scale).mkString("\n"))
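
The aggregation is an ordinary function from the vector of sampled fitness values to an aggregated fitness vector, so it can be swapped out. For instance, a median may be preferred over the mean when the noise has heavy tails; a minimal sketch, assuming the same signature as the aggregation above:

  // Hypothetical alternative aggregation: take the median of each objective's
  // samples instead of the mean, which is more robust to outliers
  def medianAggregation(history: Vector[Vector[Double]]): Vector[Double] =
    history.transpose.map { samples =>
      val sorted = samples.sorted
      sorted(sorted.size / 2)
    }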

Diversity only

MGO proposes the PSE algorithm, which aims at creating diverse solutions instead of optimising a function. The paper about this algorithm can be found here.

  import mgo._
  import algorithm.pse._
  import context.implicits._

  val pse = PSE(
    lambda = 10,
    phenotype = zdt4.compute,
    pattern =
      boundedGrid(
        lowBound = Vector(0.0, 0.0),
        highBound = Vector(1.0, 200.0),
        definition = Vector(10, 10)),
    genomeSize = 10)

  val (finalState, finalPopulation) =
    run(pse).
      until(afterGeneration(1000)).
      trace((s, is) => println(s.generation)).
      eval(new util.Random(42))

  println(result(finalPopulation, zdt4.scale).mkString("\n"))

This program explores all the different combinations of values that can be produced by the multi-objective function of ZDT4.
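
The pattern argument only has to map a phenotype to discrete grid coordinates, so boundedGrid can be replaced by any custom discretisation. A hypothetical sketch, assuming the same Vector[Double] => Vector[Int] shape as boundedGrid and the bounds used above:

  // Hypothetical custom pattern: map the two ZDT4 objectives onto a 10 x 10 grid,
  // assuming the first objective lies in [0, 1] and the second in [0, 200]
  def customPattern(phenotype: Vector[Double]): Vector[Int] =
    Vector(
      math.min((phenotype(0) / 0.1).toInt, 9),
      math.min((phenotype(1) / 20.0).toInt, 9))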

For more examples, have a look at the main/scala/fr/iscpif/mgo/test directory in the repository.

Mixed optimisation and diversity

The calibration profile algorithm computes the best fitness for each niche in a set of niches. This algorithm is explained here.

In MGO you can compute profiles of a 10-dimensional hypersphere function using the following:

  import algorithm.profile._
  import context.implicits._

  //Profile the first dimension of the genome
  val algo = Profile(
    lambda = 100,
    fitness = sphere.compute,
    niche = genomeProfile(x = 0, nX = 10),
    genomeSize = 10)

  val (finalState, finalPopulation) =
    run(algo).
      until(afterGeneration(1000)).
      trace((s, is) => println(s.generation)).
      eval(new util.Random(42))

  println(result(finalPopulation, sphere.scale).mkString("\n"))
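
The niche argument discretises the profiled dimension of the genome. Conceptually (a hypothetical sketch of the idea, not the library's implementation, assuming raw genome values lie in [0, 1] before scaling), genomeProfile(x, nX) assigns an individual to one of nX bins according to its x-th genome value:

  // Hypothetical sketch of the niching idea behind genomeProfile(x = 0, nX = 10):
  // bin the x-th raw genome value, assumed to lie in [0, 1], into nX niches
  def genomeProfileSketch(x: Int, nX: Int)(genome: Vector[Double]): Int =
    math.min((genome(x) * nX).toInt, nX - 1)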

Noisy profiles

All algorithms in MGO have a counterpart for noisy fitness functions. Here is an example of a profile computation for a sphere function with noise.

  import algorithm.noisyprofile._
  import context.implicits._

  def aggregation(history: Vector[Double]) = history.sum / history.size
  def niche = genomeProfile(x = 0, nX = 10)

  val algo = NoisyProfile(
    muByNiche = 20,
    lambda = 100,
    fitness = noisySphere.compute,
    aggregation = aggregation,
    niche = niche,
    genomeSize = 5)

  val (finalState, finalPopulation) =
    run(algo).
      until(afterGeneration(1000)).
      trace((s, is) => println(s.generation)).
      eval(new util.Random(42))

  println(result(finalPopulation, aggregation, noisySphere.scale, niche).mkString("\n"))

Distributed computing

Algorithms implemented in MGO are also available in OpenMOLE, a workflow platform for distributed computing.

SBT dependency

  libraryDependencies += "fr.iscpif" %% "mgo" % "2.45"  