
sgubianpm / sdaopt

License: BSD-2-Clause
Simulated Dual Annealing for Python and benchmarks

Programming Languages

Python, R

Projects that are alternatives to or similar to sdaopt

psopy
A SciPy compatible super fast Python implementation for Particle Swarm Optimization.
Stars: ✭ 33 (+120%)
Mutual labels:  scipy, optimization-algorithms
PyCannyEdge
Educational Python implementation of the Canny Edge Detector
Stars: ✭ 31 (+106.67%)
Mutual labels:  scipy
Micropython Ulab
a numpy-like fast vector module for micropython, circuitpython, and their derivatives
Stars: ✭ 166 (+1006.67%)
Mutual labels:  scipy
Ml Feynman Experience
A collection of analytics methods implemented with Python on Google Colab
Stars: ✭ 217 (+1346.67%)
Mutual labels:  scipy
Pyhf
pure-Python HistFactory implementation with tensors and autodiff
Stars: ✭ 171 (+1040%)
Mutual labels:  scipy
Orange3
🍊 📊 💡 Orange: Interactive data analysis
Stars: ✭ 3,152 (+20913.33%)
Mutual labels:  scipy
Scipy con 2019
Tutorial Sessions for SciPy Con 2019
Stars: ✭ 142 (+846.67%)
Mutual labels:  scipy
jupyter boilerplate
Adds a customizable menu item to Jupyter (IPython) notebooks to insert boilerplate snippets of code
Stars: ✭ 69 (+360%)
Mutual labels:  scipy
skan
Python module to analyse skeleton (thin object) images
Stars: ✭ 92 (+513.33%)
Mutual labels:  scipy
Cheatsheets Ai
Essential Cheat Sheets for deep learning and machine learning researchers https://medium.com/@kailashahirwar/essential-cheat-sheets-for-machine-learning-and-deep-learning-researchers-efb6a8ebd2e5
Stars: ✭ 14,095 (+93866.67%)
Mutual labels:  scipy
Sparse dot topn
Python package to accelerate the sparse matrix multiplication and top-n similarity selection
Stars: ✭ 202 (+1246.67%)
Mutual labels:  scipy
Fatiando
Python toolkit for modeling and inversion in geophysics. DEPRECATED in favor of our newer libraries (see www.fatiando.org)
Stars: ✭ 179 (+1093.33%)
Mutual labels:  scipy
scipy con 2019
Tutorial Sessions for SciPy Con 2019
Stars: ✭ 262 (+1646.67%)
Mutual labels:  scipy
Psi4numpy
Combining Psi4 and Numpy for education and development.
Stars: ✭ 170 (+1033.33%)
Mutual labels:  scipy
xkcd-2048
No description or website provided.
Stars: ✭ 12 (-20%)
Mutual labels:  scipy
Symfit
Symbolic Fitting; fitting as it should be.
Stars: ✭ 167 (+1013.33%)
Mutual labels:  scipy
Pybotics
The Python Toolbox for Robotics
Stars: ✭ 192 (+1180%)
Mutual labels:  scipy
The Elements Of Statistical Learning Notebooks
Jupyter notebooks for summarizing and reproducing the textbook "The Elements of Statistical Learning" 2/E by Hastie, Tibshirani, and Friedman
Stars: ✭ 241 (+1506.67%)
Mutual labels:  scipy
pybnb
A parallel branch-and-bound engine for Python. (https://pybnb.readthedocs.io/)
Stars: ✭ 53 (+253.33%)
Mutual labels:  optimization-algorithms
CNCC-2019
Computational Neuroscience Crash Course (CNCC 2019)
Stars: ✭ 26 (+73.33%)
Mutual labels:  scipy

SDAopt

Simulated Dual Annealing global optimization algorithm implementation and an extensive benchmark.

The testing functions used in the benchmark (except suttonchen) were implemented by Andreas Gavana, Andrew Nelson, and SciPy contributors, and have been forked from the SciPy project.

Results of the benchmarks are available at: https://gist.github.com/sgubianpm/7d55f8d3ba5c9de4e9f0f1ffff1aa6cf

The minimum requirement for running the benchmarks is to have scipy installed; other dependencies are managed in the setup.py file. Running the benchmark is very CPU intensive and requires a multicore machine or a cluster infrastructure.

This algorithm is now available in SciPy optimization toolkit: https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.dual_annealing.html#scipy.optimize.dual_annealing
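Since the algorithm was merged upstream, new code can call SciPy directly instead of installing sdaopt. A minimal sketch (assuming scipy >= 1.2, where `dual_annealing` was introduced) minimizing the same 2-D Rosenbrock function used in the simple example below:

```python
import numpy as np
from scipy.optimize import dual_annealing

def rosenbrock(x):
    # Global minimum at x = (1, 1) with f(x) = 0
    return 100 * (x[1] - x[0] ** 2) ** 2 + (1 - x[0]) ** 2

# Same search domain as the sdaopt example; seed fixed for reproducibility
ret = dual_annealing(rosenbrock, bounds=[(-30, 30)] * 2, seed=1234)
print("xmin = {0}, f(xmin) = {1}".format(ret.x, ret.fun))
```

Note that `dual_annealing` takes the bounds as its second positional argument and generates its own starting point unless `x0` is given, whereas sdaopt's `sda` accepts an explicit (possibly `None`) initial point.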

Installation from source

git clone https://github.com/sgubianpm/sdaopt.git
cd sdaopt
# Activate your appropriate python virtual environment if needed
python setup.py install

How to use it

  1. Simple example
from sdaopt import sda
def rosenbrock(x):
    return 100 * (x[1] - x[0] ** 2) ** 2 + (1 - x[0]) ** 2

ret = sda(rosenbrock, None, [(-30, 30)] * 2)

print("global minimum:\nxmin = {0}\nf(xmin) = {1}".format(
    ret.x, ret.fun))
  2. More complex example
import numpy as np
from sdaopt import sda

# Module-level counters (the `global` statements belong inside the function)
nb_call = 0
global_reached = False
np.random.seed(1234)
dimension = 50
# Setting asymmetric lower and upper bounds
lower = np.array([-5.12] * dimension)
upper = np.array([10.24] * dimension)

# Generating a random initial point
x_init = lower + np.random.rand(dimension) * (upper - lower)

# Defining a modified Rastrigin function of dimension 50, shifted by 3.14159
def modified_rastrigin(x):
    shift = 3.14159
    global nb_call
    global global_reached
    res = np.sum((x - shift) ** 2 - 10 * np.cos(2 * np.pi * (
        x - shift))) + 10 * np.size(x)
    if res <= 1e-8:
        global_reached = True
    if not global_reached:
        nb_call += 1
    return res

ret = sda(modified_rastrigin, x_init, bounds=list(zip(lower, upper)))

print(('Although sdaopt finished after {0} function calls,\n'
    'sdaopt actually has found the global minimum after {1} function calls.\n'
    'global minimum: xmin =\n{2}\nf(xmin) = {3}'
    ).format(ret.ncall, nb_call, ret.x, ret.fun))

Running benchmark on a multicore machine

# Activate your appropriate python virtual environment if needed
# Replace NB_RUNS by your value (default is 100)
# NB_RUNS is the number of runs performed for each testing function and algorithm
# The script uses all available cores on the machine.
sdaopt_bench --nb-runs NB_RUNS

Running benchmark on a cluster (Example for Moab/TORQUE)

The total number of testing functions is 261. The benchmark is embarrassingly parallel, so it can be spread across 261 cores of the cluster infrastructure. If your cluster nodes have 16 cores, 17 sections are required to split the processing (261 / 16 = 16.3125, rounded up to 17).
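The section count is just a ceiling division of the number of testing functions by the cores per node; as a quick sanity check:

```python
import math

n_functions = 261     # total number of testing functions in the benchmark
cores_per_node = 16   # cores available on one cluster node

# Number of job sections needed to cover all functions
sections = math.ceil(n_functions / cores_per_node)
print(sections)  # 17
```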

Below is a script content example for Moab/TORQUE:

#!/bin/bash
# Replace OUTPUT_FOLDER by the path of your choice
# Adjust YOUR_PYTHON_VIRTUAL_ENV and YOUR_SDAOPT_GIT_FOLDER
##### These lines are for Moab
#MSUB -l procs=16
#MSUB -q long
#MSUB -o OUTPUT_FOLDER/bench.out
#MSUB -e OUTPUT_FOLDER/bench.err
source YOUR_PYTHON_VIRTUAL_ENV/bin/activate 
sda_bench --nb-runs 100 --output-folder OUTPUT_FOLDER 

From a machine that is able to submit jobs to the cluster:

for i in {0..16}
    do
        msub -v USE_CLUSTER,NB_CORES=16,SECTION_NUM=$i benchmark-cluster.sh
    done