
JuliaML / StochasticOptimization.jl

Licence: other
Implementations of stochastic optimization algorithms and solvers

Programming Languages: julia

Projects that are alternatives to or similar to StochasticOptimization.jl

Julia-sublime
Julia syntax highlighting for Sublime Text
Stars: ✭ 106 (+307.69%)
Mutual labels:  julialang
genx
Genx provides modular building blocks to run simulations of optimization and search problems using Genetic Algorithms
Stars: ✭ 31 (+19.23%)
Mutual labels:  optimization
ProxSDP.jl
Semidefinite programming optimization solver
Stars: ✭ 69 (+165.38%)
Mutual labels:  optimization
OmniSci.jl
Julia client for OmniSci GPU-accelerated SQL engine and analytics platform
Stars: ✭ 22 (-15.38%)
Mutual labels:  julialang
Optimization
A set of lightweight header-only template functions implementing commonly-used optimization methods on Riemannian manifolds and convex spaces.
Stars: ✭ 66 (+153.85%)
Mutual labels:  optimization
hopperOptimizations
A mod that optimizes hoppers and their interactions with entities and inventories. It drastically reduces hopper lag without changing any behavior.
Stars: ✭ 65 (+150%)
Mutual labels:  optimization
XUnit.jl
XUnit.jl is a unit-testing framework for Julia.
Stars: ✭ 32 (+23.08%)
Mutual labels:  julialang
LBFGS-Lite
LBFGS-Lite: A header-only L-BFGS unconstrained optimizer.
Stars: ✭ 98 (+276.92%)
Mutual labels:  optimization
FaceDetection.jl
A face detection algorithm using the Viola-Jones rapid object detection framework, written in Julia
Stars: ✭ 13 (-50%)
Mutual labels:  julialang
IterativeLQR.jl
A Julia package for constrained iterative LQR (iLQR)
Stars: ✭ 15 (-42.31%)
Mutual labels:  optimization
LaplacianOpt.jl
A Julia/JuMP Package for Maximizing Algebraic Connectivity of Undirected Weighted Graphs
Stars: ✭ 16 (-38.46%)
Mutual labels:  optimization
FstFileFormat.jl
Julia bindings for the fst format
Stars: ✭ 17 (-34.62%)
Mutual labels:  julialang
CSDP.jl
Julia Wrapper for CSDP (https://projects.coin-or.org/Csdp/)
Stars: ✭ 18 (-30.77%)
Mutual labels:  optimization
AtariAlgos.jl
Arcade Learning Environment (ALE) wrapped as a Reinforce.jl environment
Stars: ✭ 38 (+46.15%)
Mutual labels:  julialang
geneal
A genetic algorithm implementation in python
Stars: ✭ 47 (+80.77%)
Mutual labels:  optimization
DynamicHMCExamples.jl
Examples for Bayesian inference using DynamicHMC.jl and related packages.
Stars: ✭ 33 (+26.92%)
Mutual labels:  julialang
deoplete-julia
deoplete.nvim source for Julia, providing Julia syntax completions in Neovim (deprecated for Julia 0.6+)
Stars: ✭ 12 (-53.85%)
Mutual labels:  julialang
mysql tuning-cookbook
Chef cookbook to create MySQL configuration files better suited to your system.
Stars: ✭ 23 (-11.54%)
Mutual labels:  optimization
decrypticon
Java-layer Android Malware Simplifier
Stars: ✭ 17 (-34.62%)
Mutual labels:  optimization
setup-julia
This action sets up a Julia environment for use in actions by downloading a specified version of Julia and adding it to PATH.
Stars: ✭ 56 (+115.38%)
Mutual labels:  julialang

DEPRECATED

This package is deprecated.

StochasticOptimization


Built on the JuliaML ecosystem, StochasticOptimization is a framework for iteration-based optimizers. Below is a complete example: creating the transformation, loss, penalty, and combined objective function; building custom sub-learners for the optimization; and constructing and running a stochastic gradient descent learner.

using StochasticOptimization
using ObjectiveFunctions
using CatViews

# Build our objective. Note this is LASSO regression.
# The objective method constructs a RegularizedObjective composed
#   of a Transformation, a Loss, and an optional Penalty.
nin, nout = 10, 1
obj = objective(
    Affine(nin,nout),
    L2DistLoss(),
    L1Penalty(1e-8)
)

# Create some fake data... affine transform plus noise
τ = 1000
w = randn(nout, nin)
b = randn(nout)
inputs = randn(nin, τ)
noise = 0.1rand(nout, τ)
targets = w * inputs + repmat(b, 1, τ) + noise

# Create a view of w and b which looks like a single vector
θ = CatView(w,b)

# The MetaLearner has a bunch of specialized sub-learners.
# Our core learning strategy is Adamax with a fixed learning rate.
# The `maxiter` and `converged` keywords will add `MaxIter`
#   and `ConvergenceFunction` sub-learners to the MetaLearner.
learner = make_learner(
    GradientLearner(5e-3, Adamax()),
    maxiter = 5000,
    converged = (model,i) -> begin
        if mod1(i,100) == 100
            if norm(θ - params(model)) < 0.1
                info("Converged after $i iterations")
                return true
            end
        end
        false
    end
)

# Everything is set up... learn the parameters by iterating through
#   random minibatches forever until convergence, or until the max iterations.
learn!(obj, learner, infinite_batches(inputs, targets, size=20))

With any luck, you'll see something like:

INFO: Converged after 800 iterations
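
For intuition, the objective above is ordinary LASSO regression: a squared L2 distance between predictions and targets plus an L1 penalty on the parameters. Here is a hand-rolled sketch of the quantity being minimized; the lasso_value helper is hypothetical (not part of the package API), and the package's internal aggregation (sum vs. mean) may differ:

# Hypothetical helper showing the LASSO value the RegularizedObjective
# represents: squared error plus λ times the L1 norm of the parameters.
lasso_value(w, b, λ, x, y) = sum(abs2, w*x .+ b .- y) + λ * (sum(abs, w) + sum(abs, b))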

Notes:

Each sub-learner might only implement a subset of the iteration API (a sketch of a custom sub-learner follows this list):

  • pre_hook(learner, model)
  • learn!(model, learner, data)
  • iter_hook(learner, model, i)
  • finished(learner, model, i)
  • post_hook(learner, model)
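
For example, here is a minimal sketch of a custom sub-learner that implements only iter_hook, printing the parameter norm every 500 iterations. The NormTracer type name is hypothetical, and passing extra sub-learners positionally to make_learner alongside the core GradientLearner is an assumption, not confirmed package API:

# Hypothetical sub-learner: only iter_hook is implemented; the remaining
# iteration-API methods fall back to the package defaults.
type NormTracer end   # Julia 0.5/0.6-era syntax, matching the example above

function StochasticOptimization.iter_hook(tracer::NormTracer, model, i::Int)
    if mod1(i, 500) == 500
        info("iter $i: norm(params) = $(norm(params(model)))")
    end
end

# Assumed usage: the extra sub-learner rides along with the core strategy.
learner = make_learner(
    GradientLearner(5e-3, Adamax()),
    NormTracer(),
    maxiter = 5000
)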