TuringLang / AdvancedHMC.jl

License: MIT
Robust, modular and efficient implementation of advanced Hamiltonian Monte Carlo algorithms

Programming Languages

julia (2,034 projects)

Projects that are alternatives of or similar to AdvancedHMC.jl

Bayesian Neural Networks
Pytorch implementations of Bayes By Backprop, MC Dropout, SGLD, the Local Reparametrization Trick, KF-Laplace, SG-HMC and more
Stars: ✭ 900 (+800%)
Mutual labels:  mcmc
Bayesiantools
General-Purpose MCMC and SMC Samplers and Tools for Bayesian Statistics
Stars: ✭ 66 (-34%)
Mutual labels:  mcmc
Paramonte
ParaMonte: Plain Powerful Parallel Monte Carlo and MCMC Library for Python, MATLAB, Fortran, C++, C.
Stars: ✭ 88 (-12%)
Mutual labels:  mcmc
Pygtc
Make a sweet giant triangle confusogram (GTC) plot
Stars: ✭ 13 (-87%)
Mutual labels:  mcmc
Dblink
Distributed Bayesian Entity Resolution in Apache Spark
Stars: ✭ 38 (-62%)
Mutual labels:  mcmc
Gerrychain
Use MCMC to analyze districting plans and gerrymanders
Stars: ✭ 68 (-32%)
Mutual labels:  mcmc
Boltzmann Machines
Boltzmann Machines in TensorFlow with examples
Stars: ✭ 768 (+668%)
Mutual labels:  mcmc
Ggmcmc
Graphical tools for analyzing Markov Chain Monte Carlo simulations from Bayesian inference
Stars: ✭ 95 (-5%)
Mutual labels:  mcmc
Emcee
The Python ensemble sampling toolkit for affine-invariant MCMC
Stars: ✭ 1,121 (+1021%)
Mutual labels:  mcmc
Bat.jl
A Bayesian Analysis Toolkit in Julia
Stars: ✭ 82 (-18%)
Mutual labels:  mcmc
Deepbayes
Bayesian methods in deep learning Summer School
Stars: ✭ 15 (-85%)
Mutual labels:  mcmc
Cmdstan.jl
CmdStan.jl v6 provides an alternative, older Julia wrapper to Stan's `cmdstan` executable.
Stars: ✭ 30 (-70%)
Mutual labels:  mcmc
Posterior
The posterior R package
Stars: ✭ 75 (-25%)
Mutual labels:  mcmc
Owl
Owl - OCaml Scientific and Engineering Computing @ http://ocaml.xyz
Stars: ✭ 919 (+819%)
Mutual labels:  mcmc
Machine Learning Numpy
Gathers Machine learning models using pure Numpy to cover feed-forward, RNN, CNN, clustering, MCMC, timeseries, tree-based, and so much more!
Stars: ✭ 90 (-10%)
Mutual labels:  mcmc
Bda py demos
Bayesian Data Analysis demos for Python
Stars: ✭ 781 (+681%)
Mutual labels:  mcmc
Turing.jl
Bayesian inference with probabilistic programming.
Stars: ✭ 1,150 (+1050%)
Mutual labels:  mcmc
Turingmodels.jl
Turing version of StatisticalRethinking models.
Stars: ✭ 98 (-2%)
Mutual labels:  mcmc
Nimble
The base NIMBLE package for R
Stars: ✭ 95 (-5%)
Mutual labels:  mcmc
Getdist
MCMC sample analysis, kernel densities, plotting, and GUI
Stars: ✭ 80 (-20%)
Mutual labels:  mcmc

AdvancedHMC.jl

AdvancedHMC.jl provides a robust, modular and efficient implementation of advanced HMC algorithms. An illustrative example of AdvancedHMC's usage is given below. AdvancedHMC.jl is part of Turing.jl, a probabilistic programming library in Julia. If you are interested in using AdvancedHMC.jl through a probabilistic programming language, please check out Turing.jl!

Interfaces

  • IMP.hmc: an experimental Python module for the Integrative Modeling Platform, which uses AdvancedHMC in its backend to sample protein structures.

NEWS

  • We presented a paper for AdvancedHMC.jl at AABI 2019 in Vancouver, Canada. (abs, pdf, OpenReview)
  • We presented a poster for AdvancedHMC.jl at StanCon 2019 in Cambridge, UK. (pdf)

API CHANGES

  • [v0.2.22] Three functions are renamed:
    • Preconditioner(metric::AbstractMetric) -> MassMatrixAdaptor(metric),
    • NesterovDualAveraging(δ, integrator::AbstractIntegrator) -> StepSizeAdaptor(δ, integrator), and
    • find_good_eps -> find_good_stepsize.
  • [v0.2.15] n_adapts is no longer needed to construct StanHMCAdaptor; the old constructor is deprecated.
  • [v0.2.8] Two Hamiltonian trajectory sampling methods are renamed to avoid a name clash with Distributions.
    • Multinomial -> MultinomialTS
    • Slice -> SliceTS
  • [v0.2.0] The gradient function passed to Hamiltonian is now expected to return a value-gradient tuple.

A minimal example - sampling from a multivariate Gaussian using NUTS

using AdvancedHMC, Distributions, ForwardDiff

# Choose parameter dimensionality and initial parameter value
D = 10; initial_θ = rand(D)

# Define the target distribution
ℓπ(θ) = logpdf(MvNormal(zeros(D), ones(D)), θ)

# Set the number of samples to draw and warmup iterations
n_samples, n_adapts = 2_000, 1_000

# Define a Hamiltonian system
metric = DiagEuclideanMetric(D)
hamiltonian = Hamiltonian(metric, ℓπ, ForwardDiff)

# Define a leapfrog solver, with initial step size chosen heuristically
initial_ϵ = find_good_stepsize(hamiltonian, initial_θ)
integrator = Leapfrog(initial_ϵ)

# Define an HMC sampler with the following components:
#   - multinomial sampling scheme,
#   - generalised No-U-Turn criterion, and
#   - windowed adaptation for step size and diagonal mass matrix
proposal = NUTS{MultinomialTS, GeneralisedNoUTurn}(integrator)
adaptor = StanHMCAdaptor(MassMatrixAdaptor(metric), StepSizeAdaptor(0.8, integrator))

# Run the sampler to draw samples from the specified Gaussian, where
#   - `samples` will store the samples
#   - `stats` will store diagnostic statistics for each sample
samples, stats = sample(hamiltonian, proposal, initial_θ, n_samples, adaptor, n_adapts; progress=true)

Parallel sampling

AdvancedHMC enables parallel sampling (either distributed or multi-threaded) via Julia's parallel computing functions. It also supports vectorized sampling for static HMC; both are discussed in more detail in the documentation here.

The example below uses the Threads.@threads macro to sample 4 chains across 4 threads.

# Ensure that Julia was launched with the appropriate number of threads
println(Threads.nthreads())

# Number of chains to sample
nchains = 4

# Cache to store the chains
chains = Vector{Any}(undef, nchains)

# The `samples` from each parallel chain are stored in the `chains` vector;
# adjust the `verbose` flag as needed
Threads.@threads for i in 1:nchains
  samples, stats = sample(hamiltonian, proposal, initial_θ, n_samples, adaptor, n_adapts; verbose=false)
  chains[i] = samples
end
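
For reproducible runs, each chain can also be given its own seeded RNG as the first positional argument to sample (a minimal sketch, assuming the sample signature described below; the seed values are arbitrary):

using Random

Threads.@threads for i in 1:nchains
  # A per-chain seeded RNG makes each chain reproducible regardless of thread scheduling
  rng = Random.MersenneTwister(1000 + i)
  samples, stats = sample(rng, hamiltonian, proposal, initial_θ, n_samples, adaptor, n_adapts; verbose=false)
  chains[i] = samples
end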

API and supported HMC algorithms

An important design goal of AdvancedHMC.jl is modularity; we would like to support algorithmic research on HMC. This modularity means that different HMC variants can be easily constructed by composing various components, such as the preconditioning metric (i.e. mass matrix), leapfrog integrators, trajectories (static or dynamic), and adaptation schemes. The minimal example above can be modified to suit particular inference problems by picking components from the list below.

Hamiltonian mass matrix (metric)

  • Unit metric: UnitEuclideanMetric(dim)
  • Diagonal metric: DiagEuclideanMetric(dim)
  • Dense metric: DenseEuclideanMetric(dim)

where dim is the dimensionality of the sampling space.
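
For example, switching the minimal example above from a diagonal to a dense metric is a one-line change (a sketch reusing D, ℓπ and ForwardDiff from above):

# A dense metric captures correlations between dimensions,
# at the cost of O(D^2) storage and more expensive leapfrog steps
metric = DenseEuclideanMetric(D)
hamiltonian = Hamiltonian(metric, ℓπ, ForwardDiff)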

Integrator (integrator)

  • Ordinary leapfrog integrator: Leapfrog(ϵ)
  • Jittered leapfrog integrator with jitter rate n: JitteredLeapfrog(ϵ, n)
  • Tempered leapfrog integrator with tempering rate a: TemperedLeapfrog(ϵ, a)

where ϵ is the step size of leapfrog integration.
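
For instance, the plain leapfrog in the minimal example can be swapped for a jittered or tempered variant (a sketch; the jitter rate 0.1 and tempering rate 1.05 are illustrative values, not recommendations):

# Randomly jitter the step size around initial_ϵ with a jitter rate of 0.1
integrator = JitteredLeapfrog(initial_ϵ, 0.1)

# Temper the momentum along the trajectory with a tempering rate of 1.05
integrator = TemperedLeapfrog(initial_ϵ, 1.05)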

Proposal (proposal)

  • Static HMC with a fixed number of steps (n_steps) (Neal, R. M. (2011)): StaticTrajectory(integrator, n_steps)
  • HMC with a fixed total trajectory length (trajectory_length) (Neal, R. M. (2011)): HMCDA(integrator, trajectory_length)
  • Original NUTS with slice sampling (Hoffman, M. D., & Gelman, A. (2014)): NUTS{SliceTS,ClassicNoUTurn}(integrator)
  • Generalised NUTS with slice sampling (Betancourt, M. (2017)): NUTS{SliceTS,GeneralisedNoUTurn}(integrator)
  • Original NUTS with multinomial sampling (Betancourt, M. (2017)): NUTS{MultinomialTS,ClassicNoUTurn}(integrator)
  • Generalised NUTS with multinomial sampling (Betancourt, M. (2017)): NUTS{MultinomialTS,GeneralisedNoUTurn}(integrator)
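
As an illustration, the NUTS proposal in the minimal example can be replaced by any of the above; the values n_steps = 20 and trajectory_length = 1.0 below are illustrative, not recommendations:

# Static HMC with exactly 20 leapfrog steps per proposal
proposal = StaticTrajectory(integrator, 20)

# HMC with a fixed total trajectory length of 1.0
proposal = HMCDA(integrator, 1.0)

# The original NUTS of Hoffman & Gelman (2014)
proposal = NUTS{SliceTS, ClassicNoUTurn}(integrator)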

Adaptor (adaptor)

  • Adapt the mass matrix metric of the Hamiltonian dynamics: mma = MassMatrixAdaptor(metric)
    • This is lowered to UnitMassMatrix, WelfordVar or WelfordCov based on the type of the mass matrix metric
  • Adapt the step size of the leapfrog integrator integrator: ssa = StepSizeAdaptor(δ, integrator)
    • It uses Nesterov's dual averaging with δ as the target acceptance rate.
  • Combine the two above naively: NaiveHMCAdaptor(mma, ssa)
  • Combine the first two using Stan's windowed adaptation: StanHMCAdaptor(mma, ssa)
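
For example, the Stan-style adaptor used in the minimal example is just the windowed combination of the two basic adaptors (a sketch reusing the metric and integrator defined earlier):

# Lowers to WelfordVar here, because metric is a DiagEuclideanMetric
mma = MassMatrixAdaptor(metric)

# Nesterov dual averaging targeting an acceptance rate of 0.8
ssa = StepSizeAdaptor(0.8, integrator)

# Either combine them naively, running both over the whole adaptation phase ...
adaptor = NaiveHMCAdaptor(mma, ssa)

# ... or use Stan's windowed adaptation, as in the minimal example
adaptor = StanHMCAdaptor(mma, ssa)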

Gradients

AdvancedHMC supports both AD-based (Zygote, Tracker and ForwardDiff) and user-specified gradients. In order to use user-specified gradients, please replace ForwardDiff with ℓπ_grad in the Hamiltonian constructor, where the gradient function ℓπ_grad should return a tuple containing both the log-posterior and its gradient.
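
As a minimal sketch of a user-specified gradient: the standard multivariate normal target above has the closed-form gradient -θ, so the AD backend can be bypassed entirely (the name ℓπ_grad follows the text):

# Must return a (log-density, gradient) tuple, as required since v0.2.0
ℓπ_grad(θ) = (ℓπ(θ), -θ)

hamiltonian = Hamiltonian(metric, ℓπ, ℓπ_grad)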

All of these combinations are tested in this file, except for the tempered leapfrog integrator combined with adaptation, which we found empirically unstable.

The sample function signature in detail

function sample(
    rng::Union{AbstractRNG, AbstractVector{<:AbstractRNG}},
    h::Hamiltonian,
    κ::HMCKernel,
    θ::AbstractVector{<:AbstractFloat},
    n_samples::Int,
    adaptor::AbstractAdaptor=NoAdaptation(),
    n_adapts::Int=min(div(n_samples, 10), 1_000);
    drop_warmup=false,
    verbose::Bool=true,
    progress::Bool=false,
)

Draw n_samples samples using the proposal κ under the Hamiltonian system h.

  • The randomness is controlled by rng.
    • If rng is not provided, GLOBAL_RNG will be used.
  • The initial point is given by θ.
  • The adaptor is set by adaptor, for which the default is no adaptation.
    • It will perform n_adapts steps of adaptation, for which the default is 1_000 or 10% of n_samples, whichever is lower.
  • drop_warmup specifies whether to drop the warmup (adaptation) samples.
  • verbose controls the verbosity.
  • progress controls whether to show the progress meter or not.
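
Putting these together, a fully explicit call that seeds the RNG and discards the warmup phase might look as follows (a sketch reusing the objects from the minimal example; the seed is arbitrary):

using Random

samples, stats = sample(
    Random.MersenneTwister(42),       # explicit RNG for reproducibility
    hamiltonian, proposal, initial_θ,
    n_samples, adaptor, n_adapts;
    drop_warmup=true,                 # keep only post-adaptation samples
    progress=true,
)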

Note that the function signature of the sample function exported by AdvancedHMC.jl differs from the sample function used by Turing.jl. We refer to the documentation of Turing.jl for more details on the latter.

Citing AdvancedHMC.jl

If you use AdvancedHMC.jl for your own research, please consider citing the following publication:

Hong Ge, Kai Xu, and Zoubin Ghahramani: "Turing: a language for flexible probabilistic inference.", International Conference on Artificial Intelligence and Statistics, 2018. (abs, pdf, BibTeX)

References

  1. Neal, R. M. (2011). MCMC using Hamiltonian dynamics. Handbook of Markov Chain Monte Carlo. (arXiv)

  2. Betancourt, M. (2017). A Conceptual Introduction to Hamiltonian Monte Carlo. arXiv preprint arXiv:1701.02434.

  3. Girolami, M., & Calderhead, B. (2011). Riemann manifold Langevin and Hamiltonian Monte Carlo methods. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 73(2), 123-214. (arXiv)

  4. Betancourt, M. J., Byrne, S., & Girolami, M. (2014). Optimizing the integrator step size for Hamiltonian Monte Carlo. arXiv preprint arXiv:1411.6669.

  5. Betancourt, M. (2016). Identifying the optimal integration time in Hamiltonian Monte Carlo. arXiv preprint arXiv:1601.00225.

  6. Hoffman, M. D., & Gelman, A. (2014). The No-U-Turn Sampler: adaptively setting path lengths in Hamiltonian Monte Carlo. Journal of Machine Learning Research, 15(1), 1593-1623. (arXiv)
