
francescoalemanno / KissABC.jl

License: MIT
Pure Julia implementation of Multiple Affine Invariant Sampling for efficient Approximate Bayesian Computation

Programming Languages

julia

Projects that are alternatives of or similar to KissABC.jl

DynamicHMCExamples.jl
Examples for Bayesian inference using DynamicHMC.jl and related packages.
Stars: ✭ 33 (+17.86%)
Mutual labels:  julia-language, bayesian-inference, bayesian-data-analysis
Rstan
RStan, the R interface to Stan
Stars: ✭ 760 (+2614.29%)
Mutual labels:  bayesian-inference, bayesian-data-analysis
DUN
Code for "Depth Uncertainty in Neural Networks" (https://arxiv.org/abs/2006.08437)
Stars: ✭ 65 (+132.14%)
Mutual labels:  bayesian-inference, probabilistic-inference
Pymc Example Project
Example PyMC3 project for performing Bayesian data analysis using a probabilistic programming approach to machine learning.
Stars: ✭ 90 (+221.43%)
Mutual labels:  bayesian-inference, bayesian-data-analysis
Bayesian Analysis Recipes
A collection of Bayesian data analysis recipes using PyMC3
Stars: ✭ 479 (+1610.71%)
Mutual labels:  bayesian-inference, bayesian-data-analysis
Bayesian Stats Modelling Tutorial
How to do Bayesian statistical modelling using numpy and PyMC3
Stars: ✭ 480 (+1614.29%)
Mutual labels:  bayesian-inference, bayesian-data-analysis
Rhat ess
Rank-normalization, folding, and localization: An improved R-hat for assessing convergence of MCMC
Stars: ✭ 19 (-32.14%)
Mutual labels:  bayesian-inference, bayesian-data-analysis
Birch
A probabilistic programming language that combines automatic differentiation, automatic marginalization, and automatic conditioning within Monte Carlo methods.
Stars: ✭ 80 (+185.71%)
Mutual labels:  bayesian-inference, bayesian-data-analysis
Shinystan
shinystan R package and ShinyStan GUI
Stars: ✭ 172 (+514.29%)
Mutual labels:  bayesian-inference, bayesian-data-analysis
Stan
Stan development repository. The master branch contains the current release. The develop branch contains the latest stable development. See the Developer Process Wiki for details.
Stars: ✭ 2,177 (+7675%)
Mutual labels:  bayesian-inference, bayesian-data-analysis
Bda r demos
Bayesian Data Analysis demos for R
Stars: ✭ 409 (+1360.71%)
Mutual labels:  bayesian-inference, bayesian-data-analysis
EmbracingUncertainty
Material for AMLD 2020 workshop "Bayesian Inference: embracing uncertainty"
Stars: ✭ 23 (-17.86%)
Mutual labels:  julia-language, bayesian-inference
Bayadera
High-performance Bayesian Data Analysis on the GPU in Clojure
Stars: ✭ 342 (+1121.43%)
Mutual labels:  bayesian-inference, bayesian-data-analysis
Dbda Python
Doing Bayesian Data Analysis, 2nd Edition (Kruschke, 2015): Python/PyMC3 code
Stars: ✭ 502 (+1692.86%)
Mutual labels:  bayesian-inference, bayesian-data-analysis
Rstanarm
rstanarm R package for Bayesian applied regression modeling
Stars: ✭ 285 (+917.86%)
Mutual labels:  bayesian-inference, bayesian-data-analysis
Bda py demos
Bayesian Data Analysis demos for Python
Stars: ✭ 781 (+2689.29%)
Mutual labels:  bayesian-inference, bayesian-data-analysis
LogDensityProblems.jl
A common framework for implementing and using log densities for inference.
Stars: ✭ 26 (-7.14%)
Mutual labels:  bayesian-inference, bayesian-data-analysis
stan-ja
A project to translate the Stan manual into Japanese
Stars: ✭ 53 (+89.29%)
Mutual labels:  bayesian-inference, bayesian-data-analysis
Rethinking Pyro
Statistical Rethinking with PyTorch and Pyro
Stars: ✭ 116 (+314.29%)
Mutual labels:  bayesian-inference, bayesian-data-analysis
TuringBnpBenchmarks
Benchmarks of Bayesian Nonparametric models in Turing and other PPLs
Stars: ✭ 24 (-14.29%)
Mutual labels:  julia-language, bayesian-inference

KissABC 3.0


Table of Contents

  • Release Notes
  • Usage guide

Release Notes

  • 3.0: Added an SMC algorithm, callable via smc, for efficient Approximate Bayesian Computation; the speedup is 20X for reaching the same epsilon tolerance. Removed the AISChain return type in favour of MonteCarloMeasurements particles; this change allows immediate use of the inference results for further processing.
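Since the results are now MonteCarloMeasurements particles, they support ordinary arithmetic with automatic uncertainty propagation. A minimal sketch of the idea, with hypothetical particle values standing in for real inference output:

using MonteCarloMeasurements

μ = Particles(randn(1000) .* 0.02 .+ 2.0)           # hypothetical posterior particles for μ
σ = Particles(abs.(randn(1000)) .* 0.001 .+ 0.04)   # hypothetical posterior particles for σ
cv = σ / μ   # uncertainty propagates through ordinary arithmetic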

Usage guide

The ingredients you need to use Approximate Bayesian Computation:

  1. A simulation that depends on some parameters and can generate datasets similar to your target dataset when the parameters are tuned
  2. A prior distribution over those parameters
  3. A distance function to compare a generated dataset to the true dataset

We will start with a simple example: a dataset generated from a Normal distribution whose parameters are unknown (here the true values are μ = 2 and σ = 0.04).

tdata = randn(1000) .* 0.04 .+ 2   # 1000 observations with true μ = 2 and σ = 0.04

We are of course able to simulate normally distributed random numbers, so this constitutes our simulation:

sim((μ, σ)) = randn(1000) .* σ .+ μ   # generate a synthetic dataset for the given parameters

The second ingredient is a prior over the parameters μ and σ:

using KissABC
prior=Factored(Uniform(1,3), Truncated(Normal(0,0.1), 0, 100))

We have chosen a uniform distribution over the interval [1, 3] for μ and a normal distribution truncated over ℝ⁺ for σ.
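A Factored prior behaves like a product distribution over its components: rand draws a tuple of parameters, and logpdf sums the component log densities. A quick sketch (assuming KissABC re-exports the Distributions API, as the snippet above suggests):

θ = rand(prior)    # a tuple (μ, σ) drawn from the prior
logpdf(prior, θ)   # sum of the component log densities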

Now all we need is a distance function to compare a simulated dataset to the true one; for this purpose, comparing the mean and standard deviation works well (they are sufficient statistics for a Normal model):

using Statistics   # for mean and std

function cost((μ, σ))
    x = sim((μ, σ))    # simulate a dataset with the proposed parameters
    y = tdata          # the observed dataset
    d1 = mean(x) - mean(y)
    d2 = std(x) - std(y)
    hypot(d1, d2 * 50) # rescale the std mismatch so both terms contribute comparably
end
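As a quick sanity check, the cost should be small near the true parameters and much larger far away from them (the exact values vary from run to run, since sim is stochastic):

cost((2.0, 0.04))   # near the truth: typically a small value
cost((1.2, 0.50))   # far from the truth: a much larger value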

Now we are all set: we can use AIS, an affine-invariant MC algorithm, via the sample function to simulate the posterior distribution of this model, inferring μ and σ.

approx_density = ApproxKernelizedPosterior(prior, cost, 0.005)
res = sample(approx_density, AIS(10), 1000, ntransitions=100)

The REPL output is:

Sampling 100%|██████████████████████████████████████████████████| Time: 0:00:02
2-element Array{Particles{Float64,1000},1}:
 2.0 ± 0.018
 0.0395 ± 0.00093
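
Since res is a vector of MonteCarloMeasurements particles, point estimates and credible intervals can be read off directly; a sketch using the MonteCarloMeasurements API:

using MonteCarloMeasurements

μ_post, σ_post = res
pmean(μ_post), pstd(μ_post)                           # posterior mean and standard deviation of μ
(pquantile(σ_post, 0.025), pquantile(σ_post, 0.975))  # a 95% credible interval for σ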

We chose a tolerance on distances equal to 0.005 and a number of particles equal to 10; we set the number of steps per sample to ntransitions = 100 and acquired 1000 samples. For comparison, let's draw some samples from the prior:

prsample = [rand(prior) for i in 1:5000]   # some samples from the prior for comparison

Plotting the prior and the posterior side by side, we get:

[Figure: plots of the inference results, prior vs. posterior for μ and σ]
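A figure like this can be reproduced with any plotting package; here is a sketch using Plots.jl (our choice for illustration, not prescribed by KissABC):

using Plots

μ_prior = [s[1] for s in prsample]   # prior samples for μ
σ_prior = [s[2] for s in prsample]   # prior samples for σ

p1 = histogram(μ_prior, normalize=true, label="prior", title="μ")
histogram!(p1, res[1].particles, normalize=true, label="posterior")  # .particles holds the raw samples
p2 = histogram(σ_prior, normalize=true, label="prior", title="σ")
histogram!(p2, res[2].particles, normalize=true, label="posterior")
plot(p1, p2, layout=(1, 2))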

We can see that the algorithm has correctly inferred both parameters. This exact recipe will work for much more complicated models and simulations, with some tuning.

To this same problem we can, perhaps even more easily, apply smc, a more advanced adaptive Sequential Monte Carlo method:

julia> smc(prior,cost)
(P = Particles{Float64,79}[2.0 ± 0.0062, 0.0401 ± 0.00081], W = 0.0127, ϵ = 0.011113205245491245)
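
The returned value is a named tuple, so the fields shown above can be accessed directly; for instance:

posterior = smc(prior, cost)

μ_post = posterior.P[1]   # posterior particles for μ
σ_post = posterior.P[2]   # posterior particles for σ
posterior.ϵ               # the final tolerance reached by the adaptive schedule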

To learn how to tune the configuration defaults of smc, consult the docs :) For more examples, look at the examples folder.
