
OFAI / BayesianNonparametrics.jl

Licence: other
BayesianNonparametrics in julia

Programming language: Julia

Projects that are alternatives of or similar to BayesianNonparametrics.jl

  • Deepbayes: Bayesian methods in deep learning Summer School. Stars: ✭ 15 (-50%). Mutual labels: bayesian-methods
  • Loo: loo R package for approximate leave-one-out cross-validation (LOO-CV) and Pareto smoothed importance sampling (PSIS). Stars: ✭ 106 (+253.33%). Mutual labels: bayesian-methods
  • Dynamichmc.jl: Implementation of robust dynamic Hamiltonian Monte Carlo methods (NUTS) in Julia. Stars: ✭ 172 (+473.33%). Mutual labels: bayesian-methods
  • Bayesian Machine Learning: Notebooks about Bayesian methods for machine learning. Stars: ✭ 1,202 (+3906.67%). Mutual labels: bayesian-methods
  • Nimble: The base NIMBLE package for R. Stars: ✭ 95 (+216.67%). Mutual labels: bayesian-methods
  • Rethinking Pyro: Statistical Rethinking with PyTorch and Pyro. Stars: ✭ 116 (+286.67%). Mutual labels: bayesian-methods
  • Probabilistic Programming And Bayesian Methods For Hackers: aka "Bayesian Methods for Hackers": An introduction to Bayesian methods + probabilistic programming with a computation/understanding-first, mathematics-second point of view. All in pure Python ;) Stars: ✭ 23,912 (+79606.67%). Mutual labels: bayesian-methods
  • Pyemma: 🚂 Python API for Emma's Markov Model Algorithms 🚂. Stars: ✭ 200 (+566.67%). Mutual labels: bayesian-methods
  • Toolbox: A Java Toolbox for Scalable Probabilistic Machine Learning. Stars: ✭ 105 (+250%). Mutual labels: bayesian-methods
  • Shinystan: shinystan R package and ShinyStan GUI. Stars: ✭ 172 (+473.33%). Mutual labels: bayesian-methods
  • Forneylab.jl: Julia package for automatically generating Bayesian inference algorithms through message passing on Forney-style factor graphs. Stars: ✭ 87 (+190%). Mutual labels: bayesian-methods
  • Probflow: A Python package for building Bayesian models with TensorFlow or PyTorch. Stars: ✭ 95 (+216.67%). Mutual labels: bayesian-methods
  • Pints: Probabilistic Inference on Noisy Time Series. Stars: ✭ 119 (+296.67%). Mutual labels: bayesian-methods
  • Da Tutorials: Course on data assimilation (DA). Stars: ✭ 43 (+43.33%). Mutual labels: bayesian-methods
  • Stan: Stan development repository. The master branch contains the current release. The develop branch contains the latest stable development. See the Developer Process Wiki for details. Stars: ✭ 2,177 (+7156.67%). Mutual labels: bayesian-methods
  • Rhat ess: Rank-normalization, folding, and localization: An improved R-hat for assessing convergence of MCMC. Stars: ✭ 19 (-36.67%). Mutual labels: bayesian-methods
  • Pymc3 vs pystan: Personal project to compare hierarchical linear regression in PyMC3 and PyStan, as presented at http://pydata.org/london2016/schedule/presentation/30/ video: https://www.youtube.com/watch?v=Jb9eklfbDyg Stars: ✭ 110 (+266.67%). Mutual labels: bayesian-methods
  • dmmclust: dmmclust is a package for clustering short texts, based on Yin and Wang (2014). Stars: ✭ 23 (-23.33%). Mutual labels: dirichlet-process-mixtures
  • Dapper: Data Assimilation with Python: a Package for Experimental Research. Stars: ✭ 181 (+503.33%). Mutual labels: bayesian-methods
  • Uncertainty Metrics: An easy-to-use interface for measuring uncertainty and robustness. Stars: ✭ 145 (+383.33%). Mutual labels: bayesian-methods

BayesianNonparametrics.jl


BayesianNonparametrics is a Julia package implementing state-of-the-art Bayesian nonparametric models for medium-sized unsupervised problems. It aims to make Bayesian nonparametric models easily accessible to non-specialists, with an emphasis on consistency, performance and ease of use within Julia.

BayesianNonparametrics allows you to

  • explain discrete or continuous data using Dirichlet Process Mixtures or Hierarchical Dirichlet Process Mixtures
  • analyse variable dependencies using the Variable Clustering Model
  • fit multivariate or univariate distributions to discrete or continuous data with conjugate priors
  • compute point estimates from Dirichlet Process Mixture posterior samples

News

BayesianNonparametrics is compatible with Julia 0.7 and 1.0.

Installation

You can install the package into your running Julia installation using Julia's package manager, i.e. from the Pkg REPL mode:

pkg> add BayesianNonparametrics
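
Equivalently, from a script or the default REPL mode:

using Pkg
Pkg.add("BayesianNonparametrics")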

Documentation

Documentation is available in Markdown: documentation

Example

The following example illustrates the use of BayesianNonparametrics for clustering of continuous observations using a Dirichlet Process Mixture of Gaussians.

After loading the package:

using BayesianNonparametrics

we can generate a 2D synthetic dataset (or use any multivariate continuous dataset of interest):

(X, Y) = bloobs(randomize = false)

and construct the parameters of our base distribution:

using Statistics  # mean and cov are provided by the Statistics standard library on Julia >= 0.7

μ0 = vec(mean(X, dims = 1))
κ0 = 5.0
ν0 = 9.0
Σ0 = cov(X)
H = WishartGaussian(μ0, κ0, ν0, Σ0)
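
Read in the usual Normal-inverse-Wishart way, these four quantities are the prior mean μ0, the prior precision scaling κ0, the degrees of freedom ν0 and the scale matrix Σ0 of the conjugate prior for a Gaussian with unknown mean and covariance (this is an interpretation of the constructor arguments, not a statement of the package's internals):

Σ ~ InverseWishart(ν0, Σ0),    μ | Σ ~ Normal(μ0, Σ / κ0)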

After defining the base distribution we can specify the model:

model = DPM(H)

which in this case is a Dirichlet Process Mixture. Each model has to be initialised; one possible initialisation approach for Dirichlet Process Mixtures is a k-means initialisation:

modelBuffer = init(X, model, KMeansInitialisation(k = 10))

The resulting buffer object can now be used to apply posterior inference on the model given X. In the following we apply Gibbs sampling for 500 iterations without burn-in or thinning:

models = train(modelBuffer, DPMHyperparam(), Gibbs(maxiter = 500))
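
Since train returns one model state per iteration (as used below), burn-in and thinning can instead be applied afterwards by slicing the result; the cut points here are illustrative, not package defaults:

models_thinned = models[101:5:end]  # drop the first 100 samples, keep every 5th thereafter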

You should see the progress of the sampling process in the command line. After applying Gibbs sampling, it is possible to explore the posterior samples based on their densities,

densities = map(m -> m.energy, models)

the number of active components,

activeComponents = map(m -> sum(m.weights .> 0), models)

or the groupings of the observations:

assignments = map(m -> m.assignments, models)
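
These traces are plain Julia vectors and can be summarised directly, for example the posterior mean and the frequency of each number of active components:

using Statistics

mean(activeComponents)
componentCounts = Dict(k => count(==(k), activeComponents) for k in unique(activeComponents))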

The following animation illustrates posterior samples obtained by a Dirichlet Process Mixture:

[Animation: posterior samples of the Dirichlet Process Mixture]

Alternatively, one can compute a point estimate based on the posterior similarity matrix:

A = reduce(hcat, assignments)  # N × M matrix: one column of cluster assignments per posterior sample
(N, D) = size(X)
PSM = ones(N, N)  # posterior similarity matrix
M = size(A, 2)
for i in 1:N
  for j in 1:i-1
    # fraction of posterior samples in which observations i and j share a cluster
    PSM[i, j] = sum(A[i,:] .== A[j,:]) / M
    PSM[j, i] = PSM[i, j]
  end
end
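
The same matrix can also be written as a single array comprehension (equivalent result, at the cost of computing each symmetric entry twice):

PSM = [sum(A[i, :] .== A[j, :]) / M for i in 1:N, j in 1:N]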

From the posterior similarity matrix we can then find the optimal partition which minimizes the lower bound of the variation of information:

mink = minimum(length(m.weights) for m in models)
maxk = maximum(length(m.weights) for m in models)
(peassignments, _) = pointestimate(PSM, method = :average, mink = mink, maxk = maxk)
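
Because bloobs also returned the labels Y (assumed here to be the ground-truth cluster labels), the point estimate can be cross-tabulated against them as a quick sanity check; this sketch assumes peassignments holds one cluster index per observation:

trueLabels = sort(unique(Y))
clusters = sort(unique(peassignments))
# contingency table: rows are true labels, columns are point-estimate clusters
tab = [count((Y .== l) .& (peassignments .== c)) for l in trueLabels, c in clusters]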

The grouping which minimizes the lower bound of the variation of information is illustrated in the following image: [Image: point-estimate clustering of the observations]
