mgorinova / autoreparam

License: Apache-2.0
Automatic Reparameterisation of Probabilistic Programs

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to autoreparam

SIVI
Using neural network to build expressive hierarchical distribution; A variational method to accurately estimate posterior uncertainty; A fast and general method for Bayesian inference. (ICML 2018)
Stars: ✭ 49 (+68.97%)
Mutual labels:  mcmc, variational-inference
Pymc3
Probabilistic Programming in Python: Bayesian Modeling and Probabilistic Machine Learning with Aesara
Stars: ✭ 6,214 (+21327.59%)
Mutual labels:  mcmc, variational-inference
cmdstanr
CmdStanR: the R interface to CmdStan
Stars: ✭ 82 (+182.76%)
Mutual labels:  mcmc, variational-inference
Bayesian Neural Networks
Pytorch implementations of Bayes By Backprop, MC Dropout, SGLD, the Local Reparametrization Trick, KF-Laplace, SG-HMC and more
Stars: ✭ 900 (+3003.45%)
Mutual labels:  mcmc, variational-inference
Deep Generative Models For Natural Language Processing
DGMs for NLP. A roadmap.
Stars: ✭ 185 (+537.93%)
Mutual labels:  mcmc, variational-inference
Gpstuff
GPstuff - Gaussian process models for Bayesian analysis
Stars: ✭ 106 (+265.52%)
Mutual labels:  mcmc, variational-inference
Boltzmann Machines
Boltzmann Machines in TensorFlow with examples
Stars: ✭ 768 (+2548.28%)
Mutual labels:  mcmc, variational-inference
Probabilistic Models
Collection of probabilistic models and inference algorithms
Stars: ✭ 217 (+648.28%)
Mutual labels:  mcmc, variational-inference
rss
Regression with Summary Statistics.
Stars: ✭ 42 (+44.83%)
Mutual labels:  mcmc, variational-inference
reinforcement-learning-papers
My notes on reinforcement learning papers
Stars: ✭ 13 (-55.17%)
Mutual labels:  papers
BayesHMM
Full Bayesian Inference for Hidden Markov Models
Stars: ✭ 35 (+20.69%)
Mutual labels:  mcmc
my-bookshelf
Collection of books/papers that I've read/I'm going to read/I would remember that they exist/It is unlikely that I'll read/I'll never read.
Stars: ✭ 49 (+68.97%)
Mutual labels:  papers
awesome-quant-papers
This repository hosts my reading notes for academic papers.
Stars: ✭ 28 (-3.45%)
Mutual labels:  papers
VOS-Paper-List
Semi-Supervised Video Object Segmentation(VOS) Paper List
Stars: ✭ 28 (-3.45%)
Mutual labels:  papers
mc3
Python MCMC Sampler
Stars: ✭ 25 (-13.79%)
Mutual labels:  mcmc
papis-zotero
Zotero compatibility scripts for papis
Stars: ✭ 29 (+0%)
Mutual labels:  papers
Data-Science-and-Machine-Learning-Resources
List of Data Science and Machine Learning Resource that I frequently use
Stars: ✭ 19 (-34.48%)
Mutual labels:  papers
Tire-a-part
Digital repository for the papers of a research organization.
Stars: ✭ 24 (-17.24%)
Mutual labels:  papers
uncertainty-calibration
A collection of research and application papers of (uncertainty) calibration techniques.
Stars: ✭ 120 (+313.79%)
Mutual labels:  papers
ladder-vae-pytorch
Ladder Variational Autoencoders (LVAE) in PyTorch
Stars: ✭ 59 (+103.45%)
Mutual labels:  variational-inference

Automatic Reparameterisation of Probabilistic Programs

This repository contains code associated with the paper:

M. I. Gorinova, D. Moore, and M. D. Hoffman. Automatic Reparameterisation of Probabilistic Programs. 2019.

Usage

The script main.py is the main entry point. For example, to evaluate the German credit model with four leapfrog steps per sample, you might run:

# Run variational inference to get step sizes and initialization.
python main.py --model=german_credit_lognormalcentered --inference=VI --method=CP --num_optimization_steps=3000 --results_dir=./results/
# Run HMC to sample from the posterior
python main.py --model=german_credit_lognormalcentered --inference=HMC --method=CP --num_leapfrog_steps=4 --num_samples=50000 --num_burnin_steps=10000 --num_adaptation_steps=6000 --results_dir=./results/

Available options are:

  • method: CP, NCP, cVIP, dVIP, i. Note that i is only available when inference is set to HMC.
  • inference: VI or HMC. VI must be run first for every model: it creates a log file containing information (such as the initial step size) that is used when running HMC.
  • model: radon_stddvs, radon, german_credit_lognormalcentered, german_credit_gammascale, 8schools, electric, election and time_series
  • dataset (used only for radon models): MA, IN, PA, MO, ND, or AZ
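To make the CP and NCP options concrete, here is a minimal sketch of the centered vs non-centered parameterisation of a hierarchical normal variable, using plain NumPy. This is illustrative only and is not the repository's implementation; the variable names are made up.

```python
# Centered (CP) vs non-centered (NCP) parameterisation of
# theta ~ Normal(mu, sigma), sketched with NumPy.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.5, 2.0

# Centered: draw theta directly from its conditional prior.
theta_cp = rng.normal(mu, sigma, size=1000)

# Non-centered: draw an auxiliary standard normal z and
# transform deterministically, theta = mu + sigma * z.
z = rng.normal(0.0, 1.0, size=1000)
theta_ncp = mu + sigma * z
```

Both draws target the same distribution; the non-centered form decouples theta from mu and sigma, which often improves HMC geometry in hierarchical models (the cVIP/dVIP methods interpolate between these two extremes).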

To generate human-readable analysis, run

python analyze.py --elbos --all
python analyze.py --ess --all
python analyze.py --reparams --all
python analyze.py --elbos --model=8schools

The number of leapfrog steps is tuned automatically if (1) no num_leapfrog_steps argument is supplied and (2) no num_leapfrog_steps entry exists in the corresponding .json file.

When the number of leapfrog steps is tuned, the best value is recorded in a .json file so it can be reused in subsequent runs.
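The caching behaviour above can be sketched as follows. The file name and directory layout here are hypothetical (only the num_leapfrog_steps key comes from the text); this is just a sketch of reading and updating such a JSON record.

```python
# Sketch: cache a tuned num_leapfrog_steps value in a JSON file
# so later runs can skip tuning. Path is hypothetical.
import json
import os

path = "results/german_credit_lognormalcentered.json"

# Load the existing record, if any.
record = {}
if os.path.exists(path):
    with open(path) as f:
        record = json.load(f)

# Only tune (here: pretend the tuner returned 4) when no entry exists.
if "num_leapfrog_steps" not in record:
    record["num_leapfrog_steps"] = 4
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        json.dump(record, f)
```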

This code has been tested with TensorFlow 1.14 and TensorFlow Probability 0.7.0.
