
martinferianc / BayesianNeuralNets

License: Apache-2.0
Bayesian neural networks in PyTorch

Programming Languages

Python

Projects that are alternatives of or similar to BayesianNeuralNets

ArviZ.jl
Exploratory analysis of Bayesian models with Julia
Stars: ✭ 67 (+294.12%)
Mutual labels:  bayesian
awesome-agi-cocosci
An awesome & curated list for Artificial General Intelligence, an emerging inter-discipline field that combines artificial intelligence and computational cognitive sciences.
Stars: ✭ 81 (+376.47%)
Mutual labels:  bayesian
stan4bart
Uses Stan sampler and math library to semiparametrically fit linear and multilevel models with additive Bayesian Additive Regression Tree (BART) components.
Stars: ✭ 13 (-23.53%)
Mutual labels:  bayesian
BayesianSocialScience
Data science methodology for social scientists (code repository)
Stars: ✭ 22 (+29.41%)
Mutual labels:  bayesian
pytorch ard
Pytorch implementation of Variational Dropout Sparsifies Deep Neural Networks
Stars: ✭ 76 (+347.06%)
Mutual labels:  bayesian-neural-networks
BayesByHypernet
Code for the paper Implicit Weight Uncertainty in Neural Networks
Stars: ✭ 63 (+270.59%)
Mutual labels:  bayesian-neural-networks
DUN
Code for "Depth Uncertainty in Neural Networks" (https://arxiv.org/abs/2006.08437)
Stars: ✭ 65 (+282.35%)
Mutual labels:  bayesian-neural-networks
MLExp
arxiv.org/abs/1801.07710v2
Stars: ✭ 32 (+88.24%)
Mutual labels:  bayesian
tukey
Mini stats toolkit for Clojure/Script
Stars: ✭ 17 (+0%)
Mutual labels:  bayesian
BAS
BAS R package https://merliseclyde.github.io/BAS/
Stars: ✭ 36 (+111.76%)
Mutual labels:  bayesian
pymc3-hmm
Hidden Markov models in PyMC3
Stars: ✭ 81 (+376.47%)
Mutual labels:  bayesian
LogDensityProblems.jl
A common framework for implementing and using log densities for inference.
Stars: ✭ 26 (+52.94%)
Mutual labels:  bayesian
UString
[ACM MM 2020] Uncertainty-based Traffic Accident Anticipation
Stars: ✭ 38 (+123.53%)
Mutual labels:  bayesian-neural-networks
MultiBUGS
Multi-core BUGS for fast Bayesian inference of large hierarchical models
Stars: ✭ 28 (+64.71%)
Mutual labels:  bayesian
artificial neural networks
A collection of Methods and Models for various architectures of Artificial Neural Networks
Stars: ✭ 40 (+135.29%)
Mutual labels:  bayesian-neural-networks
MTfit
MTfit code for Bayesian Moment Tensor Fitting
Stars: ✭ 61 (+258.82%)
Mutual labels:  bayesian
DynamicHMCExamples.jl
Examples for Bayesian inference using DynamicHMC.jl and related packages.
Stars: ✭ 33 (+94.12%)
Mutual labels:  bayesian
models-by-example
By-hand code for models and algorithms. An update to the 'Miscellaneous-R-Code' repo.
Stars: ✭ 43 (+152.94%)
Mutual labels:  bayesian
phyr
Functions for phylogenetic analyses
Stars: ✭ 23 (+35.29%)
Mutual labels:  bayesian
stantargets
Reproducible Bayesian data analysis pipelines with targets and cmdstanr
Stars: ✭ 31 (+82.35%)
Mutual labels:  bayesian

Bayesian neural networks

This repository demonstrates the most popular variants and flavours of Bayesian neural networks, applies them to a number of tasks (binary classification, regression and MNIST classification) through a number of neural network architectures (feed-forward and convolutional), and compares their performance to identical plain pointwise networks.

Description

Deep learning tools have gained tremendous attention in applied machine learning. However, such tools for regression and classification do not capture model uncertainty. In comparison, Bayesian models offer a mathematically grounded framework to reason about model uncertainty, but usually come with a prohibitive computational cost [Gal et al., 2015].

In this repository, I demonstrate the capabilities of multiple methods that introduce Bayesian inference and uncertainty quantification to standard neural networks, across multiple tasks. The tasks include binary classification, regression and classification of MNIST digits under rotation. Each method/architecture is benchmarked against its pointwise counterpart, which has been hand-tuned for best performance.

Bayesian methods: The implemented Bayesian methods are Bayes-by-backprop (with the added lower-variance local reparametrisation trick), Monte Carlo dropout and stochastic gradient Langevin dynamics (with pre-conditioning).
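To make the first method concrete, below is a minimal sketch of a Bayes-by-backprop linear layer with the local reparametrisation trick. The class name, initialisation and unit-Gaussian prior are illustrative assumptions, not the repository's exact code (for that, see src/models/stochastic/bbb):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class BayesLinearLR(nn.Module):
        """Mean-field Gaussian linear layer, sampled via local reparameterisation."""

        def __init__(self, in_features, out_features, prior_sigma=1.0):
            super().__init__()
            # Variational posterior q(W) = N(mu, sigma^2) with sigma = softplus(rho)
            self.weight_mu = nn.Parameter(torch.randn(out_features, in_features) * 0.1)
            self.weight_rho = nn.Parameter(torch.full((out_features, in_features), -5.0))
            self.bias = nn.Parameter(torch.zeros(out_features))
            self.prior_sigma = prior_sigma

        def forward(self, x):
            sigma = F.softplus(self.weight_rho)
            # Local reparameterisation: sample the pre-activations rather than the
            # weights, which gives lower-variance gradient estimates
            act_mu = F.linear(x, self.weight_mu, self.bias)
            act_var = F.linear(x.pow(2), sigma.pow(2))
            return act_mu + act_var.clamp_min(1e-12).sqrt() * torch.randn_like(act_mu)

        def kl(self):
            # Closed-form KL(q(W) || N(0, prior_sigma^2)) for factorised Gaussians
            sigma = F.softplus(self.weight_rho)
            return (torch.log(self.prior_sigma / sigma)
                    + (sigma.pow(2) + self.weight_mu.pow(2)) / (2 * self.prior_sigma ** 2)
                    - 0.5).sum()

During training, the loss is the negative log-likelihood plus the summed kl() terms scaled down by the number of minibatches per epoch, i.e. the negative evidence lower bound (ELBO).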

Each method has its own separate model definition or optimiser for clarity, and all methods are benchmarked under the same settings. The settings were hand-tuned, so if you find better ones, definitely let me know.
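As an example of the optimiser route, a bare-bones (non-pre-conditioned) SGLD step can be written as a standard PyTorch optimiser. This is a sketch of the Welling & Teh (2011) update, not necessarily the repository's exact implementation in src/models/stochastic/sgld:

    import math
    import torch
    from torch.optim import Optimizer

    class SGLD(Optimizer):
        """Stochastic gradient Langevin dynamics: SGD plus properly scaled noise."""

        def __init__(self, params, lr=1e-4):
            super().__init__(params, dict(lr=lr))

        @torch.no_grad()
        def step(self, closure=None):
            for group in self.param_groups:
                lr = group['lr']
                for p in group['params']:
                    if p.grad is None:
                        continue
                    # Half-step on the minibatch estimate of the negative log posterior
                    p.add_(p.grad, alpha=-0.5 * lr)
                    # Gaussian noise with variance lr turns SGD into a posterior sampler
                    p.add_(torch.randn_like(p), alpha=math.sqrt(lr))

Here p.grad is assumed to estimate the gradient of the full-dataset negative log posterior (likelihood scaled by the dataset-to-minibatch ratio, prior supplied e.g. via weight decay); predictions are then averaged over a thinned trajectory of parameter snapshots.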

Structure

   .
   |-experiments            # Where all the experiments are located
   |---data                 # Where all the data is stored
   |---scripts              # Pre-configured scripts for running the experiments
   |-src                    # Main source folder, also containing the model descriptions
   |---models
   |-----pointwise
   |-----stochastic
   |-------bbb              # Implementation of Bayes-by-backprop with local reparametrisation trick
   |-------mc_dropout       # Monte Carlo dropout
   |-------sgld             # Stochastic gradient Langevin dynamics with and without pre-conditioning
   |-figs                   # Sample figures, corresponding to the collected results for the respective methods (collected and transferred manually)

Experiments

There are in total three different experiments: regression, binary classification and MNIST digit classification. The default runners for the experiments live in the experiments folder, and pre-configured scripts for easy runs can be found under experiments/scripts/.

To run the experiments, simply prepare yourself a virtual environment, navigate to the experiments/scripts folder, pick one of the methods/tasks and run it as:

python3 bbb_binary_classification.py

No additional tuning should be necessary. However, the scripts assume that you have a GPU available for training and inference. In case you just want to use the CPU, do:

python3 bbb_binary_classification.py --gpu -1

Results

These are the results that you can expect with pre-configured scripts for binary classification:

(Figures: Pointwise | MC Dropout | BBB | SGLD; see the figs folder)

These are the results that you can expect with pre-configured scripts for regression:

(Figures: Pointwise | MC Dropout | BBB | SGLD)

These are the results that you can expect with the pre-configured scripts for MNIST classification on a rotated digit:

Rotated digit one
(Figures: Pointwise | MC Dropout | BBB | SGLD)
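The rotated inputs themselves can be produced with torchvision; a minimal sketch, assuming digit is a [1, 28, 28] image tensor and a recent-enough torchvision whose rotate accepts tensors:

    import torchvision.transforms.functional as TF

    # Sweep a single MNIST digit through increasing rotation angles (in degrees)
    rotated = [TF.rotate(digit, angle) for angle in range(0, 180, 15)]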

These are the results for confidence on out-of-distribution samples of random Gaussian noise with zero mean and unit variance:

(Figures: Pointwise | MC Dropout | BBB | SGLD)
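For every stochastic method, such confidence numbers come from Monte Carlo sampling of the predictive distribution: average the softmax over several stochastic forward passes (weight samples for BBB/SGLD, live dropout masks for MC dropout) and report the maximum class probability and the predictive entropy. A hedged sketch, with a hypothetical helper name rather than the repository's API:

    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def predictive_confidence(model, x, num_samples=20):
        # Keep stochastic layers active at test time; in a full implementation only
        # the dropout/sampling layers should be in train mode, not e.g. batch norm
        model.train()
        probs = torch.stack([F.softmax(model(x), dim=-1) for _ in range(num_samples)])
        mean_probs = probs.mean(dim=0)              # Monte Carlo predictive posterior
        confidence = mean_probs.max(dim=-1).values  # maximum class probability
        entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
        return confidence, entropy

    # Out-of-distribution inputs: zero-mean, unit-variance Gaussian noise
    # x_ood = torch.randn(64, 1, 28, 28)
    # confidence, entropy = predictive_confidence(model, x_ood)

A well-calibrated Bayesian model should report low confidence (high entropy) on such noise, whereas the pointwise network tends to remain over-confident.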

Requirements

The main requirement is PyTorch>=1.5.0.

To be able to run this project and install the requirements, simply execute the following (works for GPUs or CPUs):

git clone https://github.com/martinferianc/BayesianNeuralNet-Tutorial
cd BayesianNeuralNet-Tutorial
conda create --name venv --file requirements.txt python=3.6
conda activate venv

All the experiments are then in the experiments folder in the main directory of this repository.

Contributing

This is a hobby project, so if you would like to contribute more methods/tasks etc., simply open a pull request and we can add your code and make you a contributor (I am really happy to share credit, because actively maintaining this on my own is going to be beyond my capacity).
