
YingzhenLi / Dropout_BBalpha

Licence: MIT
Implementations of the ICML 2017 paper (with Yarin Gal)

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives to or similar to Dropout_BBalpha

Hbayesdm
Hierarchical Bayesian modeling of RLDM tasks, using R & Python
Stars: ✭ 124 (+210%)
Mutual labels:  bayesian
Stan
Stan development repository. The master branch contains the current release. The develop branch contains the latest stable development. See the Developer Process Wiki for details.
Stars: ✭ 2,177 (+5342.5%)
Mutual labels:  bayesian
Probabilistic Models
Collection of probabilistic models and inference algorithms
Stars: ✭ 217 (+442.5%)
Mutual labels:  bayesian
Pecan
The Predictive Ecosystem Analyzer (PEcAn) is an integrated ecological bioinformatics toolbox.
Stars: ✭ 132 (+230%)
Mutual labels:  bayesian
Shinystan
shinystan R package and ShinyStan GUI
Stars: ✭ 172 (+330%)
Mutual labels:  bayesian
Pygpgo
Bayesian optimization for Python
Stars: ✭ 196 (+390%)
Mutual labels:  bayesian
Bayesiantracker
Bayesian multi-object tracking
Stars: ✭ 121 (+202.5%)
Mutual labels:  bayesian
spatial-smoothing
(ICML 2022) Official PyTorch implementation of “Blurs Behave Like Ensembles: Spatial Smoothings to Improve Accuracy, Uncertainty, and Robustness”.
Stars: ✭ 68 (+70%)
Mutual labels:  bayesian-deep-learning
Dynamichmc.jl
Implementation of robust dynamic Hamiltonian Monte Carlo methods (NUTS) in Julia.
Stars: ✭ 172 (+330%)
Mutual labels:  bayesian
Bayesiandeeplearning Survey
Bayesian Deep Learning: A Survey
Stars: ✭ 214 (+435%)
Mutual labels:  bayesian
Modelselection
Tutorial on model assessment, model selection and inference after model selection
Stars: ✭ 139 (+247.5%)
Mutual labels:  bayesian
Naive Bayes Classifier
yet another general purpose naive bayesian classifier.
Stars: ✭ 162 (+305%)
Mutual labels:  bayesian
Cornell Moe
A Python library for the state-of-the-art Bayesian optimization algorithms, with the core implemented in C++.
Stars: ✭ 198 (+395%)
Mutual labels:  bayesian
Dl Uncertainty
"What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?", NIPS 2017 (unofficial code).
Stars: ✭ 130 (+225%)
Mutual labels:  bayesian
pytorch-convcnp
A PyTorch Implementation of Convolutional Conditional Neural Process.
Stars: ✭ 41 (+2.5%)
Mutual labels:  bayesian-deep-learning
Statistical Rethinking
An interactive online reading of McElreath's Statistical Rethinking
Stars: ✭ 123 (+207.5%)
Mutual labels:  bayesian
Correlation
🔗 Methods for Correlation Analysis
Stars: ✭ 192 (+380%)
Mutual labels:  bayesian
probai-2019
Materials of the Nordic Probabilistic AI School 2019.
Stars: ✭ 127 (+217.5%)
Mutual labels:  bayesian
approxposterior
A Python package for approximate Bayesian inference and optimization using Gaussian processes
Stars: ✭ 36 (-10%)
Mutual labels:  approximate-inference
Elfi
ELFI - Engine for Likelihood-Free Inference
Stars: ✭ 208 (+420%)
Mutual labels:  bayesian

Dropout + BB-alpha for detecting adversarial examples

Thank you for your interest in our paper:

Yingzhen Li and Yarin Gal

Dropout inference in Bayesian neural networks with alpha-divergences

International Conference on Machine Learning (ICML), 2017

Please consider citing the paper if any of this material is used in your research.

Contributions: Yarin wrote most of the functions in BBalpha_dropout.py, and Yingzhen (me) derived the loss function and implemented the adversarial attack experiments.

How to use this code for your research

I've received quite a few emails asking how to incorporate our method into existing Keras code. So I also provide a template file; follow the comments inside to plug in your favourite model and dropout method.

template file: template_model.py
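The quantity the template minimises is the BB-alpha energy: up to constants, a tempered log-mean-exp of the per-datapoint log-likelihoods over K MC dropout samples. A minimal numpy sketch of the MC estimator (the function name and array shapes are illustrative, not the repo's API):

```python
import numpy as np

def bb_alpha_energy(log_probs, alpha=0.5):
    """MC estimate of the BB-alpha energy (up to constants).

    log_probs: array of shape (K, N) holding log p(y_n | x_n, omega_k)
               from K stochastic (dropout) forward passes on N datapoints.
    alpha:     the alpha in the alpha-divergence; alpha -> 0 recovers the
               standard variational (VI) objective.
    """
    K = log_probs.shape[0]
    # log-mean-exp over the K MC samples, tempered by alpha
    lme = np.logaddexp.reduce(alpha * log_probs, axis=0) - np.log(K)
    return -np.sum(lme) / alpha
```

A quick sanity check: with K = 1 the estimator reduces to the ordinary negative log-likelihood for any alpha, and for alpha close to 0 it approaches the average negative log-likelihood over the MC samples.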

Reproducing the adversarial attack example

We also provide the adversarial attack detection code. The attack implementation was adapted from the cleverhans toolbox (version 1.0), and I rewrote the targeted attack to make it an iterative method.
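For intuition, an iterative targeted attack repeatedly takes a small signed-gradient step toward the target label and clips the result back to the valid pixel range. A generic numpy sketch of that loop (this is not the repo's implementation, which computes the gradient through TensorFlow via cleverhans; `grad_fn` here is an assumed callable returning the gradient of the loss at the target label):

```python
import numpy as np

def iterative_targeted_attack(x, grad_fn, eps=0.01, steps=10):
    """Iterative targeted FGSM-style attack (sketch).

    x:       input image(s), pixel values assumed in [0, 1].
    grad_fn: callable returning d(loss)/dx evaluated at the *target*
             label; stepping against this gradient moves x toward
             the target class.
    """
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv - eps * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, 0.0, 1.0)  # keep pixels in range
    return x_adv
```

The single-step, untargeted variant of this loop (one step, gradient ascent on the true-label loss) is the FGSM attack tested below.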

To reproduce the experiments, first train a model on MNIST:

python train_model.py K_mc alpha nb_layers nb_units p model_arch

where K_mc is the number of MC samples used in training, alpha the α value of the BB-alpha objective, nb_layers the number of layers of the NN, nb_units the number of hidden units in each hidden layer, p the dropout rate (between 0 and 1), and model_arch either mlp or cnn.

This will train a model on MNIST data for 500 iterations and save the model. Then to test the FGSM attack, run

python adversarial_test.py

and change the settings in that python file to pick a saved model for testing. To test the targeted attack instead, run

python adversarial_test_targeted.py

Both files will produce a png file visualising the accuracy, the predictive entropy, and samples of the adversarial images (aligned with the x-axis of the plots).
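The detection signal plotted by both scripts is the predictive entropy of the MC dropout ensemble: adversarial perturbations tend to push the averaged predictive distribution toward higher entropy. A short numpy sketch of that quantity (the function name and shapes are assumptions for illustration, not the repo's API):

```python
import numpy as np

def predictive_entropy(mc_probs):
    """Entropy of the MC-averaged predictive distribution.

    mc_probs: array of shape (K, N, C) -- softmax outputs from K
              stochastic forward passes on N inputs with C classes.
    Returns N entropies (in nats); higher values indicate more
    predictive uncertainty.
    """
    p = mc_probs.mean(axis=0)                       # (N, C) predictive probs
    return -np.sum(p * np.log(p + 1e-12), axis=-1)  # per-input entropy
```

A confident one-hot prediction gives entropy near 0, while a uniform prediction over C classes gives the maximum value log C, which is what the entropy curves in the generated plots trace as the attack strength grows.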
