
luizgh / avc_nips_2018

Licence: other
Code to reproduce the attacks and defenses for the "JeromeR" entries in the NIPS 2018 Adversarial Vision Challenge

Programming Languages

python

Projects that are alternatives to or similar to avc_nips_2018

adv-dnn-ens-malware
adversarial examples, adversarial malware examples, adversarial malware detection, adversarial deep ensemble, Android malware variants
Stars: ✭ 33 (+83.33%)
Mutual labels:  adversarial-examples
procedural-advml
Task-agnostic universal black-box attacks on computer vision neural network via procedural noise (CCS'19)
Stars: ✭ 47 (+161.11%)
Mutual labels:  adversarial-examples
RobustTrees
[ICML 2019, 20 min long talk] Robust Decision Trees Against Adversarial Examples
Stars: ✭ 62 (+244.44%)
Mutual labels:  adversarial-examples
denoised-smoothing
Provably defending pretrained classifiers including the Azure, Google, AWS, and Clarifai APIs
Stars: ✭ 82 (+355.56%)
Mutual labels:  adversarial-examples
rs4a
Randomized Smoothing of All Shapes and Sizes (ICML 2020).
Stars: ✭ 47 (+161.11%)
Mutual labels:  adversarial-examples
generative adversary
Code for the unrestricted adversarial examples paper (NeurIPS 2018)
Stars: ✭ 58 (+222.22%)
Mutual labels:  adversarial-examples
pre-training
Pre-Training Buys Better Robustness and Uncertainty Estimates (ICML 2019)
Stars: ✭ 90 (+400%)
Mutual labels:  adversarial-examples
FGSM-Keras
Implementation of the Fast Gradient Sign Method for generating adversarial examples in Keras
Stars: ✭ 43 (+138.89%)
Mutual labels:  adversarial-examples
Adversarial Robustness Toolbox
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
Stars: ✭ 2,638 (+14555.56%)
Mutual labels:  adversarial-examples
Foolbox
A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
Stars: ✭ 2,108 (+11611.11%)
Mutual labels:  adversarial-examples
Adversarial-Examples-Paper
Paper list of Adversarial Examples
Stars: ✭ 20 (+11.11%)
Mutual labels:  adversarial-examples
adaptive-segmentation-mask-attack
Pre-trained model, code, and materials from the paper "Impact of Adversarial Examples on Deep Learning Models for Biomedical Image Segmentation" (MICCAI 2019).
Stars: ✭ 50 (+177.78%)
Mutual labels:  adversarial-examples
adversarial-vision-challenge
NIPS Adversarial Vision Challenge
Stars: ✭ 39 (+116.67%)
Mutual labels:  adversarial-examples
awesome-machine-learning-reliability
A curated list of awesome resources regarding machine learning reliability.
Stars: ✭ 31 (+72.22%)
Mutual labels:  adversarial-examples
tulip
Scalable input gradient regularization
Stars: ✭ 19 (+5.56%)
Mutual labels:  adversarial-examples
GROOT
[ICML 2021] A fast algorithm for fitting robust decision trees. http://proceedings.mlr.press/v139/vos21a.html
Stars: ✭ 15 (-16.67%)
Mutual labels:  adversarial-examples
ijcnn19attacks
Adversarial Attacks on Deep Neural Networks for Time Series Classification
Stars: ✭ 57 (+216.67%)
Mutual labels:  adversarial-examples
robust-local-lipschitz
A Closer Look at Accuracy vs. Robustness
Stars: ✭ 75 (+316.67%)
Mutual labels:  adversarial-examples
adversarial-attacks
Code for our CVPR 2018 paper, "On the Robustness of Semantic Segmentation Models to Adversarial Attacks"
Stars: ✭ 90 (+400%)
Mutual labels:  adversarial-examples

NIPS 2018 Adversarial Vision Challenge

Code to reproduce the attacks and defenses for the "JeromeR" entries in the NIPS 2018 Adversarial Vision Challenge (1st place on Untargeted attacks, 3rd place on Robust models and Targeted attacks).

Team name: LIVIA - ETS Montreal

Team members: Jérôme Rony, Luiz Gustavo Hafemann

Overview

Defense: We trained a robust model using a new iterative gradient-based L2 attack that we propose (Decoupled Direction and Norm, or DDN), which is fast enough to be used during training. In each training step, we find an adversarial example (using DDN) that is close to the decision boundary, and minimize the cross-entropy loss on this example. There is no change to the model architecture and no impact on inference time.
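The sketch below illustrates this training scheme in PyTorch. It is a hypothetical simplification: the helper names (ddn_like_attack, adversarial_training_step) and the norm-adjustment rule are illustrative only, not the repository's actual implementation; see train_tiny_imagenet_ddn.py and the paper [1] for the real algorithm.

import torch
import torch.nn.functional as F

def ddn_like_attack(model, x, y, steps=100, init_norm=1.0, gamma=0.05):
    # Simplified, DDN-flavoured L2 attack (illustrative only).
    # x: batch of images in [0, 1], shape (N, C, H, W); y: integer labels.
    delta = torch.zeros_like(x)
    norm = torch.full((x.size(0),), init_norm, device=x.device)
    for _ in range(steps):
        delta = delta.detach().requires_grad_(True)
        logits = model(x + delta)
        loss = F.cross_entropy(logits, y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            is_adv = (logits.argmax(1) != y).float()
            # Shrink the perturbation norm when already adversarial, grow it otherwise.
            norm = norm * (1.0 - gamma * is_adv + gamma * (1.0 - is_adv))
            # Step along the normalized gradient, then project back onto the L2 ball.
            g = grad / grad.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
            delta = delta + g
            scale = (norm / delta.flatten(1).norm(dim=1).clamp_min(1e-12)).view(-1, 1, 1, 1)
            delta = delta * scale
            delta = (x + delta).clamp(0, 1) - x  # keep the adversarial image in [0, 1]
    return (x + delta).detach()

def adversarial_training_step(model, optimizer, x, y):
    # One training step on the adversarial example found near the decision boundary.
    model.eval()
    x_adv = ddn_like_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()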

Attacks: Our attack is based on a collection of surrogate models (including robust models trained with DDN). For each surrogate model, we select two directions to attack: the gradient of the cross-entropy loss for the original class, and the direction given by running the DDN attack. For each direction, we do a binary search on the perturbation norm to find the decision boundary. We take the best resulting attack and refine it with a Boundary Attack.
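The binary-search step can be illustrated with the following hypothetical sketch (the function name and interface are assumptions, not the repository's API): given a unit-norm attack direction, it searches for the smallest perturbation norm within a budget that changes the surrogate model's prediction.

import torch

@torch.no_grad()
def boundary_binary_search(model, x, y, direction, max_norm=10.0, steps=20):
    # x: single image (C, H, W) in [0, 1]; y: original label (int);
    # direction: unit-norm tensor with the same shape as x.
    lo, hi = 0.0, max_norm
    best = None
    for _ in range(steps):
        mid = (lo + hi) / 2.0
        candidate = (x + mid * direction).clamp(0.0, 1.0)
        pred = model(candidate.unsqueeze(0)).argmax(dim=1).item()
        if pred != y:
            best, hi = candidate, mid  # adversarial: try a smaller norm
        else:
            lo = mid                   # still classified correctly: grow the norm
    return best  # None if no adversarial example was found within max_norm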

For more information on the DDN attack, refer to the paper and implementation:

[1] Jérôme Rony, Luiz G. Hafemann, Luiz S. Oliveira, Ismail Ben Ayed, Robert Sabourin and Eric Granger, "Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses", arXiv:1811.09600

Installation

Clone this repository and install the dependencies by running pip install -r requirements.txt
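For reference, a typical sequence might look like the following (the GitHub URL is an assumption based on the project name):

git clone https://github.com/luizgh/avc_nips_2018.git
cd avc_nips_2018
pip install -r requirements.txt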

Download the Tiny ImageNet dataset and extract it:

tar xvf tiny-imagenet-pytorch.tar.gz -C data

Optional: download the trained models resnext50_ddn (our robust model) and resnet18_clean (not adversarially trained).

Training a model

Adversarially train a model (using the DDN attack) starting from an ImageNet-pretrained resnext50_32x4d:

python train_tiny_imagenet_ddn.py data --sf tiny_ddn --adv --max-norm 1 --arch resnext50_32x4d --pretrained

To monitor training, you can start a visdom server and then add the argument --visdom-port <port> to the command above:

python -m visdom.server -port <port>

Running the attack

See "attack_example.py" for an example of the attack. If you downloaded the models from the Installation section, you can run the following code:

python attack_example.py --m resnet18_clean.pt --sm resnext50_32x4d_ddn.pt

This runs an attack against a resnet18 model, using an adversarially trained surrogate model.
