
bethgelab / adversarial-vision-challenge

Licence: other
NIPS Adversarial Vision Challenge

Programming Languages

python
139335 projects - #7 most used programming language
shell
77523 projects

Projects that are alternatives of or similar to adversarial-vision-challenge

robust-local-lipschitz
A Closer Look at Accuracy vs. Robustness
Stars: ✭ 75 (+92.31%)
Mutual labels:  robustness, adversarial-examples
pre-training
Pre-Training Buys Better Robustness and Uncertainty Estimates (ICML 2019)
Stars: ✭ 90 (+130.77%)
Mutual labels:  robustness, adversarial-examples
eleanor
Code used during my Chaos Engineering and Resiliency Patterns talk.
Stars: ✭ 14 (-64.1%)
Mutual labels:  robustness
RayS
RayS: A Ray Searching Method for Hard-label Adversarial Attack (KDD2020)
Stars: ✭ 43 (+10.26%)
Mutual labels:  robustness
shortcut-perspective
Figures & code from the paper "Shortcut Learning in Deep Neural Networks" (Nature Machine Intelligence 2020)
Stars: ✭ 67 (+71.79%)
Mutual labels:  robustness
avc nips 2018
Code to reproduce the attacks and defenses for the entries "JeromeR" in the NIPS 2018 Adversarial Vision Challenge
Stars: ✭ 18 (-53.85%)
Mutual labels:  adversarial-examples
GROOT
[ICML 2021] A fast algorithm for fitting robust decision trees. http://proceedings.mlr.press/v139/vos21a.html
Stars: ✭ 15 (-61.54%)
Mutual labels:  adversarial-examples
TIGER
Python toolbox to evaluate graph vulnerability and robustness (CIKM 2021)
Stars: ✭ 103 (+164.1%)
Mutual labels:  robustness
DiagnoseRE
Source code and dataset for the CCKS 2021 paper "On Robustness and Bias Analysis of BERT-based Relation Extraction"
Stars: ✭ 23 (-41.03%)
Mutual labels:  robustness
robustness-vit
Contains code for the paper "Vision Transformers are Robust Learners" (AAAI 2022).
Stars: ✭ 78 (+100%)
Mutual labels:  robustness
awesome-machine-learning-reliability
A curated list of awesome resources regarding machine learning reliability.
Stars: ✭ 31 (-20.51%)
Mutual labels:  adversarial-examples
cycle-confusion
Code and models for ICCV2021 paper "Robust Object Detection via Instance-Level Temporal Cycle Confusion".
Stars: ✭ 67 (+71.79%)
Mutual labels:  robustness
adversarial-attacks
Code for our CVPR 2018 paper, "On the Robustness of Semantic Segmentation Models to Adversarial Attacks"
Stars: ✭ 90 (+130.77%)
Mutual labels:  adversarial-examples
perceptual-advex
Code and data for the ICLR 2021 paper "Perceptual Adversarial Robustness: Defense Against Unseen Threat Models".
Stars: ✭ 44 (+12.82%)
Mutual labels:  robustness
Generalization-Causality
Reading notes on various lines of research covering domain generalization, domain adaptation, causality, robustness, prompting, optimization, and generative models
Stars: ✭ 482 (+1135.9%)
Mutual labels:  robustness
Robust-Semantic-Segmentation
Dynamic Divide-and-Conquer Adversarial Training for Robust Semantic Segmentation (ICCV2021)
Stars: ✭ 25 (-35.9%)
Mutual labels:  robustness
Metric Learning Adversarial Robustness
Code for NeurIPS 2019 Paper
Stars: ✭ 44 (+12.82%)
Mutual labels:  robustness
s-attack
[CVPR 2022] S-attack library. Official implementation of two papers "Vehicle trajectory prediction works, but not everywhere" and "Are socially-aware trajectory prediction models really socially-aware?".
Stars: ✭ 51 (+30.77%)
Mutual labels:  robustness
Adversarial-Distributional-Training
Adversarial Distributional Training (NeurIPS 2020)
Stars: ✭ 52 (+33.33%)
Mutual labels:  robustness
belay
Robust error-handling for Kotlin and Android
Stars: ✭ 35 (-10.26%)
Mutual labels:  robustness

NIPS Adversarial Vision Challenge


Publication

https://arxiv.org/abs/1808.01976

Installation

To install the package simply run:

pip install adversarial-vision-challenge

This package contains helper functions for implementing models and attacks. It can be used with Python 2.7, 3.4, 3.5 and 3.6; other Python versions might work as well. We recommend using Python 3!

Furthermore, this package contains test scripts that should be run before submission to test your model or attack locally. These test scripts are Python 3 only because they depend on crowdai-repo2docker. See the Running the Test Scripts section below for more detailed information.

Implementing a model

To run a model server, load your model and wrap it as a foolbox model. Then pass the foolbox model to the model_server function.

from adversarial_vision_challenge import model_server

foolbox_model = load_your_foolbox_model()
model_server(foolbox_model)
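
For example, a PyTorch classifier could be wrapped roughly as follows. This is a minimal sketch, not the official example: the torchvision ResNet-18 merely stands in for your own trained network, and the wrapper call assumes the foolbox 1.x PyTorchModel interface that was current at the time of the challenge.

import foolbox
import torchvision
from adversarial_vision_challenge import model_server

# placeholder for your own trained 200-class Tiny ImageNet classifier
model = torchvision.models.resnet18(num_classes=200)
model.eval()

# the challenge requires model bounds of (0, 255), see the FAQ below
foolbox_model = foolbox.models.PyTorchModel(
    model, bounds=(0, 255), num_classes=200)

# start the server that the challenge infrastructure queries for predictions
model_server(foolbox_model)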

Implementing an attack

To run an attack, use the load_model function to get a model instance; the instance is callable and returns the predicted label for an image.

from adversarial_vision_challenge.utils import read_images, store_adversarial
from adversarial_vision_challenge.utils import load_model

model = load_model()

for (file_name, image, label) in read_images():
    # model is callable and returns the predicted class,
    # i.e. 0 <= model(image) < 200

    # run your adversarial attack
    adversarial = your_attack(model, image, label)

    # store the adversarial
    store_adversarial(file_name, adversarial)
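
For orientation, a deliberately simple your_attack could look like the sketch below: a Gaussian-noise search that only relies on the model being callable and returning a class index. It is purely illustrative, not a competitive attack, and returning None for a failed attack is an assumption to check against the challenge documentation.

import numpy as np

def your_attack(model, image, label):
    # try Gaussian noise of increasing strength until the predicted label flips
    for scale in np.linspace(1.0, 128.0, 50):
        noise = np.random.normal(scale=scale, size=image.shape)
        candidate = np.clip(image + noise, 0, 255).astype(image.dtype)
        if model(candidate) != label:
            return candidate
    # no adversarial found within the budget
    return None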
    
Running the Test Scripts

The test scripts (which run on your host machine) require Python 3. Your model or attack itself, which runs inside a Docker container and uses this package, can use Python 2 or 3.

Run the corresponding command within the folder you want to test:

  • To test a model, run: avc-test-model .
  • To test an untargeted attack, run: avc-test-untargeted-attack .
  • To test a targeted attack, run: avc-test-targeted-attack .

In order for the test scripts to work, your model / attack folders need to have the following structure:

FAQ

Can you recommend some papers to get more familiar with adversarial examples, attacks and the threat model considered in this NIPS competition?

Have a look at our reading list that summarizes papers relevant for this competition.

How can I cite the competition in my own work?

@inproceedings{adversarial_vision_challenge,
  title = {Adversarial Vision Challenge},
  author = {Brendel, Wieland and Rauber, Jonas and Kurakin, Alexey and Papernot, Nicolas and Veliqi, Behar and Salath{\'e}, Marcel and Mohanty, Sharada P and Bethge, Matthias},
  booktitle = {32nd Conference on Neural Information Processing Systems (NIPS 2018) Competition Track},
  year = {2018},
  url = {https://arxiv.org/abs/1808.01976}
}

Why can I not pass bounds = (0, 1) when creating the foolbox model?

We expect that all models process images that have values between 0 and 255. Therefore, we enforce that the model bounds are set to (0, 255). If your model expects images with values between 0 and 1, you can just pass bounds=(0, 255) and preprocessing=(0, 255), then the Foolbox model wrapper will divide all inputs by 255. Alternatively, you can leave preprocessing at (0, 1) and change your model to expect values between 0 and 255.
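
Put differently, the recommended setup for a network trained on inputs between 0 and 1 looks roughly like this (a sketch assuming the foolbox 1.x PyTorchModel interface; the ResNet-18 is just a placeholder for your own model):

import foolbox
import torchvision

# placeholder network that expects inputs in [0, 1]
net = torchvision.models.resnet18(num_classes=200)
net.eval()

# bounds stay (0, 255); preprocessing=(0, 255) makes foolbox compute
# (x - 0) / 255, so the network itself still sees values in [0, 1]
foolbox_model = foolbox.models.PyTorchModel(
    net, bounds=(0, 255), num_classes=200, preprocessing=(0, 255))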

How is the score for an individual model, attack, image calculated?

We normalize the pixel values of the clean image and the adversarial image to be between 0 and 1 and then take the L2 norm of the perturbation (adversarial - clean), treating the images as vectors.
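
As a sketch (assuming 8-bit images with pixel values in [0, 255]), the per-image perturbation size is computed like this:

import numpy as np

def perturbation_size(clean, adversarial):
    # scale both images to [0, 1] and take the L2 norm of the difference
    clean = np.asarray(clean, dtype=np.float64) / 255.0
    adversarial = np.asarray(adversarial, dtype=np.float64) / 255.0
    return np.linalg.norm((adversarial - clean).ravel())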
