
ricvolpi / domain-shift-robustness

Licence: MIT License
Code for the paper "Addressing Model Vulnerability to Distributional Shifts over Image Transformation Sets", ICCV 2019

Programming Languages

Python, Shell

Projects that are alternatives to or similar to domain-shift-robustness

procedural-advml
Task-agnostic universal black-box attacks on computer vision neural network via procedural noise (CCS'19)
Stars: ✭ 47 (+113.64%)
Mutual labels:  adversarial-attacks, black-box-attacks
square-attack
Square Attack: a query-efficient black-box adversarial attack via random search [ECCV 2020]
Stars: ✭ 89 (+304.55%)
Mutual labels:  adversarial-attacks, black-box-attacks
AWP
Codes for NeurIPS 2020 paper "Adversarial Weight Perturbation Helps Robust Generalization"
Stars: ✭ 114 (+418.18%)
Mutual labels:  adversarial-attacks, adversarial-training
KitanaQA
KitanaQA: Adversarial training and data augmentation for neural question-answering models
Stars: ✭ 58 (+163.64%)
Mutual labels:  adversarial-attacks, adversarial-training
sparse-rs
Sparse-RS: a versatile framework for query-efficient sparse black-box adversarial attacks
Stars: ✭ 24 (+9.09%)
Mutual labels:  adversarial-attacks, black-box-attacks
ijcnn19attacks
Adversarial Attacks on Deep Neural Networks for Time Series Classification
Stars: ✭ 57 (+159.09%)
Mutual labels:  adversarial-attacks
Robust-Semantic-Segmentation
Dynamic Divide-and-Conquer Adversarial Training for Robust Semantic Segmentation (ICCV2021)
Stars: ✭ 25 (+13.64%)
Mutual labels:  adversarial-training
gans-in-action
Code repository for "GAN in Action" (Hanbit Media, 2020).
Stars: ✭ 29 (+31.82%)
Mutual labels:  adversarial-attacks
FeatureScatter
Feature Scattering Adversarial Training
Stars: ✭ 64 (+190.91%)
Mutual labels:  adversarial-training
nn robustness analysis
Python tools for analyzing the robustness properties of neural networks (NNs) from MIT ACL
Stars: ✭ 36 (+63.64%)
Mutual labels:  adversarial-attacks
Adversarial-Distributional-Training
Adversarial Distributional Training (NeurIPS 2020)
Stars: ✭ 52 (+136.36%)
Mutual labels:  adversarial-training
perceptual-advex
Code and data for the ICLR 2021 paper "Perceptual Adversarial Robustness: Defense Against Unseen Threat Models".
Stars: ✭ 44 (+100%)
Mutual labels:  adversarial-attacks
Attack-ImageNet
No.2 solution of Tianchi ImageNet Adversarial Attack Challenge.
Stars: ✭ 41 (+86.36%)
Mutual labels:  adversarial-attacks
DiagnoseRE
Source code and dataset for the CCKS 2021 paper "On Robustness and Bias Analysis of BERT-based Relation Extraction"
Stars: ✭ 23 (+4.55%)
Mutual labels:  adversarial-attacks
code-soup
A collection of algorithms and approaches from the book "Adversarial Deep Learning"
Stars: ✭ 18 (-18.18%)
Mutual labels:  adversarial-attacks
robust-ood-detection
Robust Out-of-distribution Detection in Neural Networks
Stars: ✭ 55 (+150%)
Mutual labels:  adversarial-attacks
mrqa
Code for EMNLP-IJCNLP 2019 MRQA Workshop Paper: "Domain-agnostic Question-Answering with Adversarial Training"
Stars: ✭ 35 (+59.09%)
Mutual labels:  domain-generalization
cool-papers-in-pytorch
Reimplementing cool papers in PyTorch...
Stars: ✭ 21 (-4.55%)
Mutual labels:  adversarial-attacks
consistency-adversarial
Consistency Regularization for Adversarial Robustness (AAAI 2022)
Stars: ✭ 37 (+68.18%)
Mutual labels:  adversarial-training
foofah
Foofah: programming-by-example data transformation program synthesizer
Stars: ✭ 24 (+9.09%)
Mutual labels:  combinatorial-search

Code for the paper "Addressing Model Vulnerability to Distributional Shifts over Image Transformation Sets"

Overview

The code in this repo allows:

  1. Testing the vulnerability of (black-box) models via random search and evolution search over arbitrary transformation sets.

  2. Training more robust models via the RDA/RSDA/ESDA algorithms presented in the paper.

Here a small ConvNet and the MNIST dataset are used, but applying these tools to arbitrary tasks/models is straightforward. Feel free to drop me a message if you have any feedback.
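The random-search attack in point 1 can be sketched as follows. This is a minimal illustration, not the repo's API: model_acc (a black-box that maps a transformation tuple to test accuracy) and the discrete intensity grids are hypothetical stand-ins.

```python
import random

def random_search(model_acc, levels_per_tf, n_iters=100, seed=0):
    """Random search (RS): sample transformation tuples uniformly at random
    and keep the one that most degrades the black-box model's accuracy.

    model_acc: callable mapping a transformation tuple -> accuracy in [0, 1].
    levels_per_tf: one discrete grid of intensity levels per transformation.
    """
    rng = random.Random(seed)
    worst_tf, worst_acc = None, float("inf")
    for _ in range(n_iters):
        # Sample one intensity level per transformation in the set.
        tf = tuple(rng.choice(levels) for levels in levels_per_tf)
        acc = model_acc(tf)  # one black-box query per candidate
        if acc < worst_acc:
            worst_tf, worst_acc = tf, acc
    return worst_tf, worst_acc
```

For example, with two transformations whose grids are `[0, 1, 2]` each, the search returns the sampled tuple on which the model scored lowest.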

Files

model.py: builds the TensorFlow graph

train_ops.py: train/test functions

search_ops.py: search algorithms (RS/ES from the paper)

transformations_ops.py: modules to build the image transformation set and apply transformations

exp_config: config file with the hyperparameters

Some pretrained models are available in a heavier version of this repo.

Prerequisites

Python 2.7, TensorFlow 1.12.0

How it works

To obtain the MNIST and SVHN datasets, run

mkdir data
python download_and_process_mnist.py
sh download_svhn.sh

To train the model, run

python main.py --mode=train_MODE --gpu=GPU_IDX --exp_dir=EXP_DIR

where MODE can be one of {ERM, RDA, RSDA, ESDA}, GPU_IDX is the index of the GPU to be used, and EXP_DIR is the folder containing the exp_config file.

To run evolution search (ES) or random search (RS) on a trained model, run

python main.py --mode=test_MODE --gpu=GPU_IDX --exp_dir=EXP_DIR

where MODE can be one of {RS, ES}. For ES, the population size POP_SIZE and the mutation rate ETA can be set with

python main.py --mode=test_ES --gpu=GPU_IDX --exp_dir=EXP_DIR --pop_size=POP_SIZE --mutation_rate=ETA
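The role of POP_SIZE and ETA can be illustrated with a generic evolution-search sketch. This is not the repo's implementation: the fitness function (model_acc) and the discrete level grids are hypothetical stand-ins, and the selection/mutation scheme is a standard choice, assumed for illustration.

```python
import random

def evolution_search(model_acc, levels_per_tf, pop_size=10,
                     mutation_rate=0.3, n_gens=20, seed=0):
    """Evolution search (ES) sketch: keep a population of transformation
    tuples, select those that most hurt accuracy, and mutate them.

    pop_size corresponds to POP_SIZE; mutation_rate to ETA.
    """
    rng = random.Random(seed)

    def sample():
        return tuple(rng.choice(lv) for lv in levels_per_tf)

    def mutate(tf):
        # Resample each coordinate independently with probability mutation_rate.
        return tuple(rng.choice(lv) if rng.random() < mutation_rate else t
                     for t, lv in zip(tf, levels_per_tf))

    pop = [sample() for _ in range(pop_size)]
    for _ in range(n_gens):
        pop.sort(key=model_acc)              # lower accuracy = fitter attack
        parents = pop[:max(1, pop_size // 2)]
        children = [mutate(rng.choice(parents))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    best = min(pop, key=model_acc)
    return best, model_acc(best)
```

A larger population explores more of the transformation set per generation at the cost of more black-box queries; a larger mutation rate trades exploitation of good candidates for exploration.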

To test performance on all digit datasets (MNIST, SVHN, MNIST-M, SYN, USPS), run

python main.py --mode=test_all --gpu=GPU_IDX --exp_dir=EXP_DIR

Note that testing on MNIST-M, SYN, and USPS is commented out by default.

To include more transformations, or to explore different intensity intervals, modify transformations_ops.py; the required changes should be straightforward.
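To illustrate the kind of structure involved (the names, functions, and intervals below are hypothetical, not the repo's actual code), a transformation set can be declared as a mapping from transformation names to (function, intensity grid) pairs, so that adding a transformation or changing its interval is a one-line edit:

```python
import numpy as np

# Hypothetical transformation set: each entry pairs a per-image function
# with a discrete grid of intensity levels to be searched over.
TRANSFORMATION_SET = {
    "brightness": (lambda img, v: np.clip(img + v, 0.0, 1.0),
                   np.linspace(-0.5, 0.5, 11)),
    "contrast":   (lambda img, v: np.clip((img - 0.5) * v + 0.5, 0.0, 1.0),
                   np.linspace(0.5, 2.0, 11)),
}

def apply_chain(img, chain):
    """Apply a chain of (name, level) pairs to one image in [0, 1]."""
    for name, level in chain:
        fn, _levels = TRANSFORMATION_SET[name]
        img = fn(img, level)
    return img
```

With this layout, a search algorithm only needs the intensity grids, while the evaluation code only needs apply_chain.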

Reference

Addressing Model Vulnerability to Distributional Shifts over Image Transformation Sets
Riccardo Volpi and Vittorio Murino, ICCV 2019

    @InProceedings{Volpi_2019_ICCV,
      author    = {Volpi, Riccardo and Murino, Vittorio},
      title     = {Addressing Model Vulnerability to Distributional Shifts Over Image Transformation Sets},
      booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
      month     = {October},
      year      = {2019}
    }