
cassidylaidlaw / perceptual-advex

License: MIT
Code and data for the ICLR 2021 paper "Perceptual Adversarial Robustness: Defense Against Unseen Threat Models".

Programming Languages

Python
139,335 projects - #7 most used programming language
Jupyter Notebook
11,667 projects

Projects that are alternatives of or similar to perceptual-advex

square-attack
Square Attack: a query-efficient black-box adversarial attack via random search [ECCV 2020]
Stars: ✭ 89 (+102.27%)
Mutual labels:  robustness, adversarial-attacks
s-attack
[CVPR 2022] S-attack library. Official implementation of two papers "Vehicle trajectory prediction works, but not everywhere" and "Are socially-aware trajectory prediction models really socially-aware?".
Stars: ✭ 51 (+15.91%)
Mutual labels:  robustness, adversarial-attacks
POPQORN
An Algorithm to Quantify Robustness of Recurrent Neural Networks
Stars: ✭ 44 (+0%)
Mutual labels:  robustness, adversarial-attacks
DiagnoseRE
Source code and dataset for the CCKS 2021 paper "On Robustness and Bias Analysis of BERT-based Relation Extraction"
Stars: ✭ 23 (-47.73%)
Mutual labels:  robustness, adversarial-attacks
SimP-GCN
Implementation of the WSDM 2021 paper "Node Similarity Preserving Graph Convolutional Networks"
Stars: ✭ 43 (-2.27%)
Mutual labels:  robustness, adversarial-attacks
TIGER
Python toolbox to evaluate graph vulnerability and robustness (CIKM 2021)
Stars: ✭ 103 (+134.09%)
Mutual labels:  robustness, adversarial-attacks
Attack-ImageNet
No.2 solution of Tianchi ImageNet Adversarial Attack Challenge.
Stars: ✭ 41 (-6.82%)
Mutual labels:  imagenet, adversarial-attacks
NIPS-Global-Paper-Implementation-Challenge
Selective Classification For Deep Neural Networks.
Stars: ✭ 11 (-75%)
Mutual labels:  imagenet
robustness-vit
Contains code for the paper "Vision Transformers are Robust Learners" (AAAI 2022).
Stars: ✭ 78 (+77.27%)
Mutual labels:  robustness
alexnet-architecture.tensorflow
Unofficial TensorFlow implementation of "AlexNet" architecture.
Stars: ✭ 15 (-65.91%)
Mutual labels:  imagenet
image-classification
A collection of SOTA Image Classification Models in PyTorch
Stars: ✭ 70 (+59.09%)
Mutual labels:  imagenet
shortcut-perspective
Figures & code from the paper "Shortcut Learning in Deep Neural Networks" (Nature Machine Intelligence 2020)
Stars: ✭ 67 (+52.27%)
Mutual labels:  robustness
ijcnn19attacks
Adversarial Attacks on Deep Neural Networks for Time Series Classification
Stars: ✭ 57 (+29.55%)
Mutual labels:  adversarial-attacks
lambda.pytorch
PyTorch implementation of Lambda Network and pretrained Lambda-ResNet
Stars: ✭ 54 (+22.73%)
Mutual labels:  imagenet
Tiny-Imagenet-200
🔬 Some personal research code on analyzing CNNs. Started with a thorough exploration of Stanford's Tiny-Imagenet-200 dataset.
Stars: ✭ 68 (+54.55%)
Mutual labels:  imagenet
Adversarial-Examples-in-PyTorch
Pytorch code to generate adversarial examples on mnist and ImageNet data.
Stars: ✭ 112 (+154.55%)
Mutual labels:  imagenet
pigallery
PiGallery: AI-powered Self-hosted Secure Multi-user Image Gallery and Detailed Image analysis using Machine Learning, EXIF Parsing and Geo Tagging
Stars: ✭ 35 (-20.45%)
Mutual labels:  imagenet
simpleAICV-pytorch-ImageNet-COCO-training
SimpleAICV: PyTorch training examples on the ImageNet (ILSVRC2012), COCO2017, and VOC2007+2012 datasets. Includes ResNet/DarkNet/RetinaNet/FCOS/CenterNet/TTFNet/YOLOv3/YOLOv4/YOLOv5/YOLOX.
Stars: ✭ 276 (+527.27%)
Mutual labels:  imagenet
code-soup
This is a collection of algorithms and approaches used in the book adversarial deep learning
Stars: ✭ 18 (-59.09%)
Mutual labels:  adversarial-attacks
tf-imagenet
TensorFlow ImageNet - Training and SOTA checkpoints
Stars: ✭ 50 (+13.64%)
Mutual labels:  imagenet

Perceptual Adversarial Robustness

This repository contains code and data for the ICLR 2021 paper "Perceptual Adversarial Robustness: Defense Against Unseen Threat Models".

Installation

The code can be downloaded as this GitHub repository, which includes the scripts for running all experiments in the paper. Alternatively, it can be installed as a pip package, which includes the models, attacks, and other utilities.

As a repository

  1. Install Python 3.

  2. Clone the repository:

     git clone https://github.com/cassidylaidlaw/perceptual-advex.git
     cd perceptual-advex
    
  3. Install pip requirements:

     pip install -r requirements.txt
    

As a package

  1. Install Python 3.

  2. Install from PyPI:

     pip install perceptual-advex
    
  3. (Optional) Install AutoAttack if you want to use it with the package:

     pip install git+https://github.com/fra31/auto-attack#egg=autoattack
    
  4. Import the package as follows:

     from perceptual_advex.perceptual_attacks import FastLagrangePerceptualAttack
    

    See getting_started.ipynb or the Colab notebook below for examples of how to use the package.
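
For a quick illustration, the snippet below constructs an attack against a standard torchvision classifier. It is a minimal sketch rather than code from this repository: the torchvision model, the random batch, and the attack(inputs, labels) call signature should all be treated as assumptions; see getting_started.ipynb for the canonical usage.

import torch
import torchvision.models as models
from perceptual_advex.perceptual_attacks import FastLagrangePerceptualAttack

# Any differentiable PyTorch classifier should work; a torchvision ResNet-50
# is used here purely for illustration.
model = models.resnet50(pretrained=True).eval()

# Same attack and hyperparameters as the self-bounded PAT command below.
attack = FastLagrangePerceptualAttack(model, bound=0.5, num_iterations=10)

# Hypothetical batch: four images in [0, 1] with made-up labels.
inputs = torch.rand(4, 3, 224, 224)
labels = torch.randint(0, 1000, (4,))

# Assumed call signature: returns perturbed images within the LPIPS bound.
adv_inputs = attack(inputs, labels)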

Data and Pretrained Models

Download pretrained models from here.

Download perceptual study data from here.

Usage

This section explains how to get started with the code and how to run all of the experiments from the paper.

Getting Started

The getting_started.ipynb notebook shows how to load a pretrained model and construct perceptual adversarial examples for it. It is also available on Google Colab via the link below.

Open In Colab

Perceptual Adversarial Training (PAT)

The script adv_train.py can be used to perform Perceptual Adversarial Training (PAT) or to perform regular adversarial training. To train a ResNet-50 with self-bounded PAT on CIFAR-10:

python adv_train.py --batch_size 50 --arch resnet50 --dataset cifar --attack "FastLagrangePerceptualAttack(model, bound=0.5, num_iterations=10)" --only_attack_correct

This will create a directory data/logs, which will contain TensorBoard logs and checkpoints for each epoch.
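
If TensorBoard is installed, training can be monitored from this directory with the standard invocation:

tensorboard --logdir data/logs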

To train a ResNet-50 with self-bounded PAT on ImageNet-100:

python adv_train.py --parallel 4 --batch_size 128 --dataset imagenet100 --dataset_path /path/to/ILSVRC2012 --arch resnet50 --attack "FastLagrangePerceptualAttack(model, bound=0.25, num_iterations=10)" --only_attack_correct

This assumes 4 GPUs are available; you can change the number with the --parallel argument. To train a ResNet-50 with AlexNet-bounded PAT on ImageNet-100, run:

python adv_train.py --parallel 4 --batch_size 128 --dataset imagenet100 --dataset_path /path/to/ILSVRC2012 --arch resnet50 --attack "FastLagrangePerceptualAttack(model, bound=0.25, num_iterations=10, lpips_model='alexnet')" --only_attack_correct
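
Conceptually, each PAT training step first generates adversarial examples inside the perceptual (LPIPS) ball defined by the attack's bound and then trains the classifier on them. The helper below is an illustrative sketch, not the adv_train.py implementation (which, for example, also supports --only_attack_correct); pat_step and the attack(inputs, labels) call signature are assumptions.

import torch.nn.functional as F

def pat_step(model, attack, optimizer, inputs, labels):
    # Generate adversarial examples within the attack's LPIPS bound
    # (assumed call signature: attack(inputs, labels) -> perturbed inputs).
    adv_inputs = attack(inputs, labels)

    # Standard supervised step on the perturbed batch.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(adv_inputs), labels)
    loss.backward()
    optimizer.step()
    return loss.item()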

Generating Perceptual Adversarial Attacks

The script generate_examples.py will generate adversarially attacked images. For instance, to generate adversarial examples with the Perceptual Projected Gradient Descent (PPGD) and Lagrange Perceptual Attack (LPA) attacks on ImageNet, run:

python generate_examples.py --dataset imagenet --arch resnet50 --checkpoint pretrained --batch_size 20 --shuffle --layout horizontal_alternate --dataset_path /path/to/ILSVRC2012 --output examples.png \
"PerceptualPGDAttack(model, bound=0.5, num_iterations=40, lpips_model='alexnet')" \
"LagrangePerceptualAttack(model, bound=0.5, num_iterations=40, lpips_model='alexnet')"

This will create an image called examples.png with three columns. The first contains the unmodified original images from the ImageNet test set. The second and third contain the adversarially attacked images and magnified differences from the originals for the PPGD and LPA attacks, respectively.
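
The magnified-difference panels can be approximated with a few lines of tensor arithmetic; the sketch below shows the general idea, with the magnification factor and the rendering around mid-gray as illustrative choices rather than the script's exact settings.

import torch

def magnified_difference(original, adversarial, factor=5.0):
    # Exaggerate the (signed) perturbation around mid-gray so that small
    # perceptual changes become visible, then clamp back to the image range.
    diff = adversarial - original
    return torch.clamp(0.5 + factor * diff, 0.0, 1.0)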

Arguments

  • --dataset can be cifar for CIFAR-10, imagenet100 for ImageNet-100, or imagenet for full ImageNet.
  • --arch can be resnet50 (or resnet34, etc.) or alexnet.
  • --checkpoint can be pretrained to use the pretrained torchvision model. Otherwise, it should be a path to a pretrained model, such as those from the robustness library.
  • --batch_size indicates how many images to attack.
  • --layout controls the layout of the resulting image. It can be vertical, vertical_alternate, or horizontal_alternate.
  • --output specifies where the resulting image should be stored.
  • The remainder of the arguments specify attacks using Python expressions that reference the loaded classifier as model. See the perceptual_advex.attacks and perceptual_advex.perceptual_attacks modules for a full list of available attacks and their arguments; a sketch of how such an expression can be turned into an attack object follows this list.
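
One plausible way for a script to interpret these attack expressions is to evaluate them with the attack classes and the loaded model in scope. The helper below is a hedged sketch of that mechanism, not necessarily how generate_examples.py or evaluate_trained_model.py implement it; build_attack is an illustrative name.

from perceptual_advex import attacks, perceptual_attacks

def build_attack(expression, model):
    # Evaluate e.g. "LagrangePerceptualAttack(model, bound=0.5, num_iterations=40)"
    # with the loaded model and all attack classes available by name.
    namespace = {'model': model}
    namespace.update(vars(attacks))
    namespace.update(vars(perceptual_attacks))
    return eval(expression, namespace)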

Evaluation

The script evaluate_trained_model.py evaluates a model against a set of attacks. The arguments are similar to generate_examples.py (see above). For instance, to evaluate the torchvision pretrained ResNet-50 against PPGD and LPA, run:

python evaluate_trained_model.py --dataset imagenet --arch resnet50 --checkpoint pretrained --batch_size 50 --dataset_path /path/to/ILSVRC2012 --output evaluation.csv \
"PerceptualPGDAttack(model, bound=0.5, num_iterations=40, lpips_model='alexnet')" \
"LagrangePerceptualAttack(model, bound=0.5, num_iterations=40, lpips_model='alexnet')"

CIFAR-10

The following command was used to evaluate CIFAR-10 classifiers for Tables 2, 6, 7, 8, and 9 in the paper:

python evaluate_trained_model.py --dataset cifar --checkpoint /path/to/checkpoint.pt --arch resnet50 --batch_size 100 --output evaluation.csv \
"NoAttack()" \
"AutoLinfAttack(model, 'cifar', bound=8/255)" \
"AutoL2Attack(model, 'cifar', bound=1)" \
"StAdvAttack(model, num_iterations=100)" \
"ReColorAdvAttack(model, num_iterations=100)" \
"PerceptualPGDAttack(model, num_iterations=40, bound=0.5, lpips_model='alexnet_cifar', projection='newtons')" \
"LagrangePerceptualAttack(model, num_iterations=40, bound=0.5, lpips_model='alexnet_cifar', projection='newtons')"

ImageNet-100

The following command was used to evaluate ImageNet-100 classifiers for Table 3 in the paper, which shows the robustness of various models against several attacks at the medium perceptibility bound:

python evaluate_trained_model.py --dataset imagenet100 --dataset_path /path/to/ILSVRC2012 --checkpoint /path/to/checkpoint.pt --arch resnet50 --batch_size 50 --output evaluation.csv \
"NoAttack()" \
"AutoLinfAttack(model, 'imagenet100', bound=4/255)" \
"AutoL2Attack(model, 'imagenet100', bound=1200/255)" \
"JPEGLinfAttack(model, 'imagenet100', bound=0.125, num_iterations=200)" \
"StAdvAttack(model, bound=0.05, num_iterations=200)" \
"ReColorAdvAttack(model, bound=0.06, num_iterations=200)" \
"PerceptualPGDAttack(model, bound=0.5, lpips_model='alexnet', num_iterations=40)" \
"LagrangePerceptualAttack(model, bound=0.5, lpips_model='alexnet', num_iterations=40)"

Citation

If you find this repository useful for your research, please cite our paper as follows:

@inproceedings{laidlaw2021perceptual,
  title={Perceptual Adversarial Robustness: Defense Against Unseen Threat Models},
  author={Laidlaw, Cassidy and Singla, Sahil and Feizi, Soheil},
  booktitle={ICLR},
  year={2021}
}

Contact

For questions about the paper or code, please contact [email protected].
