
csdongxian / skip-connections-matter

License: MIT
Code for the ICLR 2020 paper "Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets"

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives to or similar to skip-connections-matter

temporal-depth-segmentation
Source code (train/test) accompanying the paper entitled "Veritatem Dies Aperit - Temporally Consistent Depth Prediction Enabled by a Multi-Task Geometric and Semantic Scene Understanding Approach" in CVPR 2019 (https://arxiv.org/abs/1903.10764).
Stars: ✭ 20 (-67.21%)
Mutual labels:  skip-connections
TailCalibX
PyTorch implementation of "Feature Generation for Long-Tail Classification" by Rahul Vigneswaran, Marc T. Law, Vineeth N. Balasubramanian, and Makarand Tapaswi
Stars: ✭ 32 (-47.54%)
Mutual labels:  iclr
deep-weight-prior
The Deep Weight Prior, ICLR 2019
Stars: ✭ 42 (-31.15%)
Mutual labels:  iclr
hierarchical-dnn-interpretations
Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019)
Stars: ✭ 110 (+80.33%)
Mutual labels:  iclr
CGCF-ConfGen
🧪 Learning Neural Generative Dynamics for Molecular Conformation Generation (ICLR 2021)
Stars: ✭ 41 (-32.79%)
Mutual labels:  iclr
WhitenBlackBox
Towards Reverse-Engineering Black-Box Neural Networks, ICLR'18
Stars: ✭ 47 (-22.95%)
Mutual labels:  iclr
icml-nips-iclr-dataset
Papers, authors and author affiliations from ICML, NeurIPS and ICLR 2006-2021
Stars: ✭ 21 (-65.57%)
Mutual labels:  iclr
Awesome-Computer-Vision-Paper-List
This repository lists the papers accepted at top computer vision conferences, making it convenient to search for related papers.
Stars: ✭ 248 (+306.56%)
Mutual labels:  iclr
cool-papers-in-pytorch
Reimplementing cool papers in PyTorch...
Stars: ✭ 21 (-65.57%)
Mutual labels:  iclr

Skip Connections Matter

This repository contains the code for Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets (ICLR 2020 Spotlight).

News

  • 11/18/2020 - We released more code and the dataset used in our paper (a subset of 5,000 images from ImageNet) to help reproduce the results reported in the paper.
  • 02/20/2020 - Paper posted to arXiv and repository released.

Method

We propose the Skip Gradient Method (SGM) to generate adversarial examples using gradients that flow more through the skip connections than through the residual modules. In particular, SGM uses a decay factor (gamma) to reduce the gradients from the residual modules.
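
Concretely, for residual units of the form z_{i+1} = z_i + f_{i+1}(z_i), the gradient of the loss with respect to the input x factorizes across blocks, and SGM applies gamma only to the residual term (paraphrasing the paper's formulation, with gamma in (0, 1]):

    \frac{\partial \mathrm{loss}}{\partial x}
      = \frac{\partial \mathrm{loss}}{\partial z_L}
        \left( \prod_{i=0}^{L-1} \left( \gamma \, \frac{\partial f_{i+1}}{\partial z_i} + 1 \right) \right)
        \frac{\partial z_0}{\partial x},
      \qquad \gamma \in (0, 1].

With gamma = 1 this reduces to the ordinary gradient; with gamma < 1 every residual-branch term is shrunk while the skip term (the 1) is untouched, so the crafted perturbation relies more on gradients that travel through skip connections.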

Requirements

This code is implemented in PyTorch, and we have tested it under the following environment settings (a quick version check follows the list):

  • python = 3.7.6
  • torch = 1.7.0
  • torchvision = 0.8.1
  • advertorch = 0.2.2
  • pretrainedmodels = 0.7.4
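
The snippet below is a minimal sanity check for the tested environment; it only inspects torch and torchvision, both of which expose a __version__ attribute:

    import torch
    import torchvision

    # Confirm the interpreter matches the tested environment listed above.
    print("torch:", torch.__version__)              # tested with 1.7.0
    print("torchvision:", torchvision.__version__)  # tested with 0.8.1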

Run the code

  1. Download the dataset from Google Drive or Baidu Drive (pw: 55rk), and extract the images to the path ./SubImageNet224/.
  2. Generate adversarial examples and save them to the path ./adv_images/. For ResNet-152 as the source model:
    python attack_sgm.py --gamma 0.2 --output_dir adv_images --arch resnet152 --batch-size 40
    For DenseNet-201 as the source model:
    python attack_sgm.py --gamma 0.5 --output_dir adv_images --arch densenet201 --batch-size 40
  3. Evaluate the transferability of the generated adversarial examples in ./adv_images/. For VGG19 with batch norm as the target model (a sweep over several target models is sketched after this list):
    python evaluate.py --input_dir adv_images --arch vgg19_bn
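
To check transferability against several target models in one run, a small driver script can loop over architectures. This is a sketch, not part of the repository; the architecture names are assumptions that follow the pretrainedmodels naming used elsewhere in this README:

    import subprocess

    # Hypothetical sweep: evaluate the same adversarial examples against
    # several target models (arch names assume pretrainedmodels naming).
    for arch in ["vgg19_bn", "resnet152", "densenet201", "inceptionv3"]:
        subprocess.run(
            ["python", "evaluate.py", "--input_dir", "adv_images", "--arch", arch],
            check=True,
        )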

Results

Visualization

  (figure not included)

Reproduced results

We ran this code, and the attack success rate (1 - accuracy) against VGG19 is close to that reported in our paper:

Source \ Method   PGD      MI       SGM
ResNet-152        45.80%   66.70%   81.04%
DenseNet-201      57.82%   75.38%   82.58%

Implementation

For easier reproduction, we provide more detailed information here.

Register backward hooks for SGM

In practice, we manipulate the gradients flowing through the ReLU modules in utils_sgm, since there is no ReLU in the skip connections (a sketch of the hook mechanism follows the list below):

  • For ResNet, there are "downsampling" modules in which the skip connection is replaced by a conv layer; we do not manipulate the gradients of these "downsampling" modules.

  • For DenseNet, we manipulate the gradients in all dense blocks.
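
The sketch below illustrates this hook mechanism under the PyTorch 1.7 API listed above; register_sgm_hooks and the "downsample" name filter are illustrative, not the exact utils_sgm code:

    import torch.nn as nn

    def backward_hook(gamma):
        # Scale the gradient flowing back through a ReLU by gamma. Since
        # skip connections contain no ReLU, only the residual branch decays.
        def _hook(module, grad_in, grad_out):
            return (gamma * grad_in[0],)
        return _hook

    def register_sgm_hooks(model, gamma=0.2):
        # Illustrative helper (hypothetical name): hook every ReLU except
        # those under "downsampling" modules, whose skip path is a conv layer.
        for name, module in model.named_modules():
            if isinstance(module, nn.ReLU) and "downsample" not in name:
                module.register_backward_hook(backward_hook(gamma))

For example, calling register_sgm_hooks(model, gamma=0.2) on a ResNet-152 before computing gradients would apply the decay during the attack.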

Pretrained models

All pretrained models used in our paper can be found online, e.g., via the torchvision and pretrainedmodels packages listed above.

Citing this work

@inproceedings{wu2020skip,
    title={Skip connections matter: On the transferability of adversarial examples generated with resnets},
    author={Wu, Dongxian and Wang, Yisen and Xia, Shu-Tao and Bailey, James and Ma, Xingjun},
    booktitle={ICLR},
    year={2020}
}

Reference

[1] Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Hang Su, Jun Zhu, Xiaolin Hu, and Jianguo Li. Boosting adversarial attacks with momentum. In CVPR, 2018.

[2] Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel. Ensemble adversarial training: Attacks and defenses. In ICLR, 2018.
