A Closer Look at Accuracy vs. Robustness

This repo contains the implementation of experiments in the paper

A Closer Look at Accuracy vs. Robustness

Authors: Yao-Yuan Yang*, Cyrus Rashtchian*, Hongyang Zhang, Ruslan Salakhutdinov, Kamalika Chaudhuri (* equal contribution)

Appeared in NeurIPS 2020.

Abstract

Current methods for training robust networks lead to a drop in test accuracy, which has led prior works to posit that a robustness-accuracy tradeoff may be inevitable in deep learning. We take a closer look at this phenomenon and first show that real image datasets are actually separated. With this property in mind, we then prove that robustness and accuracy should both be achievable for benchmark datasets through locally Lipschitz functions, and hence, there should be no inherent tradeoff between robustness and accuracy. Through extensive experiments with robustness methods, we argue that the gap between theory and practice arises from two limitations of current methods: either they fail to impose local Lipschitzness or they are insufficiently generalized. We explore combining dropout with robust training methods and obtain better generalization. We conclude that achieving robustness and accuracy in practice may require using methods that impose local Lipschitzness and augmenting them with deep learning generalization techniques.
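A central quantity in the paper is the local Lipschitzness of the learned function. The following is a minimal PyTorch sketch (not the repo's code) of how an empirical local Lipschitz constant can be estimated by gradient ascent inside a small L-infinity ball; the function name, the step sizes, and the choice of L1 output norm are illustrative assumptions.

import torch

def local_lipschitz(model, x, eps=0.031, step_size=0.0078, n_steps=10):
    # Approximates the average over x of
    #   max_{||d||_inf <= eps} ||f(x + d) - f(x)||_1 / ||d||_inf
    # by projected gradient ascent on the output change.
    fx = model(x).detach()
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(n_steps):
        loss = (model(x + delta) - fx).abs().flatten(1).sum(dim=1).sum()
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += step_size * grad.sign()  # ascent step in L-inf geometry
            delta.clamp_(-eps, eps)           # project back into the eps-ball
    with torch.no_grad():
        num = (model(x + delta) - fx).abs().flatten(1).sum(dim=1)
        den = delta.abs().flatten(1).max(dim=1).values.clamp_min(1e-12)
        return (num / den).mean().item()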

Setup

Install the required libraries

pip install -r ./requirements.txt

Install cleverhans from its GitHub repository

pip install --upgrade git+https://github.com/tensorflow/cleverhans.git#egg=cleverhans

Generate the Restricted ImageNet dataset

Use the script ./scripts/restrictedImgNet.py to generate the Restricted ImageNet dataset and place the data in ./data/RestrictedImgNet/ in a torchvision ImageFolder-readable format. For more details, see lolip/dataset/__init__.py.
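Once generated, the data can be read with torchvision's standard ImageFolder loader. A minimal sketch, where the train/ subdirectory name and the 112x112 resize are assumptions (the dataset name resImgnet112v3 used below suggests 112x112 inputs):

from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((112, 112)),  # assumed input resolution
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("./data/RestrictedImgNet/train", transform=transform)
loader = DataLoader(dataset, batch_size=128, shuffle=True, num_workers=4)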

Repository structure

Parameters

The default training parameters are set in lolip/models/__init__.py

The network architectures are defined in lolip/models/torch_utils/archs.py

Algorithm implementations

Defense Algorithms

Attack Algorithms

Example options for the model parameter (a sketch composing these strings follows the list):

arch: ("CNN001", "CNN002", "WRN_40_10", "WRN_40_10_drop20", "WRN_40_10_drop50", "ResNet50", "ResNet50_drop50")

  • Natural: ce-tor-{arch}
  • TRADES(beta=6): strades6ce-tor-{arch}
  • adversarial training: advce-tor-{arch}
  • RST(lambda=2): advbeta2ce-tor-{arch}
  • TULIP(gradient regularization): tulipce-tor-{arch}
  • LLR: sllrce-tor-{arch}
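For illustration, a hypothetical helper (not part of the repo) showing how these strings are composed; optimizer and batch-size suffixes such as -adambs128 appear in the Restricted ImageNet example below:

def model_string(method, arch, optimizer=None, batch_size=None):
    # Hypothetical helper; only illustrates the --model naming convention.
    name = f"{method}-tor-{arch}"
    if optimizer and batch_size:
        name += f"-{optimizer}bs{batch_size}"
    return name

assert model_string("strades6ce", "WRN_40_10") == "strades6ce-tor-WRN_40_10"
assert model_string("advce", "ResNet50", "adam", 128) == "advce-tor-ResNet50-adambs128"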

Examples

Run Natural training with CNN001 on the MNIST dataset. The perturbation distance is set to 0.1 in the L-infinity norm, the batch size is 64, and the SGD optimizer is used with default parameters.

python ./main.py --experiment experiment01 \
  --no-hooks \
  --norm inf --eps 0.1 \
  --dataset mnist \
  --model ce-tor-CNN001 \
  --attack pgd \
  --random_seed 0

Run TRADES (beta=6) with Wide ResNet 40-10 on the CIFAR-10 dataset. The perturbation distance is set to 0.031 in the L-infinity norm, the batch size is 64, and the SGD optimizer is used.

python ./main.py --experiment experiment01 \
  --no-hooks \
  --norm inf --eps 0.031 \
  --dataset cifar10 \
  --model strades6ce-tor-WRN_40_10 \
  --attack pgd \
  --random_seed 0

Run adversarial training with ResNet50 on the Restricted ImageNet dataset, attacking with PGD. The perturbation distance is set to 0.005 in the L-infinity norm, the batch size is 128, and the Adam optimizer is used.

python ./main.py --experiment restrictedImgnet \
  --no-hooks \
  --norm inf --eps 0.005 \
  --dataset resImgnet112v3 \
  --model advce-tor-ResNet50-adambs128 \
  --attack pgd \
  --random_seed 0
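All three examples attack with PGD (--attack pgd). For reference, here is a minimal sketch of a standard L-infinity PGD attack on the cross-entropy loss; the repo's own implementation may differ in details such as random restarts and the step schedule:

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.031, step_size=0.007, n_steps=20):
    # assumes inputs are scaled to [0, 1]
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(n_steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += step_size * grad.sign()          # ascend the loss
            delta.clamp_(-eps, eps)                   # project into the eps-ball
            delta.copy_((x + delta).clamp(0, 1) - x)  # keep pixels valid
    return (x + delta).detach()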

Reproducing Results

Scripts

Appendix C: Proof-of-concept classifier

Run robust self-training (RST, lambda=2) with Wide ResNet 40-10 on the CIFAR-10 dataset. The perturbation distance is set to 0.031 in the L-infinity norm, the batch size is 64, and the SGD optimizer is used.

python ./main.py --experiment hypo \
  --no-hooks \
  --norm inf --eps 0.031 \
  --dataset cifar10 \
  --model advbeta2ce-tor-WRN_40_10 \
  --attack pgd \
  --random_seed 0