
Harry24k / PGD-pytorch

License: MIT
A PyTorch implementation of "Towards Deep Learning Models Resistant to Adversarial Attacks"

Programming Languages

Jupyter Notebook

Projects that are alternatives of or similar to PGD-pytorch

Nlpaug
Data augmentation for NLP
Stars: ✭ 2,761 (+3226.51%)
Mutual labels:  adversarial-attacks
geometric adv
Geometric Adversarial Attacks and Defenses on 3D Point Clouds (3DV 2021)
Stars: ✭ 20 (-75.9%)
Mutual labels:  adversarial-attacks
chop
CHOP: An optimization library based on PyTorch, with applications to adversarial examples and structured neural network training.
Stars: ✭ 68 (-18.07%)
Mutual labels:  adversarial-attacks
Adversarial Robustness Toolbox
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
Stars: ✭ 2,638 (+3078.31%)
Mutual labels:  adversarial-attacks
SimP-GCN
Implementation of the WSDM 2021 paper "Node Similarity Preserving Graph Convolutional Networks"
Stars: ✭ 43 (-48.19%)
Mutual labels:  adversarial-attacks
AdvPC
AdvPC: Transferable Adversarial Perturbations on 3D Point Clouds (ECCV 2020)
Stars: ✭ 35 (-57.83%)
Mutual labels:  adversarial-attacks
square-attack
Square Attack: a query-efficient black-box adversarial attack via random search [ECCV 2020]
Stars: ✭ 89 (+7.23%)
Mutual labels:  adversarial-attacks
FLAT
[ICCV2021 Oral] Fooling LiDAR by Attacking GPS Trajectory
Stars: ✭ 52 (-37.35%)
Mutual labels:  adversarial-attacks
generative adversary
Code for the unrestricted adversarial examples paper (NeurIPS 2018)
Stars: ✭ 58 (-30.12%)
Mutual labels:  adversarial-attacks
hard-label-attack
Natural Language Attacks in a Hard Label Black Box Setting.
Stars: ✭ 26 (-68.67%)
Mutual labels:  adversarial-attacks
grb
Graph Robustness Benchmark: A scalable, unified, modular, and reproducible benchmark for evaluating the adversarial robustness of Graph Machine Learning.
Stars: ✭ 70 (-15.66%)
Mutual labels:  adversarial-attacks
Pro-GNN
Implementation of the KDD 2020 paper "Graph Structure Learning for Robust Graph Neural Networks"
Stars: ✭ 202 (+143.37%)
Mutual labels:  adversarial-attacks
flowattack
Attacking Optical Flow (ICCV 2019)
Stars: ✭ 58 (-30.12%)
Mutual labels:  adversarial-attacks
Foolbox
A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
Stars: ✭ 2,108 (+2439.76%)
Mutual labels:  adversarial-attacks
trojanzoo
TrojanZoo provides a universal pytorch platform to conduct security researches (especially backdoor attacks/defenses) of image classification in deep learning.
Stars: ✭ 178 (+114.46%)
Mutual labels:  adversarial-attacks
T3
[EMNLP 2020] "T3: Tree-Autoencoder Constrained Adversarial Text Generation for Targeted Attack" by Boxin Wang, Hengzhi Pei, Boyuan Pan, Qian Chen, Shuohang Wang, Bo Li
Stars: ✭ 25 (-69.88%)
Mutual labels:  adversarial-attacks
POPQORN
An Algorithm to Quantify Robustness of Recurrent Neural Networks
Stars: ✭ 44 (-46.99%)
Mutual labels:  adversarial-attacks
AWP
Codes for NeurIPS 2020 paper "Adversarial Weight Perturbation Helps Robust Generalization"
Stars: ✭ 114 (+37.35%)
Mutual labels:  adversarial-attacks
procedural-advml
Task-agnostic universal black-box attacks on computer vision neural network via procedural noise (CCS'19)
Stars: ✭ 47 (-43.37%)
Mutual labels:  adversarial-attacks
KitanaQA
KitanaQA: Adversarial training and data augmentation for neural question-answering models
Stars: ✭ 58 (-30.12%)
Mutual labels:  adversarial-attacks

PGD-pytorch

A PyTorch implementation of "Towards Deep Learning Models Resistant to Adversarial Attacks"

Summary

This code is a PyTorch implementation of the PGD attack.
In this code, I use the method above to fool Inception v3.
A 'Giant Panda' image is used as the example.
You can add other pictures by placing them in a folder named after their label under 'data/imagenet'. A minimal sketch of the attack is shown below.
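For orientation, here is a minimal sketch of an L∞ PGD attack in PyTorch, in the spirit of the paper implemented by this repository. The function name pgd_attack and the hyperparameter values (eps=8/255, alpha=2/255, 40 steps) are illustrative assumptions, not this repository's exact code.

```python
import torch
import torch.nn as nn

def pgd_attack(model, images, labels, eps=8/255, alpha=2/255, steps=40):
    """Illustrative L-infinity PGD sketch (not the repo's exact code)."""
    loss_fn = nn.CrossEntropyLoss()

    # Random start inside the epsilon ball, as proposed by Madry et al.
    adv = images.clone().detach()
    adv = adv + torch.empty_like(adv).uniform_(-eps, eps)
    adv = torch.clamp(adv, 0, 1).detach()

    for _ in range(steps):
        adv.requires_grad_(True)
        loss = loss_fn(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]

        # Ascend the loss, then project back onto the epsilon ball and valid pixel range.
        adv = adv.detach() + alpha * grad.sign()
        adv = torch.max(torch.min(adv, images + eps), images - eps)
        adv = torch.clamp(adv, 0, 1)

    return adv.detach()
```

With a pretrained Inception v3 from torchvision in eval() mode, adversarial images can then be obtained as adv = pgd_attack(model, images, labels). Note that clamping to [0, 1] only makes sense if normalization is handled separately, e.g. folded into the model's forward pass.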

Requirements

  • python==3.6
  • numpy==1.14.2
  • pytorch==1.0.1

Important results not in the code

  • Capacity (the size of the network) plays an important role in adversarial training. (pp. 9-10)
    • When training on natural examples only, larger capacity increases robustness against one-step perturbations.
    • For PGD adversarial training, small-capacity networks fail.
    • As capacity increases, the model can fit the adversarial examples increasingly well.
    • More capacity and stronger adversaries decrease transferability. (Section B)
  • FGSM adversaries don't increase robustness for a large epsilon (=8). (pp. 9-10)
    • The network overfits to the FGSM adversarial examples.
  • Adversarial training with PGD provides a sufficiently strong defense. (pp. 12-13; a training-loop sketch follows below)
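The last bullet refers to PGD adversarial training, i.e. approximately solving the paper's min-max problem by training on PGD-perturbed minibatches. Below is a minimal sketch of one training epoch, reusing the illustrative pgd_attack function from the Summary; the function name, the 7 attack steps, and the eval/train toggling are assumptions for illustration, not this repository's code.

```python
import torch

def adversarial_train_epoch(model, loader, optimizer, device,
                            eps=8/255, alpha=2/255, steps=7):
    """One epoch of PGD adversarial training (illustrative sketch)."""
    loss_fn = torch.nn.CrossEntropyLoss()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)

        # Craft adversarial examples against the current model.
        # eval() avoids updating BatchNorm statistics during the attack.
        model.eval()
        adv = pgd_attack(model, images, labels, eps=eps, alpha=alpha, steps=steps)

        # Update the model on the adversarial minibatch (inner max, outer min).
        model.train()
        optimizer.zero_grad()
        loss = loss_fn(model(adv), labels)
        loss.backward()
        optimizer.step()
```

Generating the attack with the current weights at every step is what makes the defense adaptive; training only on precomputed or FGSM adversarial examples leads to the overfitting behavior noted above.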
