
utkuozbulak / pytorch-cnn-adversarial-attacks

License: MIT
PyTorch implementation of convolutional neural network adversarial attack techniques

Programming Languages

python

Projects that are alternatives to or similar to pytorch-cnn-adversarial-attacks

pytorch-beautiful-ml-data
PyData Global Tutorial on Data Patterns and OOP abstractions for Deep Learning using PyTorch
Stars: ✭ 13 (-95.86%)
Mutual labels:  pytorch-tutorial
fast-style-transfer-tutorial-pytorch
Simple Tutorials & Code Implementation of fast-style-transfer(Perceptual Losses for Real-Time Style Transfer and Super-Resolution, 2016 ECCV) using PyTorch.
Stars: ✭ 18 (-94.27%)
Mutual labels:  pytorch-tutorial
Pytorch Lesson Zh
A PyTorch tutorial series in Chinese (the title jokes: "we teach, but can't guarantee you'll learn")
Stars: ✭ 279 (-11.15%)
Mutual labels:  pytorch-tutorial
pytorch-tutorial
A quick-start tutorial for deep learning with PyTorch (in Chinese; absolutely easy to follow!)
Stars: ✭ 798 (+154.14%)
Mutual labels:  pytorch-tutorial
caltech birds
A set of notebooks as a guide to the process of fine-grained image classification of birds species, using PyTorch based deep neural networks.
Stars: ✭ 29 (-90.76%)
Mutual labels:  pytorch-tutorial
Pytorch Cheatsheet
Check out improved:
Stars: ✭ 256 (-18.47%)
Mutual labels:  pytorch-tutorial
PyTorchStepByStep
Official repository of my book: "Deep Learning with PyTorch Step-by-Step: A Beginner's Guide"
Stars: ✭ 314 (+0%)
Mutual labels:  pytorch-tutorial
Capsule net pytorch
Readable implementation of a Capsule Network as described in "Dynamic Routing Between Capsules" [Hinton et al.]
Stars: ✭ 301 (-4.14%)
Mutual labels:  pytorch-tutorial
Ensemble-Pytorch
A unified ensemble framework for PyTorch to improve the performance and robustness of your deep learning model.
Stars: ✭ 407 (+29.62%)
Mutual labels:  pytorch-tutorial
Pytorch Image Classification
Tutorials on how to implement a few key architectures for image classification using PyTorch and TorchVision.
Stars: ✭ 272 (-13.38%)
Mutual labels:  pytorch-tutorial
pytorch-cpp-tutorial
Example code to create and train a Pytorch model using the new C++ frontend.
Stars: ✭ 16 (-94.9%)
Mutual labels:  pytorch-tutorial
Deep-Learning-with-PyTorch-A-60-Minute-Blitz-cn
PyTorch 1.0 deep learning: a 60-minute introduction with hands-on practice (a Chinese translation and study of "Deep Learning with PyTorch: A 60 Minute Blitz")
Stars: ✭ 127 (-59.55%)
Mutual labels:  pytorch-tutorial
A Pytorch Tutorial To Sequence Labeling
Empower Sequence Labeling with Task-Aware Neural Language Model | a PyTorch Tutorial to Sequence Labeling
Stars: ✭ 257 (-18.15%)
Mutual labels:  pytorch-tutorial
tutorials-kr
🇰🇷 A repository for translating the official PyTorch tutorials into Korean.
Stars: ✭ 271 (-13.69%)
Mutual labels:  pytorch-tutorial
Thinking In Tensors Writing In Pytorch
Thinking in tensors, writing in PyTorch (a hands-on deep learning intro)
Stars: ✭ 287 (-8.6%)
Mutual labels:  pytorch-tutorial
ATA-GAN
Demo code for Attention-Aware Generative Adversarial Networks paper
Stars: ✭ 13 (-95.86%)
Mutual labels:  pytorch-tutorial
Multi-Agent-Diverse-Generative-Adversarial-Networks
Easy-to-follow Pytorch tutorial Notebook for Multi-Agent-Diverse-Generative-Adversarial-Networks
Stars: ✭ 23 (-92.68%)
Mutual labels:  pytorch-tutorial
Blitz Bayesian Deep Learning
A simple and extensible library to create Bayesian Neural Network layers on PyTorch.
Stars: ✭ 305 (-2.87%)
Mutual labels:  pytorch-tutorial
Pytorch Nlp Notebooks
Learn how to use PyTorch to solve some common NLP problems with deep learning.
Stars: ✭ 293 (-6.69%)
Mutual labels:  pytorch-tutorial
Pytorch Kaggle Starter
Pytorch starter kit for Kaggle competitions
Stars: ✭ 268 (-14.65%)
Mutual labels:  pytorch-tutorial

Convolutional Neural Network Adversarial Attacks

Note: I am aware that there are some issues with the code; I will update this repository soon (and will also move from cv2 to PIL).

This repo branched off from CNN Visualisations because that repository was starting to get bloated. It contains the following CNN adversarial attacks implemented in PyTorch:

  • Fast Gradient Sign, Untargeted [1]
  • Fast Gradient Sign, Targeted [1]
  • Gradient Ascent, Adversarial Images [2]
  • Gradient Ascent, Fooling Images (Unrecognizable images predicted as classes with high confidence) [2]

More adversarial attack and defense techniques will be added in the future.

The code uses the pretrained AlexNet from the model zoo. You can swap in your own model, but don't forget to change the target class parameters as well.

All images are pre-processed with the mean and std of the ImageNet dataset before being fed to the model. None of the code uses the GPU, as these operations are quite fast for a single image; you can make use of a GPU with very little effort. In the examples below, the number in brackets after a description, like Mastiff (243), is the class id in the ImageNet dataset.
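For reference, the preprocessing step can be sketched in plain PyTorch as below. The mean/std constants are the standard ImageNet channel statistics; the function names are illustrative, not the repository's actual API:

```python
import torch

# Standard ImageNet channel statistics (RGB order)
IMAGENET_MEAN = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)
IMAGENET_STD = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)

def preprocess(img):
    """Normalize a float image tensor in [0, 1] with shape (3, H, W)."""
    return (img - IMAGENET_MEAN) / IMAGENET_STD

def deprocess(img):
    """Undo the normalization, e.g. before saving an adversarial image."""
    return img * IMAGENET_STD + IMAGENET_MEAN
```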

I tried to comment the code as much as possible; if you have any issues understanding or porting it, don't hesitate to reach out.

Below are sample results for each operation.

Fast Gradient Sign - Untargeted

In this operation we update the original image with the sign of the gradient received at the first layer, i.e., the gradient with respect to the input image. The untargeted version aims to reduce the confidence of the initial class. The loop stops as soon as the image is no longer classified as the original label.
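The loop above can be sketched as follows. This is a minimal illustration, not the repository's actual API; the function name, `alpha` (step size), and `max_steps` are assumptions:

```python
import torch
import torch.nn.functional as F

def fgsm_untargeted(model, image, label, alpha=0.01, max_steps=10):
    """Repeatedly add alpha * sign(grad of loss w.r.t. the input) until
    the model's prediction leaves the original class. Sketch only."""
    model.eval()
    adv = image.clone().detach().requires_grad_(True)
    for _ in range(max_steps):
        out = model(adv)
        if out.argmax(dim=1).item() != label:
            break  # image no longer classified as the original label
        loss = F.cross_entropy(out, torch.tensor([label]))
        model.zero_grad()
        loss.backward()
        with torch.no_grad():
            adv = adv + alpha * adv.grad.sign()  # ascend the loss
        adv.requires_grad_(True)
    return adv.detach()
```

Because we ascend the cross-entropy loss of the original label, confidence in that class drops until the prediction flips.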

Sample results (images omitted):

  • Predicted as Eel (390), confidence 0.96 → with adversarial noise, predicted as Blowfish (397), confidence 0.81
  • Predicted as Snowbird (13), confidence 0.99 → with adversarial noise, predicted as Chickadee (19), confidence 0.95

Fast Gradient Sign - Targeted

The targeted version of FGS works almost the same as the untargeted version. The only difference is that instead of minimizing the score of the original label, we maximize the score of the target label. The loop stops as soon as the image is predicted as the target class.
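The targeted variant only flips the sign of the update: we descend the loss of the target class instead of ascending the loss of the original class. Again an illustrative sketch, with assumed names and parameters:

```python
import torch
import torch.nn.functional as F

def fgsm_targeted(model, image, target, alpha=0.01, max_steps=10):
    """Subtract alpha * sign(grad of the target-class loss) so the
    prediction moves toward `target`. Sketch only."""
    model.eval()
    adv = image.clone().detach().requires_grad_(True)
    for _ in range(max_steps):
        out = model(adv)
        if out.argmax(dim=1).item() == target:
            break  # image now predicted as the target class
        loss = F.cross_entropy(out, torch.tensor([target]))
        model.zero_grad()
        loss.backward()
        with torch.no_grad():
            adv = adv - alpha * adv.grad.sign()  # descend the target loss
        adv.requires_grad_(True)
    return adv.detach()
```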

Sample results (images omitted):

  • Predicted as Apple (948), confidence 0.95 → with adversarial noise, predicted as Rock python (62), confidence 0.16
  • Predicted as Apple (948), confidence 0.95 → with adversarial noise, predicted as Mud turtle (35), confidence 0.54

Gradient Ascent - Fooling Image Generation

In this operation we start with a random image and continuously update it with targeted backpropagation (for a certain class), stopping when we achieve the target confidence for that class. All of the images below were generated from a pretrained AlexNet to fool it.
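The procedure can be sketched as a gradient ascent on the target class score, starting from noise. The function name, learning rate, and stopping threshold here are assumptions for illustration, not the repository's actual values:

```python
import torch
import torch.nn.functional as F

def generate_fooling_image(model, target, shape=(1, 3, 224, 224),
                           lr=0.5, confidence=0.99, max_steps=200):
    """Start from random noise and maximize the raw score of `target`
    until its softmax confidence reaches the threshold. Sketch only."""
    img = torch.randn(shape, requires_grad=True)
    optimizer = torch.optim.SGD([img], lr=lr)
    for _ in range(max_steps):
        out = model(img)
        if F.softmax(out, dim=1)[0, target].item() >= confidence:
            break  # reached the desired confidence
        loss = -out[0, target]  # gradient ascent on the class score
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return img.detach()
```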

Sample results (images omitted):

  • Predicted as Zebra (340), confidence 0.94
  • Predicted as Bow tie (457), confidence 0.95
  • Predicted as Castle (483), confidence 0.99

Gradient Ascent - Adversarial Image Generation

This operation works in exactly the same way as the previous one. The only important difference is that the learning rate is kept a bit smaller, so that the image does not receive large updates and continues to look like the original. As can be seen from the samples, on some images it is almost impossible to notice the difference between the two, while on others it can clearly be observed that something is wrong. All of the examples below were created from and tested on AlexNet to fool it.
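The same gradient-ascent loop can be sketched for this case: the only changes are seeding with the original image and using a smaller learning rate. Names and defaults are again illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def generate_adversarial_image(model, image, target, lr=0.05,
                               confidence=0.99, max_steps=200):
    """Same gradient ascent as fooling-image generation, but seeded with
    the original image and a smaller learning rate so the result stays
    visually close to it. Sketch only."""
    adv = image.clone().detach().requires_grad_(True)
    optimizer = torch.optim.SGD([adv], lr=lr)
    for _ in range(max_steps):
        out = model(adv)
        if F.softmax(out, dim=1)[0, target].item() >= confidence:
            break  # reached the desired confidence in the target class
        loss = -out[0, target]  # gradient ascent on the target score
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return adv.detach()
```

Keeping `lr` small limits the total perturbation, which is why the adversarial image can remain nearly indistinguishable from the original.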

Sample results (images omitted):

  • Predicted as Eel (390), confidence 0.96
  • Predicted as Apple (948), confidence 0.95
  • Predicted as Snowbird (13), confidence 0.99
  • Predicted as Banjo (420), confidence 0.99
  • Predicted as Abacus (398), confidence 0.99
  • Predicted as Dumbbell (543), confidence 1.00

Requirements:

torch >= 0.2.0.post4
torchvision >= 0.1.9
numpy >= 1.13.0
opencv >= 3.1.0

References:

[1] I. J. Goodfellow, J. Shlens, C. Szegedy. Explaining and Harnessing Adversarial Examples https://arxiv.org/abs/1412.6572

[2] A. Nguyen, J. Yosinski, J. Clune. Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images https://arxiv.org/abs/1412.1897
