Implementation of Papers on Adversarial Examples

Implementation of papers with real-time visualizations and parameter control.

Dependencies

  • Python3
  • PyTorch (built from source)
  • OpenCV
  • NumPy
  • SciPy
  • TensorBoard

Contents

  • Random Perturbations
  • Fast Gradient Sign Method (FGSM)
  • Basic Iterative Method (Targeted and Untargeted)
  • One Pixel Attack for Fooling Deep Neural Networks
  • AdvGAN - Generating Adversarial Examples with Adversarial Networks
  • Spatially Transformed Adversarial Examples


Random Perturbations

From one of the first papers on adversarial examples, Explaining and Harnessing Adversarial Examples:

The direction of perturbation, rather than the specific point in space, matters most. Space is not full of pockets of adversarial examples that finely tile the reals like the rational numbers.

This project examines this idea by testing the robustness of a DNN to randomly generated perturbations.

Usage

$ python3 explore_space.py --img images/horse.png

Demo

fgsm.gif

This code adds to the input image (img) a randomly generated perturbation (vec1) subject to a max-norm constraint eps. The resulting adversarial image lies on a hypercube centered around the original image. To explore a region (a hypersphere) around the adversarial image (img + vec1), we add to it another perturbation (vec2) whose L2 norm is constrained to rad.
Pressing keys e and r generates a new vec1 and vec2, respectively.
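A minimal sketch of how the two perturbations might be generated (the helper names and the sign-based construction of vec1 are assumptions based on the description above, not the actual code in explore_space.py):

```python
import numpy as np

def random_max_norm_perturbation(shape, eps):
    # vec1: each component is +/- eps, so img + vec1 lies on a hypercube
    # of side 2*eps centered at the original image
    return eps * np.sign(np.random.randn(*shape)).astype(np.float32)

def random_l2_perturbation(shape, rad):
    # vec2: a random direction scaled to L2 norm rad, exploring a
    # hypersphere around the adversarial image img + vec1
    v = np.random.randn(*shape).astype(np.float32)
    return rad * v / (np.linalg.norm(v) + 1e-12)

# img: float32 image with pixel values in [0, 255]
# vec1 = random_max_norm_perturbation(img.shape, eps)   # regenerated on key 'e'
# vec2 = random_l2_perturbation(img.shape, rad)         # regenerated on key 'r'
# adv = np.clip(img + vec1 + vec2, 0, 255)
```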

Random Perturbations

The classifier is robust to these random perturbations even though they severely degrade the image. The perturbations are clearly noticeable and have a much higher max norm than the directed FGSM perturbations shown below.

horse_explore, automobile_explore, truck_explore (predictions: horse, automobile, truck)

In the above images, there is no change in class label and only a very small drop in probability.

FGSM Perturbations

A properly directed perturbation with max norm as low as 3, which is almost imperceptible, can fool the classifier.

horse_scaled, horse_adversarial, perturbation (original: horse; predicted: dog; eps = 6)

Fast Gradient Sign Method (FGSM)

Paper: Explaining and Harnessing Adversarial Examples

Usage

  • Run the script
$ python3 fgsm_mnist.py --img one.jpg --gpu
$ python3 fgsm_imagenet.py --img goldfish.jpg --model resnet18 --gpu

fgsm_mnist.py - for attacks on a custom model trained on MNIST (weights in 9920.pth.tar).
fgsm_imagenet.py - for pretrained ImageNet models (resnet18, resnet50, etc.).

  • Control keys
    • use trackbar to change epsilon (max norm)
    • esc - close
    • s - save perturbation and adversarial image
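For reference, the core FGSM update from the paper fits in a few lines of PyTorch (a sketch assuming inputs normalized to [0, 1]; the actual scripts wrap this in the OpenCV trackbar UI):

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """Fast Gradient Sign Method: x_adv = x + eps * sign(grad_x loss)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    pert = eps * x.grad.sign()
    x_adv = (x + pert).clamp(0, 1)   # keep pixels in the valid range
    return x_adv.detach(), pert
```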

Demo

fgsm.gif

Results

MNIST

Adversarial image / perturbation pairs (predicted label and eps for each):
  • Pred: 4, eps: 38
  • Pred: 7, eps: 60
  • Pred: 8, eps: 42
  • Pred: 8, eps: 12
  • Pred: 9, eps: 17

Basic Iterative Method (Targeted and Untargeted)

Paper: Adversarial examples in the physical world

Usage

  • Run the script
$ python3 iterative.py --img images/goldfish.jpg --model resnet18 --target 4
# If the 'target' argument is not specified, the attack is untargeted
  • Control keys
    • use trackbar to change epsilon (max norm of perturbation) and iter (number of iterations)
    • esc - close, space - pause
    • s - save perturbation and adversarial image
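The iterative method repeats a small signed-gradient step and keeps the total perturbation inside the eps ball around the original image. A sketch of both modes (the step size alpha is an assumption; the actual script exposes eps and iter through trackbars):

```python
import torch
import torch.nn.functional as F

def basic_iterative(model, x, y, eps, iters, alpha=0.01, target=None):
    """Untargeted if target is None, otherwise targeted toward `target`."""
    x_orig = x.clone().detach()
    x_adv = x_orig.clone()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        if target is None:
            # untargeted: ascend the loss for the true label y
            grad = torch.autograd.grad(F.cross_entropy(logits, y), x_adv)[0]
            step = alpha * grad.sign()
        else:
            # targeted: descend the loss toward the target label
            grad = torch.autograd.grad(F.cross_entropy(logits, target), x_adv)[0]
            step = -alpha * grad.sign()
        x_adv = x_adv.detach() + step
        x_adv = x_orig + (x_adv - x_orig).clamp(-eps, eps)  # project back to the eps ball
        x_adv = x_adv.clamp(0, 1)                           # valid pixel range
    return x_adv
```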

Demo

iterative.gif

One Pixel Attack for Fooling Deep Neural Networks

Paper

The existence of single-pixel adversarial perturbations suggests that the assumption made in Explaining and Harnessing Adversarial Examples, namely that small additive perturbations across many dimensions accumulate into a large change in the output, might not be necessary to explain why natural images are sensitive to small perturbations.

Usage

$ python3 one_pixel.py --img airplane.jpg --d 3 --iters 600 --popsize 10

d is the number of pixels to change (the L0 norm).
iters and popsize are parameters for Differential Evolution.
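A rough sketch of how differential evolution can drive the search (using scipy.optimize.differential_evolution; predict_prob, the RGB image layout, and the per-pixel encoding are assumptions, not the script's exact interface):

```python
import numpy as np
from scipy.optimize import differential_evolution

def one_pixel_attack(img, true_label, predict_prob, d=1, iters=600, popsize=10):
    """Each candidate encodes d pixels as (x, y, r, g, b) tuples."""
    h, w = img.shape[:2]
    bounds = [(0, w - 1), (0, h - 1), (0, 255), (0, 255), (0, 255)] * d

    def apply(candidate):
        perturbed = img.copy()
        for px in np.split(np.asarray(candidate), d):
            x, y = int(px[0]), int(px[1])
            perturbed[y, x] = px[2:5]
        return perturbed

    def objective(candidate):
        # minimize the probability assigned to the true class
        return predict_prob(apply(candidate))[true_label]

    result = differential_evolution(objective, bounds,
                                    maxiter=iters, popsize=popsize)
    return apply(result.x)
```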

Results

Attacks are typically successful on images classified with low confidence. For successful attacks on high-confidence images, increase d, i.e., the number of pixels to perturb.

Original     Adversarial prediction (confidence)
airplane     bird (0.8075)
bird         deer (0.8933)
cat          frog (0.8000)
frog         bird (0.6866)
horse        deer (0.9406)

AdvGAN - Generating Adversarial Examples with Adversarial Networks

Paper | IJCAI 2018

Usage

Inference

$ python3 advgan.py --img images/0.jpg --target 4 --model Model_C --bound 0.3

A separate Generator is trained for each of these settings. The code loads the appropriate trained model from the saved/ directory based on the given arguments. As of now there are 22 Generators, covering different targets, bounds (0.2 and 0.3), and target models (only Model_C for now).
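Applying a trained generator is just one forward pass plus the clipping described under Results below; a sketch with assumed names (the real Generator definition lives in generators.py and the checkpoint layout under saved/ may differ):

```python
import torch

def advgan_attack(G, x, thres=0.3):
    """Apply a trained AdvGAN generator G to images x normalized to [0, 1]."""
    G.eval()
    with torch.no_grad():
        pert = G(x).clamp(-thres, thres)   # bound the perturbation by thres
        x_adv = (x + pert).clamp(0, 1)     # keep the adversarial image valid
    return x_adv, pert

# Hypothetical usage, once a generator for the chosen target/bound/model is loaded:
# G.load_state_dict(torch.load('saved/...'))   # pick the checkpoint matching the args
# x_adv, pert = advgan_attack(G, x, thres=0.3)
```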

Training AdvGAN (Untargeted)

$ python3 train_advgan.py --model Model_C --gpu

Training AdvGAN (Targeted)

$ python3 train_advgan.py --model Model_C --target 4 --thres 0.3 --gpu
# thres: Perturbation bound 

Use --help to see the other available arguments (epochs, batch_size, lr, etc.).

Training Target Models (Models A, B and C)

$ python3 train_target_models.py --model Model_C

For TensorBoard visualization,

$ python3 generators.py
$ python3 discriminators.py

The code supports only the MNIST dataset for now. The notation mostly follows the paper.

Results

A few changes were made to get the model to work:

  • The Generator in the paper has a ReLU on the last layer. If the input data is normalized to [-1, 1], no perturbation can be produced in the negative region; as expected, accuracies were poor (~10% untargeted), so the ReLU was removed. Data normalization also had a significant effect on performance: with [-1, 1] normalization accuracies were around 70%, but with [0, 1] normalization they were ~99%.
  • Perturbations (pert) and adversarial images (x + pert) were clipped; training does not converge otherwise.

These results are for the following settings.

  • Dataset - MNIST
  • Data normalization - [0 1]
  • thres (perturbation bound) - 0.3 and 0.2
  • No ReLU at the end in Generator
  • Epochs - 15
  • Batch Size - 128
  • LR Scheduler - step_size 5, gamma 0.1 and initial lr - 0.001

Target       Acc [thres: 0.3]   Acc [thres: 0.2]
Untargeted   0.9921             0.8966
0            0.9643             0.4330
1            0.9822             0.4749
2            0.9961             0.8499
3            0.9939             0.8696
4            0.9833             0.6293
5            0.9918             0.7968
6            0.9584             0.4652
7            0.9899             0.6866
8            0.9943             0.8430
9            0.9922             0.7610

Untargeted

Pred: 9 Pred: 3 Pred: 8 Pred: 8 Pred: 4 Pred: 3 Pred: 8 Pred: 3 Pred: 3 Pred: 8

Targeted

Target: 0 Target: 1 Target: 2 Target: 3 Target: 4 Target: 5 Target: 6 Target: 7 Target: 8 Target: 9
Pred: 0 Pred: 1 Pred: 2 Pred: 3 Pred: 4 Pred: 5 Pred: 6 Pred: 7 Pred: 8 Pred: 9
Pred: 0 Pred: 1 Pred: 2 Pred: 3 Pred: 4 Pred: 5 Pred: 6 Pred: 7 Pred: 8 Pred: 9
Pred: 0 Pred: 1 Pred: 2 Pred: 3 Pred: 4 Pred: 5 Pred: 6 Pred: 7 Pred: 8 Pred: 9

Spatially Transformed Adversarial Examples

Paper | ICLR 2018
Refer to View Synthesis by Appearance Flow for clarity.

Usage

$ python3 stadv.py --img images/1.jpg --target 7

Requires OpenCV for real-time visualization.
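Instead of additive noise, stAdv optimizes a per-pixel flow field and resamples the image; a sketch of one loss evaluation using torch.nn.functional.grid_sample (the tau weight and the simplified flow penalty are assumptions; the paper uses a total-variation-style smoothness loss and L-BFGS):

```python
import torch
import torch.nn.functional as F

def stadv_loss(model, x, target, flow, tau=0.05):
    """x: [1, C, H, W]; flow: [1, H, W, 2] offsets added to the identity grid."""
    _, _, h, w = x.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing='ij')
    grid = torch.stack((xs, ys), dim=-1).unsqueeze(0)           # identity sampling grid
    x_adv = F.grid_sample(x, grid + flow, align_corners=True)   # spatially transformed image
    adv_loss = F.cross_entropy(model(x_adv), target)            # push prediction toward target
    flow_loss = flow.pow(2).mean()                               # simplified stand-in for flow smoothness
    return adv_loss + tau * flow_loss, x_adv
```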

Demo

0_1 1_2 2_3 3_4 4_5 5_6 6_7 7_8 8_9 9_0

Results

MNIST

The column index is the target label; ground-truth images lie along the diagonal.

tile
