
prabhant / synthesizing-robust-adversarial-examples

Licence: other

Programming Languages

Jupyter Notebook

Projects that are alternatives to or similar to synthesizing-robust-adversarial-examples

ThermometerEncoding
reproduction of Thermometer Encoding: One Hot Way To Resist Adversarial Examples in PyTorch
Stars: ✭ 15 (-75%)
Mutual labels:  adversarial-machine-learning, adversarial-example
Jupyterwith
declarative and reproducible Jupyter environments - powered by Nix
Stars: ✭ 235 (+291.67%)
Mutual labels:  reproducibility
Make Novice
Automation and Make
Stars: ✭ 122 (+103.33%)
Mutual labels:  reproducibility
Anaconda Project
Tool for encapsulating, running, and reproducing data science projects
Stars: ✭ 153 (+155%)
Mutual labels:  reproducibility
Datapackager
An R package to enable reproducible data processing, packaging and sharing.
Stars: ✭ 125 (+108.33%)
Mutual labels:  reproducibility
Popper
Container-native task automation engine.
Stars: ✭ 216 (+260%)
Mutual labels:  reproducibility
Reproducibility Guide
project page for creating a guide to reproducible research
Stars: ✭ 116 (+93.33%)
Mutual labels:  reproducibility
EAD Attack
EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples
Stars: ✭ 34 (-43.33%)
Mutual labels:  adversarial-machine-learning
Mach Nix
Create highly reproducible python environments
Stars: ✭ 231 (+285%)
Mutual labels:  reproducibility
Nn Template
Generic template to bootstrap your PyTorch project with PyTorch Lightning, Hydra, W&B, and DVC.
Stars: ✭ 145 (+141.67%)
Mutual labels:  reproducibility
Renku
The Renku Project provides a platform and tools for reproducible and collaborative data analysis.
Stars: ✭ 141 (+135%)
Mutual labels:  reproducibility
Batchtools
Tools for computation on batch systems
Stars: ✭ 127 (+111.67%)
Mutual labels:  reproducibility
Catalyst
Accelerated deep learning R&D
Stars: ✭ 2,804 (+4573.33%)
Mutual labels:  reproducibility
Rl Medical
Deep Reinforcement Learning (DRL) agents applied to medical images
Stars: ✭ 123 (+105%)
Mutual labels:  reproducibility
fertile
creating optimal conditions for reproducibility
Stars: ✭ 52 (-13.33%)
Mutual labels:  reproducibility
Steppy
Lightweight, Python library for fast and reproducible experimentation 🔬
Stars: ✭ 119 (+98.33%)
Mutual labels:  reproducibility
Accelerator
The Accelerator is a tool for fast and reproducible processing of large amounts of data.
Stars: ✭ 137 (+128.33%)
Mutual labels:  reproducibility
Plynx
PLynx is a domain agnostic platform for managing reproducible experiments and data-oriented workflows.
Stars: ✭ 192 (+220%)
Mutual labels:  reproducibility
narps
Code related to Neuroimaging Analysis Replication and Prediction Study
Stars: ✭ 31 (-48.33%)
Mutual labels:  reproducibility
targets-tutorial
Short course on the targets R package
Stars: ✭ 87 (+45%)
Mutual labels:  reproducibility

Synthesizing robust adversarial examples

My entry for ICLR 2018 Reproducibility Challenge for paper Synthesizing robust adversarial examples https://openreview.net/pdf?id=BJDH5M-AW

Project presentation: https://docs.google.com/presentation/d/1YQCYtIgGpRjVgqeMy5ytOLEel29mTOs2D5GqqzwqFCU/edit?usp=sharing

Review:

Reproducibility Report

This report was produced as part of the ICLR Reproducibility Challenge: http://www.cs.mcgill.ca/~jpineau/ICLR2018-ReproducibilityChallenge.html

Author: Prabhant Singh, University of Tartu, [email protected]

Abstract: The paper’s main goal was to provide an algorithm for generating adversarial examples that are robust across any chosen distribution of transformations. The authors demonstrated this algorithm in two and three dimensions, and successfully showed that adversarial examples are a practical concern for real-world systems. While reproducing the paper, we implemented the authors’ algorithm in the 2D scenario and were able to verify their claim. We also checked for transferability using an image of the 3D adversarial example generated in the paper in a real-world environment. In addition, this report checks the robustness of adversarial examples in a black-box scenario, which was not covered in the selected paper.

Experimental methodology: After reproducing the Expectation Over Transformation (EOT) algorithm, we generated adversarial examples against a pre-trained InceptionV3 model trained on the ImageNet dataset. The adversarial examples were robust under the predefined distribution. One interesting observation: whenever we rotated the image out of the distribution, the prediction confidence dropped, while the target class that was predefined when creating the adversarial example remained within the top 10 probabilities. The probability of the target class decreased as we rotated the image away from the distribution, and vice versa. As the paper states, there is no guarantee that adversarial examples are robust outside the chosen distribution, but the adversarial example was still able to reduce the confidence of the prediction.
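To make the procedure concrete, below is a minimal sketch of EOT over a rotation distribution. The original repo is a TensorFlow notebook, so the PyTorch model, the choice of random rotation as the transformation distribution, and the PGD-style projected step here are illustrative assumptions, not the author's exact implementation; the learning rate and epsilon match the parameters reported later in this document.

```python
# Minimal EOT sketch (PyTorch). Assumptions: torchvision's InceptionV3 stands
# in for the repo's TensorFlow checkpoint; the transformation distribution is
# random rotation; the update is a PGD-style projected step rather than the
# paper's Lagrangian-relaxed objective.
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms.functional as TF

model = models.inception_v3(weights="IMAGENET1K_V1").eval()
MEAN = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
STD = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)

def eot_attack(x, target, steps=100, lr=2e-1, eps=8.0 / 255.0, n_samples=10):
    """x: (1, 3, 299, 299) image in [0, 1]; target: (1,) class-index tensor."""
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = 0.0
        for _ in range(n_samples):
            # Sample t ~ T: a random rotation of up to 30 degrees.
            angle = float(torch.empty(1).uniform_(-30.0, 30.0))
            logits = model((TF.rotate(x_adv, angle) - MEAN) / STD)
            loss = loss + F.cross_entropy(logits, target)
        grad, = torch.autograd.grad(loss / n_samples, x_adv)
        with torch.no_grad():
            x_adv = x_adv - lr * grad.sign()          # move toward the target class
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project back into the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)             # keep a valid image
    return x_adv.detach()

# Usage, matching the cat-to-guacamole example below:
# x_adv = eot_attack(x, torch.tensor([924]))  # 924 = ImageNet index for guacamole
```

The key idea EOT captures is that the loss is averaged over transformations sampled from the chosen distribution, so the resulting perturbation must fool the classifier in expectation rather than at a single viewpoint.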

Transferability: Transferability was checked on four images. The first image was generated by EOT; the other three were of the adversarial turtle mentioned in the paper [1]. Transferability was tested on six different architectures pre-trained on the ImageNet dataset (ResNet50, InceptionV3, InceptionResNetV2, Xception, VGG16, VGG19). Our adversarial examples were generated using the TensorFlow pre-trained Inception model, and transferability was checked with pre-trained Keras models [2]. The results of the experiments are listed below, followed by a sketch of the evaluation loop:

Generated adversarial image using EOT. Parameters: learning rate 2e-1, epsilon 8.0/255.0. True class: tabby cat. Target class: guacamole.

  1. InceptionV3: Prediction: Flatworm, Confidence: 100%
  2. InceptionResNetV2: Prediction: Comic book, Confidence: 100%
  3. Xception: Prediction: Necklace, Confidence: 92.5%
  4. ResNet50: Prediction: Tabby cat, Confidence: 35%
  5. VGG19: Prediction: Tabby cat, Confidence: 47.9%
  6. VGG16: Prediction: Tabby cat, Confidence: 34.8%

Image of the 3D adversarial turtle [1] mentioned in the paper. True class: turtle.

  1. InceptionV3: Prediction: Pencil sharpener, Confidence: 67.7%
  2. InceptionResNetV2: Prediction: Comic book, Confidence: 100%
  3. Xception: Prediction: Table lamp, Confidence: 84.8%
  4. ResNet50: Prediction: Bucket, Confidence: 20%
  5. VGG19: Prediction: Mask, Confidence: 10.9%
  6. VGG16: Prediction: Turtle, Confidence: 3.6%

Other images of the adversarial turtle produced similar results.
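As referenced above, a minimal sketch of the transferability check across the six pre-trained Keras models [2] might look like the following; the input file name and the decoding step are assumptions for illustration, not the repo's actual code.

```python
# Sketch of the cross-architecture transferability check using Keras
# applications [2]. "adversarial_turtle.jpg" is a placeholder file name.
import numpy as np
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications import (
    resnet50, inception_v3, inception_resnet_v2, xception, vgg16, vgg19)

MODELS = {
    "ResNet50": (resnet50.ResNet50, resnet50.preprocess_input, 224),
    "InceptionV3": (inception_v3.InceptionV3, inception_v3.preprocess_input, 299),
    "InceptionResNetV2": (inception_resnet_v2.InceptionResNetV2,
                          inception_resnet_v2.preprocess_input, 299),
    "Xception": (xception.Xception, xception.preprocess_input, 299),
    "VGG16": (vgg16.VGG16, vgg16.preprocess_input, 224),
    "VGG19": (vgg19.VGG19, vgg19.preprocess_input, 224),
}

for name, (ctor, preprocess, size) in MODELS.items():
    model = ctor(weights="imagenet")
    img = image.load_img("adversarial_turtle.jpg", target_size=(size, size))
    x = preprocess(np.expand_dims(image.img_to_array(img), axis=0))
    preds = model.predict(x)
    # decode_predictions maps the 1000-way output to a label; top-1 shown here.
    _, label, conf = resnet50.decode_predictions(preds, top=1)[0][0]
    print(f"{name}: Prediction: {label}, Confidence: {conf:.1%}")
```

Each application module ships its own preprocess_input, which matters here because the Inception-family models and the VGG/ResNet models expect different input sizes and scalings.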

Observations:

  1. Both the adversarial turtle and the adversarial cat images were misclassified by Inception-related architectures with high confidence; InceptionResNetV2 classified both as “Comic book” with 100% confidence.
  2. The adversarial examples reduced prediction confidence by a wide margin, roughly 50-60 percent in the case of the tabby cat.
  3. Only VGG16 classified the turtle correctly, and only with a very low confidence of 3.6%.
  4. Similar results were found when we rotated, cropped, and zoomed the image. [3]
  5. In the case of the adversarial turtle, the photo was taken outside the chosen distribution (i.e., not within the camera distance of 2.5-3.0 cm mentioned in the paper), yet the image was still misclassified.

Conclusion:

The authors successfully generated adversarial examples that are robust under the given distribution in the case of targeted misclassification. The adversarial examples were also robust to untargeted misclassification under any distribution when evaluated against Inception-related models. Against non-Inception architectures, the adversarial examples reduced confidence by a wide margin. The image of the 3D adversarial turtle can be considered robust under any distribution, as it was misclassified by all architectures and classified correctly only by VGG16, with a very insignificant confidence.

Sources: [1] The image of the adversarial turtle was taken at the recent NIPS conference from a number of viewpoints outside the given distribution.

[2] Pre-trained Keras models: https://keras.io/applications/

[3] The source code and experiment details can be found in this GitHub repo: https://github.com/prabhant/synthesizing-robust-adversarial-examples
