
YiZeng623 / Advanced Gradient Obfuscating

License: MIT
Take further steps in the arms race of adversarial examples with only preprocessing.

Projects that are alternatives to or similar to Advanced Gradient Obfuscating

Additive Margin Softmax
This is the implementation of the paper "Additive Margin Softmax for Face Verification"
Stars: ✭ 464 (+1557.14%)
Mutual labels:  jupyter-notebook, deeplearning
Tf Dann
Domain-Adversarial Neural Network in Tensorflow
Stars: ✭ 556 (+1885.71%)
Mutual labels:  jupyter-notebook, adversarial-learning
Introtodeeplearning
Lab Materials for MIT 6.S191: Introduction to Deep Learning
Stars: ✭ 4,955 (+17596.43%)
Mutual labels:  jupyter-notebook, deeplearning
Deep Learning Resources
A collection of deep learning materials for everyone, arranged from beginner to advanced
Stars: ✭ 422 (+1407.14%)
Mutual labels:  jupyter-notebook, deeplearning
Basic reinforcement learning
An introductory series to Reinforcement Learning (RL) with comprehensive step-by-step tutorials.
Stars: ✭ 826 (+2850%)
Mutual labels:  jupyter-notebook, deeplearning
Monk object detection
A one-stop repository for low-code easily-installable object detection pipelines.
Stars: ✭ 437 (+1460.71%)
Mutual labels:  jupyter-notebook, deeplearning
Deeplearning
Introductory deep learning tutorials and selected articles (Deep Learning Tutorial)
Stars: ✭ 6,783 (+24125%)
Mutual labels:  jupyter-notebook, deeplearning
Magnet
Deep Learning Projects that Build Themselves
Stars: ✭ 351 (+1153.57%)
Mutual labels:  jupyter-notebook, deeplearning
Ai Series
📚 [.md & .ipynb] A series on Artificial Intelligence & Deep Learning, including mathematics fundamentals, Python practice, NLP applications, machine learning, deep learning, tool practice with Scikit-learn, TensorFlow & PyTorch, industry applications, and course notes. 💫
Stars: ✭ 702 (+2407.14%)
Mutual labels:  jupyter-notebook, deeplearning
Deeplearningmugenknock
An implementation cheat sheet for endlessly practicing deep learning
Stars: ✭ 684 (+2342.86%)
Mutual labels:  jupyter-notebook, deeplearning
Pytorch Original Transformer
My implementation of the original transformer model (Vaswani et al.). I've additionally included the playground.py file for visualizing otherwise seemingly hard concepts. Currently included IWSLT pretrained models.
Stars: ✭ 411 (+1367.86%)
Mutual labels:  jupyter-notebook, deeplearning
Concise Ipython Notebooks For Deep Learning
Ipython Notebooks for solving problems like classification, segmentation, generation using latest Deep learning algorithms on different publicly available text and image data-sets.
Stars: ✭ 23 (-17.86%)
Mutual labels:  jupyter-notebook, deeplearning
Text summurization abstractive methods
Multiple implementations of abstractive text summarization, using Google Colab
Stars: ✭ 359 (+1182.14%)
Mutual labels:  jupyter-notebook, deeplearning
Pytorch tutorial
PyTorch Tutorial (1.7)
Stars: ✭ 450 (+1507.14%)
Mutual labels:  jupyter-notebook, deeplearning
Portrait Segmentation
Real-time portrait segmentation for mobile devices
Stars: ✭ 358 (+1178.57%)
Mutual labels:  jupyter-notebook, deeplearning
Monk v1
Monk is a low code Deep Learning tool and a unified wrapper for Computer Vision.
Stars: ✭ 480 (+1614.29%)
Mutual labels:  jupyter-notebook, deeplearning
Pytorch Tutorials Examples And Books
PyTorch 1.x tutorials, examples, and books (an irregularly updated collection of the latest PyTorch 1.x tutorials, examples, and books)
Stars: ✭ 346 (+1135.71%)
Mutual labels:  jupyter-notebook, deeplearning
Action Recognition Visual Attention
Action recognition using soft attention based deep recurrent neural networks
Stars: ✭ 350 (+1150%)
Mutual labels:  jupyter-notebook, deeplearning
Deeplearning Assignment
Deep learning notes
Stars: ✭ 619 (+2110.71%)
Mutual labels:  jupyter-notebook, deeplearning
Advertorch
A Toolbox for Adversarial Robustness Research
Stars: ✭ 826 (+2850%)
Mutual labels:  jupyter-notebook, adversarial-learning

Mitigating Advanced Adversarial Attacks with More Advanced Gradient Obfuscation Techniques

This repository is the official implementation of Mitigating Advanced Adversarial Attacks with More Advanced Gradient Obfuscation Techniques.

Introduction

Deep Neural Networks (DNNs) are well known to be vulnerable to Adversarial Examples (AEs). Recently, advanced gradient-based attacks such as BPDA and EOT were proposed and defeated a considerable number of existing defense methods. To date, there is still no satisfactory solution that can effectively and efficiently defend against these attacks.

In this work, we take a solid step towards mitigating those advanced gradient-based attacks with two major contributions.

First, we perform an in-depth analysis of the root causes of those attacks and propose four properties that can break the fundamental assumptions of those attacks.

Second, we identify a set of operations that can meet those properties. By integrating these operations, we design a preprocessing method (FD+RDG) that can invalidate these powerful attacks. Extensive evaluations indicate that our solution effectively mitigates all existing standard and advanced attack techniques, and outperforms state-of-the-art defenses published in top-tier conferences over the past two years.

Notably, our work requires neither retraining nor any other modification of the target model. Thus, even large models such as the Inception family can be protected on the fly. The working manner of preprocessing-based defense techniques is illustrated below: (figure)
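As a minimal sketch of this plug-and-play idea (not the repository's actual code): a preprocessing defense simply wraps an unmodified, pretrained model, so nothing is retrained. The `toy_model` and the quantization in `preprocess` below are hypothetical stand-ins for the real classifier and for FD+RDG.

```python
import numpy as np

def toy_model(x):
    # Stand-in for a fixed, pretrained classifier (e.g. Inception V3):
    # a linear scorer over two classes. Nothing here is retrained.
    w = np.array([[1.0, -1.0], [-1.0, 1.0]])
    return x @ w

def preprocess(x):
    # Stand-in for an obfuscating transform such as FD+RDG:
    # simple bit-depth reduction (quantization) for illustration.
    return np.round(x * 8) / 8

def defended_model(x):
    # The defense wraps the unmodified model, so it can be attached
    # "on the fly" to any target network.
    return toy_model(preprocess(x))

x = np.array([0.51, 0.23])
print(defended_model(x))  # scores computed on the preprocessed input
```

The only change visible to the attacker is the transform in front of the model, which is exactly what makes the defense model-agnostic.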

Attacks used to demonstrate the effectiveness of our proposed theory:

  1. This repository includes detailed implementations of interactive attacks, including BPDA, BPDA+EOT, and the Semi-Brute-Force attack based on EOT. Some of the code is adapted from Anish & Carlini's GitHub repository: obfuscated-gradients.
  2. This repository also includes implementations of standard attacks, including FGSM, BIM, LBFGS, C&W, and DeepFool. All of the attacks listed here are based on cleverhans.
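To illustrate the core idea behind BPDA (a hedged toy example, not the repository's implementation): when a preprocessing step is non-differentiable, the attacker runs it on the forward pass but back-propagates as if it were the identity, recovering a usable gradient.

```python
import numpy as np

def g(x):
    # Non-differentiable preprocessing ("shattered gradients"):
    # quantization has zero gradient almost everywhere.
    return np.round(x)

def bpda_gradient(x, target):
    # BPDA: evaluate g(x) on the forward pass, but differentiate the
    # loss (z - target)^2 w.r.t. z = g(x) and use that in place of
    # dL/dx, i.e. treat g as the identity on the backward pass.
    z = g(x)
    return 2.0 * (z - target)

x, target = 2.4, 0.0
grad = bpda_gradient(x, target)
x_adv = x - 0.1 * np.sign(grad)  # one gradient-descent attack step
```

The true gradient through `g` would be zero almost everywhere, so the attack would stall; the identity approximation is what revives it.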

Defenses compared with our proposed techniques against the aforementioned attacks:

  1. Under the BPDA attack, we compared our methods with Randomization Layer, Feature Distillation, SHIELD, Total Variance Minimization, JPEG compression, Bit-depth Reduction, and Pixel Deflection, using the original Inception V3 with no defense as the baseline.
  2. Under the BPDA+EOT attack, we compared our methods with Randomization Layer and Random Crop.
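EOT can be sketched in the same toy style (the additive-noise transform below is a hypothetical stand-in for a randomized defense such as a randomization layer): the attacker averages gradients over many draws of the randomness to estimate the gradient of the expected loss.

```python
import numpy as np

def random_transform(x, rng):
    # Stand-in for a randomized defense (e.g. random crop/distortion):
    # additive noise drawn fresh on every forward pass.
    return x + rng.normal(0.0, 0.5)

def eot_gradient(x, target, n_samples=1000, seed=0):
    # EOT: average the per-draw gradient of (t(x) - target)^2 over many
    # samples of the transform t, approximating grad E_t[loss(t(x))].
    rng = np.random.default_rng(seed)
    grads = [2.0 * (random_transform(x, rng) - target)
             for _ in range(n_samples)]
    return np.mean(grads)

# With zero-mean noise the EOT estimate converges to the clean gradient.
print(eot_gradient(1.0, 0.0))  # close to 2.0
```

This is why pure randomization alone is insufficient: averaging washes the randomness out, which motivates combining it with other properties.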

Demonstration

For a single input image, we demonstrate the defense effects of Random Distortion over Grids (RDG).

  1. 'BPDA_demo.ipynb' shows how our method protects an input against the BPDA attack.
  2. 'EOT_demo.ipynb' shows how our method protects an input against the BPDA+EOT attack.
  3. A visual demo of Random Distortion over Grids, which is also proposed in our paper, is presented in 'RDG_visual_demo.ipynb'. (figure)
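For intuition only, here is a toy sketch in the spirit of distortion over grids; the block partitioning, offset range, and `np.roll`-based shift are illustrative assumptions, not the paper's exact RDG algorithm (see 'RDG_visual_demo.ipynb' for the real one).

```python
import numpy as np

def grid_distort(image, cell=8, seed=None):
    # Toy sketch: split the image into cell x cell blocks and shift the
    # pixels of each block by an independent random offset. Each run
    # produces a different distortion, so gradients are randomized.
    rng = np.random.default_rng(seed)
    out = image.copy()
    h, w = image.shape[:2]
    for i in range(0, h, cell):
        for j in range(0, w, cell):
            dy, dx = rng.integers(-2, 3, size=2)  # offset per block
            block = image[i:i + cell, j:j + cell]
            out[i:i + cell, j:j + cell] = np.roll(block, (dy, dx),
                                                  axis=(0, 1))
    return out

img = np.arange(64.0).reshape(8, 8)
distorted = grid_distort(img, cell=4, seed=0)
assert distorted.shape == img.shape              # geometry preserved
assert np.isclose(distorted.mean(), img.mean())  # pixel mass preserved
```

Because each block is only permuted, the image statistics a classifier relies on survive while pixel-level gradients are scrambled.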

Requirements

To install requirements and pre-trained Inception V3 model:

sudo sh setup.sh

The environment is set up as follows: Python 3.7; cleverhans 2.1.0; TensorFlow 1.13.1; albumentations 0.4.3; scipy 1.3.1. The 100 clean images selected from the ImageNet validation set can be downloaded from this link: Google Drive Link. Before using this repository, please check carefully that all data-related files are placed in the '/data/' folder.

Evaluation

  1. The evaluation comparing different defense methods against the BPDA attack is conducted in 'BPDA_compare.ipynb'.
  2. The evaluation comparing different defense methods against the BPDA+EOT attack is conducted in 'EOT_compare.ipynb'.
  3. The evaluation comparing our proposed methods against the Semi-Brute-Force attack based on EOT is conducted in 'adaptive_EOT_test.ipynb'.
  4. Standard adversarial examples (FGSM, BIM, LBFGS, C&W, and DeepFool) are generated to test our proposed technique in 'AE_generation.ipynb'.
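As a reminder of what the simplest of these standard attacks does (a self-contained toy, not the cleverhans implementation used in the notebooks): FGSM takes a single step along the sign of the loss gradient and clips the result back to the valid input range.

```python
import numpy as np

def fgsm(x, grad, eps=0.03):
    # FGSM: one step of size eps along the sign of the loss gradient,
    # then clip back to the valid pixel range [0, 1].
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

# Toy example: for a linear loss w . x, the gradient w.r.t. x is w.
x = np.array([0.2, 0.5, 0.99])
w = np.array([1.0, -1.0, 1.0])
x_adv = fgsm(x, grad=w, eps=0.03)  # each pixel moves by ±eps, then clips
```

Iterating this step with a smaller eps yields BIM, the second attack in the list above.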

Results

Comparison of different defense methods under the BPDA attack:

Attack success rate: (figure)

Model accuracy: (figure)

Comparison of different defense methods under the BPDA+EOT attack:

Attack success rate: (figure)

Model accuracy: (figure)

Comparison of different defense methods under the Semi-Brute-Force attack based on EOT:

Attack success rate: (figure)

Model accuracy: (figure)

Contributing

If you'd like to contribute or have any suggestions for this repository, you can open an issue directly on GitHub. All contributions are welcome!

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].