
jsikyoon / Adv_attack_capsnet

Tensorflow Implementation of Adversarial Attack to Capsule Networks

Programming Languages

python

Projects that are alternatives to or similar to Adv attack capsnet

Hands On Deep Learning Algorithms With Python
Master Deep Learning Algorithms with Extensive Math by Implementing them using TensorFlow
Stars: ✭ 272 (+60.95%)
Mutual labels:  capsule-network
Lung Diseases Classifier
Diseases Detection from NIH Chest X-ray data
Stars: ✭ 52 (-69.23%)
Mutual labels:  capsule-network
Capse
A Capsule Network-based Embedding Model for Knowledge Graph Completion and Search Personalization (NAACL 2019)
Stars: ✭ 114 (-32.54%)
Mutual labels:  capsule-network
Capslayer
CapsLayer: An advanced library for capsule theory
Stars: ✭ 351 (+107.69%)
Mutual labels:  capsule-network
Awesome Capsule Networks
A curated list of awesome resources related to capsule networks
Stars: ✭ 896 (+430.18%)
Mutual labels:  capsule-network
Capsnet tensorflow
Stars: ✭ 72 (-57.4%)
Mutual labels:  capsule-network
EasyRec
A framework for large scale recommendation algorithms.
Stars: ✭ 599 (+254.44%)
Mutual labels:  capsule-network
Capsule Net Pytorch
[NO MAINTENANCE INTENDED] A PyTorch implementation of CapsNet architecture in the NIPS 2017 paper "Dynamic Routing Between Capsules".
Stars: ✭ 158 (-6.51%)
Mutual labels:  capsule-network
Capsgnn
A PyTorch implementation of "Capsule Graph Neural Network" (ICLR 2019).
Stars: ✭ 1,008 (+496.45%)
Mutual labels:  capsule-network
Deep Learning With Csharp And Cntk
Deep Learning with C# and CNTK
Stars: ✭ 108 (-36.09%)
Mutual labels:  capsule-network
Capsnet Visualization
🎆 A visualization of the CapsNet layers to better understand how it works
Stars: ✭ 371 (+119.53%)
Mutual labels:  capsule-network
Capsule Network Tutorial
Pytorch easy-to-follow Capsule Network tutorial
Stars: ✭ 722 (+327.22%)
Mutual labels:  capsule-network
Variational Capsule Routing
Official Pytorch code for (AAAI 2020) paper "Capsule Routing via Variational Bayes", https://arxiv.org/pdf/1905.11455.pdf
Stars: ✭ 84 (-50.3%)
Mutual labels:  capsule-network
Capsule net pytorch
Readable implementation of a Capsule Network as described in "Dynamic Routing Between Capsules" [Hinton et. al.]
Stars: ✭ 301 (+78.11%)
Mutual labels:  capsule-network
Capsule Gan
Code for my Master thesis on "Capsule Architecture as a Discriminator in Generative Adversarial Networks".
Stars: ✭ 120 (-28.99%)
Mutual labels:  capsule-network
Hands-On-Deep-Learning-Algorithms-with-Python
Hands-On Deep Learning Algorithms with Python, By Packt
Stars: ✭ 76 (-55.03%)
Mutual labels:  capsule-network
Nn encapsulation
Capsule research with our trivial contribution
Stars: ✭ 53 (-68.64%)
Mutual labels:  capsule-network
Capsnet Traffic Sign Classifier
A Tensorflow implementation of CapsNet(Capsules Net) apply on german traffic sign dataset
Stars: ✭ 166 (-1.78%)
Mutual labels:  capsule-network
Pytorch Asr
ASR with PyTorch
Stars: ✭ 124 (-26.63%)
Mutual labels:  capsule-network
Novel Deep Learning Model For Traffic Sign Detection Using Capsule Networks
capsule networks that achieves outstanding performance on the German traffic sign dataset
Stars: ✭ 88 (-47.93%)
Mutual labels:  capsule-network

Adversarial Attack to Capsule Networks

This repository investigates how well adversarial attacks perform against CapsNets.

Deep learning has shown attractive results in several applications (speech and image recognition). Since 2014, however, research papers have shown that deep networks can be easily fooled by very small amounts of noise. At the NIPS 2017 workshop, Ian Goodfellow organized a competition on non-targeted/targeted adversarial attacks and defenses for this problem, and many papers on adversarial examples were submitted to ICLR 2018. Against this background, we read G. Hinton's paper, "Dynamic Routing Between Capsules," and tried to validate the robustness of the CapsNet from that paper against well-known adversarial attacks.

The CapsNet part of this repository is based on InnerPeace-Wu's implementation: https://github.com/InnerPeace-Wu/CapsNet-tensorflow.

Settings

This implementation runs on TensorFlow 1.2.1; the detailed settings follow InnerPeace-Wu's implementation.
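A matching environment can be set up with pip, for example (TensorFlow 1.2.1 requires a correspondingly old Python toolchain):

pip install tensorflow==1.2.1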

Attackers

Fast Gradient Sign Method (FGSM)

The Fast Gradient Sign Method generates adversarial images from the sign of the loss gradient with respect to the input, computed using the true label.

https://arxiv.org/pdf/1607.02533.pdf

python attack_gsm.py --max_iter=1

These gradient-based attack modules (FGSM, basic iter., step l.l, and iter. l.l) are based on gongzhitaao's repository and the sample attack code from Google Brain:

https://github.com/gongzhitaao/tensorflow-adversarial
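For reference, here is a minimal sketch of the FGSM update in TensorFlow 1.x. The `model_fn` callable (image batch to logits) and the pixel range [0, 1] are assumptions for illustration, not the repo's actual API; the real graph construction lives in attack_gsm.py. A `max_iter` argument is included because the same update, applied repeatedly with a smaller step, gives the basic iteration method below.

```python
import tensorflow as tf

def fgsm_perturb(model_fn, x, labels, eps, max_iter=1):
    """Sketch of FGSM: ascend the true-label loss by signed-gradient steps.

    model_fn: hypothetical callable mapping images to logits.
    eps:      total L-infinity perturbation budget (pixels assumed in [0, 1]).
    max_iter: 1 = plain FGSM; >1 = basic iteration with step eps/max_iter.
    """
    step = eps / max_iter
    x_adv = x
    for _ in range(max_iter):
        logits = model_fn(x_adv)
        loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
            labels=labels, logits=logits)
        grad, = tf.gradients(loss, x_adv)
        # One signed step up the loss surface for the true label.
        x_adv = tf.stop_gradient(x_adv + step * tf.sign(grad))
        # Keep the result inside the eps-ball and the valid pixel range.
        x_adv = tf.clip_by_value(x_adv, x - eps, x + eps)
        x_adv = tf.clip_by_value(x_adv, 0.0, 1.0)
    return x_adv
```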

Basic Iteration (basic iter.)

Basic iteration generates adversarial images by running the FGSM update iteratively.

https://arxiv.org/pdf/1607.02533.pdf

python attack_gsm.py --max_iter=<iter_num>

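With the hypothetical `fgsm_perturb` sketch above, basic iteration simply corresponds to `max_iter > 1`: for example, `fgsm_perturb(model_fn, x, labels, eps, max_iter=5)` mirrors running the script with `--max_iter=5`.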

Least-likely Class Method (step l.l)

The least-likely class method generates adversarial images using not the true label but the "least-likely" label, i.e., the class the model rates least probable.

https://arxiv.org/pdf/1607.02533.pdf

python attack_llcm.py --max_iter=1

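Under the same assumptions as the FGSM sketch above (hypothetical `model_fn`, pixels in [0, 1]), the least-likely class step can be sketched as follows: the target is the class the clean image's logits rate lowest, and the update descends the loss toward that target.

```python
import tensorflow as tf

def least_likely_perturb(model_fn, x, eps, max_iter=1):
    """Sketch of step l.l: move the image toward the least-likely class.

    No true label is needed; the target comes from the model itself.
    max_iter: 1 = step l.l; >1 = iter. l.l with step eps/max_iter.
    """
    step = eps / max_iter
    # Fix the target label from the clean image's predictions.
    ll_labels = tf.argmin(model_fn(x), axis=1)
    x_adv = x
    for _ in range(max_iter):
        logits = model_fn(x_adv)
        loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
            labels=ll_labels, logits=logits)
        grad, = tf.gradients(loss, x_adv)
        # Descend the loss so the least-likely class becomes more likely.
        x_adv = tf.stop_gradient(x_adv - step * tf.sign(grad))
        x_adv = tf.clip_by_value(x_adv, x - eps, x + eps)
        x_adv = tf.clip_by_value(x_adv, 0.0, 1.0)
    return x_adv
```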

Iterative Least-likely Class Method (iter. l.l)

The iterative least-likely class method generates adversarial images by running the step l.l update iteratively.

https://arxiv.org/pdf/1607.02533.pdf

python attack_llcm.py --max_iter=<iter_num>

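As with basic iteration, the iterated variant corresponds to calling the hypothetical `least_likely_perturb` sketch above with `max_iter > 1`, matching the `--max_iter` flag.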

Adversarial Attack based on adversarial generative learning (AGL)

This module learns adversarial noise that makes the model mis-predict any input the noise is added to.

Three loss terms are used, as listed below; a sketch of how they can be built follows the command.

  1. reverse cross-entropy for the true label (rce) => inspired by FGSM
  2. cross-entropy for the least-likely label (ce_ll) => inspired by step l.l
  3. cross-entropy for the top-1 predicted label excluding the true label (ce_s) => a new contribution

python attack_gan.py
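As a rough illustration only (the actual formulation is in attack_gan.py; the `noise` argument, the masking trick, and all names here are assumptions), the three loss terms could be built like this:

```python
import tensorflow as tf

def agl_losses(model_fn, x, noise, labels, num_classes=10):
    """Sketch of the three candidate losses for learned adversarial noise."""
    logits = model_fn(tf.clip_by_value(x + noise, 0.0, 1.0))
    ce_true = tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=labels, logits=logits)
    # 1. rce: reverse cross-entropy for the true label (FGSM-inspired).
    rce = -ce_true
    # 2. ce_ll: cross-entropy toward the least-likely label (step l.l-inspired).
    ll_labels = tf.argmin(logits, axis=1)
    ce_ll = tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=ll_labels, logits=logits)
    # 3. ce_s: cross-entropy toward the top-1 label excluding the true label,
    #    found by pushing the true-label logit far down before the argmax.
    true_mask = tf.one_hot(labels, num_classes) * 1e9
    runner_up = tf.argmax(logits - true_mask, axis=1)
    ce_s = tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=runner_up, logits=logits)
    return rce, ce_ll, ce_s
```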

The Agents

Naiive conv model (Baseline)

This model has no adversarial defense applied.

Its structure is as follows:

Three convolutional layers with ReLU activations, with kernel size 3 and [64, 128, 256] feature maps, respectively. After the three conv. layers, two linear layers with ReLU and softmax activations produce the prediction.

Its accuracy on the original MNIST test data is about 99%.
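A minimal sketch matching that description (kernel size and feature counts from the text; the pooling layers, the hidden width of 512, and `tf.layers` usage are assumptions for illustration):

```python
import numpy as np
import tensorflow as tf

def baseline_cnn(x, num_classes=10):
    """Sketch of the baseline: 3 conv layers (kernel 3, ReLU,
    [64, 128, 256] features) then 2 linear layers (ReLU, softmax)."""
    h = x
    for n_features in [64, 128, 256]:
        h = tf.layers.conv2d(h, n_features, kernel_size=3,
                             padding='same', activation=tf.nn.relu)
        h = tf.layers.max_pooling2d(h, pool_size=2, strides=2)  # assumed
    h = tf.reshape(h, [-1, int(np.prod(h.get_shape().as_list()[1:]))])
    h = tf.layers.dense(h, 512, activation=tf.nn.relu)  # width assumed
    logits = tf.layers.dense(h, num_classes)
    # Softmax is applied by the loss function / at prediction time.
    return logits
```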

Adversarial Training with adversarial examples from FGSM and Basic iter.

This model regularizes the baseline with adversarial examples from FGSM and basic iter.

In each epoch, adversarial examples are regenerated and used to train the agent (see the sketch below).
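One common way to write this loop, sketched with the hypothetical `baseline_cnn` and `fgsm_perturb` helpers from the earlier sketches; `tf.make_template` handles weight sharing between the classifier and the attack graph, while `mnist_batches` and the epsilon value are placeholders. The repo's actual loop may mix clean and adversarial batches differently.

```python
import numpy as np
import tensorflow as tf

# Share one set of weights across every call (classifier and attack graph).
model = tf.make_template('model', baseline_cnn)  # baseline_cnn from above

x_ph = tf.placeholder(tf.float32, [None, 28, 28, 1])
y_ph = tf.placeholder(tf.int64, [None])
loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=y_ph, logits=model(x_ph)))
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)
x_adv_op = fgsm_perturb(model, x_ph, y_ph, eps=0.1)  # eps illustrative

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(50):                       # max epoch from the text
        for x_batch, y_batch in mnist_batches():  # hypothetical iterator
            # Regenerate adversarial examples against the current weights...
            x_adv = sess.run(x_adv_op, {x_ph: x_batch, y_ph: y_batch})
            # ...and train on clean and adversarial examples together.
            sess.run(train_op, {x_ph: np.concatenate([x_batch, x_adv]),
                                y_ph: np.concatenate([y_batch, y_batch])})
```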

Adversarial Training with adversarial examples from step l.l and iter. l.l

This model regularizes the baseline with adversarial examples from step l.l and iter. l.l.

In each epoch, adversarial examples are regenerated and used to train the agent, exactly as in the loop above but with the least-likely attacker.

Experiments Results (MNIST)

  • The CapsNet is trained for 1.5K iterations; its training/test accuracies are 0.995/0.991, respectively.

  • Epsilon is measured in the infinity norm (the maximum per-pixel change).

  • The maximum number of training epochs is 50.

  • The maximum number of iterations for basic iter. and iter. l.l is 5.

  • The GIF files show adversarial examples with max_epsilon from 5 to 50.

FGSM


| FGSM (max epsilon) | 0 | 5 | 10 | 15 | 20 | 25 | 30 | 35 | 40 | 45 | 50 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| naive model | 0.990 | 0.984 | 0.966 | 0.941 | 0.920 | 0.883 | 0.826 | 0.759 | 0.683 | 0.606 | 0.529 |
| adv. training with FGSM | 0.992 | 0.988 | 0.984 | 0.984 | 0.979 | 0.975 | 0.958 | 0.965 | 0.960 | 0.957 | 0.955 |
| adv. training with basic iter. | 0.992 | 0.989 | 0.986 | 0.977 | 0.975 | 0.974 | 0.971 | 0.969 | 0.962 | 0.964 | 0.958 |
| adv. training with step l.l | 0.992 | 0.987 | 0.978 | 0.973 | 0.966 | 0.958 | 0.955 | 0.949 | 0.928 | 0.942 | 0.934 |
| adv. training with iter l.l | 0.992 | 0.986 | 0.980 | 0.967 | 0.958 | 0.968 | 0.961 | 0.959 | 0.953 | 0.946 | 0.942 |
| CapsNet | 0.991 | 0.975 | 0.942 | 0.901 | 0.854 | 0.812 | 0.785 | 0.765 | 0.749 | 0.736 | 0.721 |


Basic iter.


| Basic iter. (max epsilon) | 0 | 5 | 10 | 15 | 20 | 25 | 30 | 35 | 40 | 45 | 50 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| naive model | 0.990 | 0.986 | 0.974 | 0.952 | 0.915 | 0.866 | 0.807 | 0.736 | 0.681 | 0.647 | 0.623 |
| adv. training with FGSM | 0.992 | 0.989 | 0.987 | 0.987 | 0.982 | 0.966 | 0.966 | 0.960 | 0.959 | 0.950 | 0.944 |
| adv. training with basic iter. | 0.992 | 0.989 | 0.989 | 0.982 | 0.979 | 0.977 | 0.977 | 0.974 | 0.971 | 0.970 | 0.965 |
| adv. training with step l.l | 0.992 | 0.989 | 0.982 | 0.979 | 0.973 | 0.966 | 0.959 | 0.946 | 0.924 | 0.934 | 0.915 |
| adv. training with iter l.l | 0.992 | 0.988 | 0.984 | 0.974 | 0.968 | 0.972 | 0.965 | 0.962 | 0.955 | 0.947 | 0.942 |
| CapsNet | 0.991 | 0.975 | 0.936 | 0.877 | 0.800 | 0.721 | 0.647 | 0.579 | 0.517 | 0.465 | 0.423 |


step l.l


| step l.l (max epsilon) | 0 | 5 | 10 | 15 | 20 | 25 | 30 | 35 | 40 | 45 | 50 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| naive model | 0.990 | 0.988 | 0.982 | 0.971 | 0.951 | 0.923 | 0.883 | 0.817 | 0.740 | 0.637 | 0.511 |
| adv. training with FGSM | 0.992 | 0.991 | 0.990 | 0.991 | 0.988 | 0.975 | 0.980 | 0.983 | 0.980 | 0.980 | 0.978 |
| adv. training with basic iter. | 0.992 | 0.989 | 0.990 | 0.984 | 0.983 | 0.984 | 0.985 | 0.981 | 0.981 | 0.981 | 0.978 |
| adv. training with step l.l | 0.992 | 0.989 | 0.987 | 0.987 | 0.985 | 0.982 | 0.979 | 0.973 | 0.967 | 0.972 | 0.970 |
| adv. training with iter l.l | 0.992 | 0.989 | 0.988 | 0.985 | 0.983 | 0.977 | 0.977 | 0.973 | 0.971 | 0.969 | 0.966 |
| CapsNet | 0.991 | 0.991 | 0.979 | 0.946 | 0.880 | 0.797 | 0.702 | 0.619 | 0.546 | 0.484 | 0.431 |


iter l.l


| iter l.l (max epsilon) | 0 | 5 | 10 | 15 | 20 | 25 | 30 | 35 | 40 | 45 | 50 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| naive model | 0.990 | 0.989 | 0.986 | 0.979 | 0.964 | 0.942 | 0.897 | 0.837 | 0.754 | 0.650 | 0.550 |
| adv. training with FGSM | 0.992 | 0.991 | 0.991 | 0.992 | 0.990 | 0.980 | 0.985 | 0.986 | 0.985 | 0.981 | 0.978 |
| adv. training with basic iter. | 0.992 | 0.990 | 0.991 | 0.987 | 0.987 | 0.987 | 0.988 | 0.986 | 0.984 | 0.985 | 0.983 |
| adv. training with step l.l | 0.992 | 0.990 | 0.990 | 0.989 | 0.988 | 0.987 | 0.985 | 0.977 | 0.973 | 0.974 | 0.969 |
| adv. training with iter l.l | 0.992 | 0.990 | 0.989 | 0.987 | 0.987 | 0.982 | 0.982 | 0.980 | 0.977 | 0.976 | 0.975 |
| CapsNet | 0.991 | 0.991 | 0.985 | 0.964 | 0.933 | 0.895 | 0.854 | 0.819 | 0.787 | 0.755 | 0.733 |


Discussion

CapsNet also falls into the trap of every type of adversarial example tested here. The capsule structure may offer a hint toward solving the adversarial problem; however, the experimental results show that CapsNets are not immune to adversarial attacks.
