ZhaoyangLyu / POPQORN

Licence: other
An Algorithm to Quantify Robustness of Recurrent Neural Networks

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to POPQORN

perceptual-advex
Code and data for the ICLR 2021 paper "Perceptual Adversarial Robustness: Defense Against Unseen Threat Models".
Stars: ✭ 44 (+0%)
Mutual labels:  robustness, adversarial-attacks
SimP-GCN
Implementation of the WSDM 2021 paper "Node Similarity Preserving Graph Convolutional Networks"
Stars: ✭ 43 (-2.27%)
Mutual labels:  robustness, adversarial-attacks
TIGER
Python toolbox to evaluate graph vulnerability and robustness (CIKM 2021)
Stars: ✭ 103 (+134.09%)
Mutual labels:  robustness, adversarial-attacks
DiagnoseRE
Source code and dataset for the CCKS 2021 paper "On Robustness and Bias Analysis of BERT-based Relation Extraction"
Stars: ✭ 23 (-47.73%)
Mutual labels:  robustness, adversarial-attacks
s-attack
[CVPR 2022] S-attack library. Official implementation of two papers "Vehicle trajectory prediction works, but not everywhere" and "Are socially-aware trajectory prediction models really socially-aware?".
Stars: ✭ 51 (+15.91%)
Mutual labels:  robustness, adversarial-attacks
square-attack
Square Attack: a query-efficient black-box adversarial attack via random search [ECCV 2020]
Stars: ✭ 89 (+102.27%)
Mutual labels:  robustness, adversarial-attacks
Neural-Chatbot
A Neural Network based Chatbot
Stars: ✭ 68 (+54.55%)
Mutual labels:  recurrent-neural-networks
Denoised-Smoothing-TF
Minimal implementation of Denoised Smoothing (https://arxiv.org/abs/2003.01908) in TensorFlow.
Stars: ✭ 19 (-56.82%)
Mutual labels:  robustness
Probabilistic-RNN-DA-Classifier
Probabilistic Dialogue Act Classification for the Switchboard Corpus using an LSTM model
Stars: ✭ 22 (-50%)
Mutual labels:  recurrent-neural-networks
Visual-Attention-Model
Chainer implementation of Deepmind's Visual Attention Model paper
Stars: ✭ 27 (-38.64%)
Mutual labels:  recurrent-neural-networks
grb
Graph Robustness Benchmark: A scalable, unified, modular, and reproducible benchmark for evaluating the adversarial robustness of Graph Machine Learning.
Stars: ✭ 70 (+59.09%)
Mutual labels:  adversarial-attacks
Pro-GNN
Implementation of the KDD 2020 paper "Graph Structure Learning for Robust Graph Neural Networks"
Stars: ✭ 202 (+359.09%)
Mutual labels:  adversarial-attacks
Deep-Learning-A-Z-Hands-on-Artificial-Neural-Network
Codes and Templates from the SuperDataScience Course
Stars: ✭ 39 (-11.36%)
Mutual labels:  recurrent-neural-networks
LSTM-footballMatchWinner
This repository contains the code for a conference paper "Predicting the football match winner using LSTM model of Recurrent Neural Networks" that we wrote
Stars: ✭ 44 (+0%)
Mutual labels:  recurrent-neural-networks
ViTs-vs-CNNs
[NeurIPS 2021]: Are Transformers More Robust Than CNNs? (Pytorch implementation & checkpoints)
Stars: ✭ 145 (+229.55%)
Mutual labels:  robustness
generative adversary
Code for the unrestricted adversarial examples paper (NeurIPS 2018)
Stars: ✭ 58 (+31.82%)
Mutual labels:  adversarial-attacks
ACT
Alternative approach for Adaptive Computation Time in TensorFlow
Stars: ✭ 16 (-63.64%)
Mutual labels:  recurrent-neural-networks
Advances-in-Label-Noise-Learning
A curated (most recent) list of resources for Learning with Noisy Labels
Stars: ✭ 360 (+718.18%)
Mutual labels:  robustness
GTAV-Self-driving-car
Self driving car in GTAV with Deep Learning
Stars: ✭ 15 (-65.91%)
Mutual labels:  recurrent-neural-networks
Keras-LSTM-Trajectory-Prediction
A Keras multi-input multi-output LSTM-based RNN for object trajectory forecasting
Stars: ✭ 88 (+100%)
Mutual labels:  recurrent-neural-networks

POPQORN: An Algorithm to Quantify Robustness of Recurrent Neural Networks

POPQORN (Propagated-output Quantified Robustness for RNNs) is a general algorithm to quantify the robustness of recurrent neural networks (RNNs), including vanilla RNNs, LSTMs, and GRUs. It provides a certified lower bound on the minimum adversarial distortion for RNNs by bounding the non-linear activations in RNNs with linear functions (a 1-D illustration of this idea follows the list below). POPQORN is

  • Novel - to the best of our knowledge, it is the first general framework to provide a quantified robustness evaluation with guarantees for RNNs.
  • Effective - it can handle complicated RNN structures besides vanilla RNNs, including LSTMs and GRUs that contain challenging coupled nonlinearities.
  • Versatile - it can be widely applied in applications including but not limited to computer vision, natural language processing, and speech recognition. Its robustness quantification on individual steps in the input sequence can lead to new insights.
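
As a 1-D illustration of the bounding idea, consider sandwiching tanh between two linear functions on an interval. The sketch below is hypothetical (not code from this repo): on an interval with l >= 0, where tanh is concave, the chord is a valid linear lower bound and the tangent at the midpoint is a valid linear upper bound. POPQORN extends this kind of sandwiching to the coupled, higher-dimensional nonlinearities inside LSTMs and GRUs.

import math
import torch

# Interval [l, u] with l >= 0: tanh is concave here, so the chord lies
# below the curve and any tangent line lies above it.
l, u = 0.2, 1.5
z = torch.linspace(l, u, 100)

chord_slope = (math.tanh(u) - math.tanh(l)) / (u - l)
lower = math.tanh(l) + chord_slope * (z - l)          # linear lower bound

m = (l + u) / 2
tangent_slope = 1 - math.tanh(m) ** 2                 # tanh'(m)
upper = math.tanh(m) + tangent_slope * (z - m)        # linear upper bound

assert torch.all(lower <= torch.tanh(z) + 1e-6)
assert torch.all(torch.tanh(z) <= upper + 1e-6)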

This repo intends to release code for our work:

Ching-Yun Ko*, Zhaoyang Lyu*, Tsui-Wei Weng, Luca Daniel, Ngai Wong and Dahua Lin, "POPQORN: Quantifying Robustness of Recurrent Neural Networks", ICML 2019

* Equal contribution

The appendix of the paper is in the file POPQORN/Appendix.pdf.

Updates

  • Jun 7, 2019: Initial release. Released the code for quantifying robustness of vanilla RNNs and LSTMs.
  • To do: Add descriptions for experimental results.

Setup

The code is tested with Python 3.7.3, PyTorch 1.1.0, and CUDA 9.0. Run the following to install PyTorch and its CUDA toolkit:

conda install pytorch torchvision cudatoolkit=9.0 -c pytorch
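
You can optionally verify that the installed versions match:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"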

Then clone this repository:

git clone https://github.com/ZhaoyangLyu/POPQORN.git
cd POPQORN

Experiment 1: Quantify Robustness for Vanilla RNN

All experiments of this part will be conducted in the folder vanilla_rnn.

cd vanilla_rnn

Step 1: Train an RNN MNIST Classifier

We first train an RNN to classify images in the MNIST dataset. We cut each image into several slices and feed them to the RNN, treating each slice as the input at one time step. By default, we cut each 28 × 28 image into 7 slices, each of shape 4 × 28. For the RNN, we set the hidden size to 64 and use the tanh nonlinearity. Run the following command to train the RNN:

python train_rnn_mnist_classifier.py

Run

python train_rnn_mnist_classifier.py --cuda

if you want to enable GPU usage.

The trained model will be saved as POPQORN/models/mnist_classifier/rnn_7_64_tanh/rnn.

You can train different models by changing the number of slices each MNIST image is cut into, as well as the hidden size and nonlinearity of the RNN.
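
For reference, here is a minimal, hypothetical sketch (not taken from this repo) of how the slicing maps onto a PyTorch RNN; the shapes follow the defaults above.

import torch
import torch.nn as nn

# Defaults above: 7 slices per image, each of shape 4 x 28; hidden size 64; tanh.
time_steps, slice_height, width = 7, 4, 28
rnn = nn.RNN(input_size=slice_height * width, hidden_size=64,
             nonlinearity='tanh', batch_first=True)
classifier = nn.Linear(64, 10)  # 10 MNIST classes

image = torch.rand(1, 28, 28)                             # dummy MNIST image
slices = image.view(1, time_steps, slice_height * width)  # (batch, time, features)
_, h_n = rnn(slices)                                      # final hidden state, shape (1, 1, 64)
logits = classifier(h_n.squeeze(0))                       # class scores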

Step 2: Compute Certified Bound

For each input sequence, we compute a certified bound within which the output value of the true label is guaranteed to be larger than that of the target label; in other words, the targeted attack cannot succeed. Note that in this experiment we assume perturbations are uniform across input frames. Run

python bound_vanilla_rnn.py

to compute the certified bound. By default, the script will read the pretrained model POPQORN/models/mnist_classifier/rnn_7_64_tanh/rnn, randomly sample 100 images, randomly choose target labels, and then compute certified bounds for them (only for those images that are correctly classified by the pretrained model). Each certified bound is the radius of the l-p ball (2-norm by default) centered at the original image slice.

The script will print the minimum, mean, maximum and standard deviation of the computed certified bounds at the end. The complete result will be saved to POPQORN/models/mnist_classifier/rnn_7_64_tanh/2_norm_bound/certified_bound.
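
To make the semantics concrete, here is a hypothetical sanity check (not one of the released scripts; model is a stand-in for any classifier mapping a batch of inputs to logits). A certified radius eps guarantees a positive margin for every point in the ball; random sampling cannot prove this, but it illustrates what the certificate claims.

import torch

def check_margin(model, x, eps, true_label, target_label, trials=100):
    # Sample random points inside the l2 ball of radius eps around x and
    # check that the true-label logit stays above the target-label logit.
    for _ in range(trials):
        delta = torch.randn_like(x)
        delta = delta * (eps * torch.rand(()) / delta.norm())
        logits = model(x + delta)
        assert logits[0, true_label] > logits[0, target_label]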

You can compute bounds for different models by setting the values of the arguments work_dir and model_name. Also remember to set hidden_size, time_step, and activation accordingly to make them consistent with your specified pretrained model.
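
For example, to bound a hypothetical model trained with 14 slices, hidden size 128, and ReLU (the flag spellings are assumed to mirror the argument names above; check the script's --help):

python bound_vanilla_rnn.py --work_dir ../models/mnist_classifier/rnn_14_128_relu/ --model_name rnn --hidden_size 128 --time_step 14 --activation relu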

Experiment 2: Quantify Robustness for LSTM

All experiments of this part will be conducted in the folder lstm.

cd lstm

Step 1: Train an LSTM News Title Classifier

cd into the folder NewsTitleClassification:

cd NewsTitleClassification

We will train an LSTM classifier on the TagMyNews dataset, which consists of 32,567 English news items grouped into 7 categories: Sport, Business, U.S., Health, Sci&Tech, World, and Entertainment. Each news item has a title and a short description. We train an LSTM to classify the news items into the 7 categories according to their titles.

The news items have been preprocessed following the instructions in the GitHub repo https://github.com/isthegeek/News-Classification. The processed data is stored in the files test_data_pytorch.csv and train_data_pytorch.csv.

Before training the LSTM, run the following commands to install the dependencies.

pip install torchtext
pip install spacy
python -m spacy download en

Then run

python train_model.py

to train an LSTM classifier.

Run

python train_model.py --cuda

if you want to enable GPU usage.

The script will download the pretrained word embeddings glove.6B.100d. By default, the trained LSTM will be saved as POPQORN/models/news_title_classifier/lstm_hidden_size_32/lstm.

You can train different LSTMs by changing the hidden size of the LSTM.
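
For reference, a minimal sketch of how the glove.6B.100d vectors can be loaded with the classic torchtext API (current at the time of this release; newer torchtext versions may differ):

from torchtext.vocab import GloVe

glove = GloVe(name='6B', dim=100)   # downloads the vectors on first use
vec = glove['robustness']           # a 100-dimensional embedding vector
print(vec.shape)                    # torch.Size([100])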

Step 2: Compute Certified Bound

Next, for each news title, we compute the untargeted POPQORN bound on every individual word, and call the words with the minimal bounds the sensitive words.
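
As a hypothetical illustration (made-up numbers, not output of the scripts): with one certified radius per word, the sensitive word of a title is simply the one with the smallest bound.

import torch

title = ["stocks", "slide", "as", "oil", "prices", "surge"]     # made-up title
bound_eps = torch.tensor([0.12, 0.45, 0.90, 0.08, 0.30, 0.25])  # made-up per-word radii

sensitive = title[torch.argmin(bound_eps).item()]
print(sensitive)  # 'oil': the smallest certified bound marks the most sensitive word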

Go back to the folder lstm:

cd ..

Run

python bound_news_classification_lstm.py

to compute the certified bound. By default, the script will read the pretrained model POPQORN/models/news_title_classifier/lstm_hidden_size_32/lstm, randomly sample 128 news titles, and then compute certified l-2 balls for them (only for those titles that are correctly classified by the pretrained model). The script will save useful information to the directory POPQORN/models/news_title_classifier/lstm_hidden_size_32/2_norm: the sampled news titles will be saved as input, the complete log of the computation process will be saved as logs.txt, and the computed bounds will be saved as bound_eps.

By default, the script will use 2D planes to bound the 2D nonlinear activations in the LSTM. You can also use 1D lines or constants to bound the 2D nonlinear activations by running the following:

python bound_news_classification_lstm.py --use-1D-line

or

python bound_news_classification_lstm.py --use-constant
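
To see what these options trade off, here is a hypothetical sketch of the coarsest one: constant bounds for a coupled LSTM nonlinearity of the form f(x, y) = sigmoid(x) * tanh(y) over a box. Because f is monotone in each argument separately, its extrema over the box are attained at the four corners; 1-D lines and 2-D planes yield progressively tighter bounds than these constants.

import torch

def constant_bounds(x_l, x_u, y_l, y_u):
    # f(x, y) = sigmoid(x) * tanh(y) is monotone in each argument separately,
    # so its min/max over the box [x_l, x_u] x [y_l, y_u] occur at the corners.
    corners = torch.tensor([[x_l, y_l], [x_l, y_u], [x_u, y_l], [x_u, y_u]])
    vals = torch.sigmoid(corners[:, 0]) * torch.tanh(corners[:, 1])
    return vals.min().item(), vals.max().item()

print(constant_bounds(-1.0, 1.0, -0.5, 0.5))  # (lower, upper) constants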

You can also compute bounds for different models by setting the values of the arguments work_dir and model_name. Also remember to set hidden_size accordingly to make it consistent with your specified pretrained model.
