
BardOfCodes / adv_summaries

Licence: other
Short Summaries for papers in Adversarial Attacks and Defenses. Linked to a related blog post:

Summary of Adversarial Works

This repository contains quick one-file summaries of CVPR papers that deal with adversarial attacks or defenses. The list might be incomplete! If you find a paper I missed, let me know!

This repository is linked to my blog post, where I draw the broad scope and direction of adversarial works at CVPR 2019 and briefly introduce a few of the interesting results.

TODO:

  1. Finish all CVPR 2019 papers.

  2. Extend to a gh-pages website.

Note: The adversarial works I am referring to here concern the robustness of deep neural networks, i.e., attacks and defenses. This repository is not about Generative Adversarial Networks.

CVPR 2017

  1. Universal Adversarial Perturbations, Seyed-Mohsen Moosavi-Dezfooli et al.

CVPR 2018

  1. Defense Against Universal Adversarial Perturbations

  2. Generative Adversarial Perturbations

  3. Art of Singular Vectors and Universal Adversarial Perturbations

  4. Deflecting Adversarial Attacks With Pixel Deflection

  5. On the Robustness of Semantic Segmentation Models to Adversarial Attacks

  6. Robust Physical-World Attacks on Deep Learning Visual Classification

  7. Defense Against Adversarial Attacks Using High-Level Representation Guided Denoiser

  8. Boosting Adversarial Attacks With Momentum

  9. NAG: Network for Adversary Generation

CVPR 2019

Defense-Based TODO:

  1. Adversarial Defense by Stratified Convolutional Sparse Coding

  2. Feature Denoising for Improving Adversarial Robustness

  3. Parametric Noise Injection: Trainable Randomness to Improve Deep Neural Network Robustness Against Adversarial Attack

  4. Feature Distillation: DNN-Oriented JPEG Compression Against Adversarial Examples

  5. Adversarial Defense Through Network Profiling Based Path Extraction

  6. Detection Based Defense Against Adversarial Examples From the Steganalysis Point of View

  7. ComDefend: An Efficient Image Compression Model to Defend Adversarial Examples

  8. Barrage of Random Transforms for Adversarially Robust Defense

  9. Defending Against Adversarial Attacks by Randomized Diversification

  10. A Kernelized Manifold Mapping to Diminish the Effect of Adversarial Perturbations

  11. ShieldNets: Defending Against Adversarial Attacks Using Probabilistic Adversarial Robustness

Attack-Based TODO:

  1. Trust Region Based Adversarial Attack on Neural Networks

  2. Curls & Whey: Boosting Black-Box Adversarial Attacks

  3. Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks

  4. SparseFool: a few pixels make a big difference

  5. Catastrophic Child's Play: Easy to Perform, Hard to Defend Adversarial Attacks

  6. Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses

  7. Feature Space Perturbations Yield More Transferable Adversarial Examples

Attacks on Different Tasks TODO:

  1. AIRD: Adversarial Learning Framework for Image Repurposing Detection

  2. Defense Against Adversarial Images using Web-Scale Nearest-Neighbor Search

  3. Efficient Decision-based Black-box Adversarial Attacks on Face Recognition

  4. Fooling automated surveillance cameras: adversarial patches to attack person detection

  5. Exact Adversarial Attack to Image Captioning via Structured Output Learning with Latent Variables

  6. Retrieval-Augmented Convolutional Neural Networks Against Adversarial Examples

Attacking 3D Data TODO:

  1. Adversarial Attacks Beyond the Image Space

  2. Generating 3D Adversarial Point Clouds

  3. Realistic Adversarial Examples in 3D Meshes

  4. MeshAdv: Adversarial Meshes for Visual Recognition

  5. Robustness of 3D Deep Learning in an Adversarial Setting

  6. Strike (With) a Pose: Neural Networks Are Easily Fooled by Strange Poses of Familiar Objects

On Explaining Adversarial Attacks & Defenses TODO:

  1. What Does It Mean to Learn in Deep Networks? And, How Does One Detect Adversarial Attacks?

  2. Disentangling Adversarial Robustness and Generalization

  3. Robustness via Curvature Regularization, and Vice Versa
