ijcnn19attacks: Adversarial Attacks on Deep Neural Networks for Time Series Classification
Stars: ✭ 57 (+185%)
procedural-advml: Task-agnostic universal black-box attacks on computer vision neural networks via procedural noise (CCS '19)
Stars: ✭ 47 (+135%)
generative adversary: Code for the unrestricted adversarial examples paper (NeurIPS 2018)
Stars: ✭ 58 (+190%)
Adversarial Robustness Toolbox: Adversarial Robustness Toolbox (ART), a Python library for machine learning security covering evasion, poisoning, extraction, and inference, for red and blue teams
Stars: ✭ 2,638 (+13090%)
Foolbox: A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
Stars: ✭ 2,108 (+10440%)
adaptive-segmentation-mask-attack: Pre-trained model, code, and materials from the paper "Impact of Adversarial Examples on Deep Learning Models for Biomedical Image Segmentation" (MICCAI 2019)
Stars: ✭ 50 (+150%)
nn robustness analysis: Python tools for analyzing the robustness properties of neural networks (NNs), from MIT ACL
Stars: ✭ 36 (+80%)
advrank: Adversarial Ranking Attack and Defense (ECCV 2020)
Stars: ✭ 19 (-5%)
DiagnoseRE: Source code and dataset for the CCKS 2021 paper "On Robustness and Bias Analysis of BERT-based Relation Extraction"
Stars: ✭ 23 (+15%)
tulip: Scalable input gradient regularization
Stars: ✭ 19 (-5%)
perceptual-advex: Code and data for the ICLR 2021 paper "Perceptual Adversarial Robustness: Defense Against Unseen Threat Models"
Stars: ✭ 44 (+120%)
GROOT: [ICML 2021] A fast algorithm for fitting robust decision trees. http://proceedings.mlr.press/v139/vos21a.html
Stars: ✭ 15 (-25%)
s-attack: [CVPR 2022] The S-attack library; official implementation of the papers "Vehicle trajectory prediction works, but not everywhere" and "Are socially-aware trajectory prediction models really socially-aware?"
Stars: ✭ 51 (+155%)
Attack-ImageNet: Second-place solution to the Tianchi ImageNet Adversarial Attack Challenge
Stars: ✭ 41 (+105%)
code-soup: A collection of algorithms and approaches from the book Adversarial Deep Learning
Stars: ✭ 18 (-10%)
adversarial-attacks: Code for the CVPR 2018 paper "On the Robustness of Semantic Segmentation Models to Adversarial Attacks"
Stars: ✭ 90 (+350%)
sparse-rs: Sparse-RS, a versatile framework for query-efficient sparse black-box adversarial attacks
Stars: ✭ 24 (+20%)
avc nips 2018: Code to reproduce the attacks and defenses for the "JeromeR" entries in the NIPS 2018 Adversarial Vision Challenge
Stars: ✭ 18 (-10%)
adversarial-recommender-systems-survey: The goal of this survey is twofold: (i) to present recent advances in adversarial machine learning (AML) for the security of recommender systems (i.e., attacking and defending recommendation models), and (ii) to show another successful application of AML in generative adversarial networks (GANs) for generative applications, thanks to their ability to learn (high-…
Stars: ✭ 110 (+450%)
TIGER: Python toolbox to evaluate graph vulnerability and robustness (CIKM 2021)
Stars: ✭ 103 (+415%)
adv-dnn-ens-malware: Adversarial examples, adversarial malware examples, adversarial malware detection, adversarial deep ensembles, and Android malware variants
Stars: ✭ 33 (+65%)
PGD-pytorch: A PyTorch implementation of "Towards Deep Learning Models Resistant to Adversarial Attacks"
Stars: ✭ 83 (+315%)
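The paper this repo implements introduced the standard L-infinity PGD attack: repeatedly step in the sign of the loss gradient, then project back into an eps-ball around the clean input. A minimal pure-Python sketch of that loop, using a toy quadratic loss and analytic gradient as a stand-in for a real model (both are illustrative assumptions, not the repo's API):

```python
def pgd_attack(x, grad_fn, eps=0.3, alpha=0.05, steps=10):
    """L-infinity PGD: alpha-sized gradient-sign ascent steps on the loss,
    each followed by projection into the eps-ball around the clean input."""
    x0 = list(x)          # clean input (center of the eps-ball)
    x_adv = list(x)
    for _ in range(steps):
        g = grad_fn(x_adv)
        # ascend the loss in the direction of the gradient sign
        x_adv = [xi + alpha * ((gi > 0) - (gi < 0))
                 for xi, gi in zip(x_adv, g)]
        # project each coordinate back into [x0 - eps, x0 + eps]
        x_adv = [min(max(xi, x0i - eps), x0i + eps)
                 for xi, x0i in zip(x_adv, x0)]
    return x_adv

# toy "loss" L(x) = sum(x_i^2), so dL/dx = 2x (stands in for a model loss)
adv = pgd_attack([0.5, -0.2], lambda x: [2 * xi for xi in x])
```

With these settings each coordinate walks to the boundary of the eps-ball, which is exactly the behavior PGD exploits against real models.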
AWP: Code for the NeurIPS 2020 paper "Adversarial Weight Perturbation Helps Robust Generalization"
Stars: ✭ 114 (+470%)
FLAT: [ICCV 2021 Oral] Fooling LiDAR by Attacking GPS Trajectory
Stars: ✭ 52 (+160%)
trojanzoo: TrojanZoo provides a universal PyTorch platform for security research on image classification in deep learning, especially backdoor attacks and defenses
Stars: ✭ 178 (+790%)
chop: CHOP, an optimization library based on PyTorch, with applications to adversarial examples and structured neural network training
Stars: ✭ 68 (+240%)
hard-label-attack: Natural language attacks in a hard-label black-box setting
Stars: ✭ 26 (+30%)
RobustTrees: [ICML 2019, 20-minute long talk] Robust Decision Trees Against Adversarial Examples
Stars: ✭ 62 (+210%)
KitanaQA: Adversarial training and data augmentation for neural question-answering models
Stars: ✭ 58 (+190%)
flowattack: Attacking Optical Flow (ICCV 2019)
Stars: ✭ 58 (+190%)
denoised-smoothing: Provably defending pretrained classifiers, including the Azure, Google, AWS, and Clarifai APIs
Stars: ✭ 82 (+310%)
rs4a: Randomized Smoothing of All Shapes and Sizes (ICML 2020)
Stars: ✭ 47 (+135%)
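Randomized smoothing, the technique behind rs4a and denoised-smoothing above, certifies a classifier by voting over many noised copies of the input. A minimal sketch of the prediction step (the toy base classifier and the function name `smoothed_predict` are illustrative assumptions, not the repo's API):

```python
import random
from collections import Counter

def smoothed_predict(classify, x, sigma=0.5, n=1000, seed=0):
    """Randomized-smoothing prediction: query the base classifier on n
    Gaussian-noised copies of x and return the majority-vote class."""
    rng = random.Random(seed)   # fixed seed for reproducibility
    votes = Counter()
    for _ in range(n):
        noisy = [xi + rng.gauss(0.0, sigma) for xi in x]
        votes[classify(noisy)] += 1
    return votes.most_common(1)[0][0]

# toy base classifier: predicts class 1 iff the feature sum is positive
label = smoothed_predict(lambda z: int(sum(z) > 0), [0.4, 0.3])
```

The real method additionally turns the vote margin into a certified L2 radius; this sketch covers only the majority-vote prediction.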
AdvPC: Transferable Adversarial Perturbations on 3D Point Clouds (ECCV 2020)
Stars: ✭ 35 (+75%)
POPQORN: An algorithm to quantify the robustness of recurrent neural networks
Stars: ✭ 44 (+120%)
geometric adv: Geometric Adversarial Attacks and Defenses on 3D Point Clouds (3DV 2021)
Stars: ✭ 20 (+0%)
SimP-GCN: Implementation of the WSDM 2021 paper "Node Similarity Preserving Graph Convolutional Networks"
Stars: ✭ 43 (+115%)
Pro-GNN: Implementation of the KDD 2020 paper "Graph Structure Learning for Robust Graph Neural Networks"
Stars: ✭ 202 (+910%)
grb: Graph Robustness Benchmark, a scalable, unified, modular, and reproducible benchmark for evaluating the adversarial robustness of graph machine learning
Stars: ✭ 70 (+250%)
pre-training: Pre-Training Buys Better Robustness and Uncertainty Estimates (ICML 2019)
Stars: ✭ 90 (+350%)
FGSM-Keras: Implementation of the Fast Gradient Sign Method for generating adversarial examples in Keras
Stars: ✭ 43 (+115%)
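FGSM is the single-step special case of the PGD loop: one eps-sized step in the direction of the loss-gradient sign. A framework-free sketch, with the gradient supplied explicitly (in Keras it would come from the model; the values here are illustrative):

```python
def fgsm(x, grad, eps=0.1):
    """Fast Gradient Sign Method: perturb each coordinate by eps in the
    direction of the corresponding loss-gradient sign."""
    sign = lambda g: (g > 0) - (g < 0)   # -1, 0, or +1
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# toy input and a gradient as it might come out of a model's loss
adv = fgsm([0.2, -0.4, 0.0], [1.5, -0.3, 0.0], eps=0.1)
# adv is approximately [0.3, -0.5, 0.0]
```

Only the sign of the gradient matters, which is why the resulting perturbation always sits on a corner of the L-infinity eps-ball.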
Nlpaug: Data augmentation for NLP
Stars: ✭ 2,761 (+13705%)
T3: [EMNLP 2020] "T3: Tree-Autoencoder Constrained Adversarial Text Generation for Targeted Attack" by Boxin Wang, Hengzhi Pei, Boyuan Pan, Qian Chen, Shuohang Wang, and Bo Li
Stars: ✭ 25 (+25%)
square-attack: Square Attack, a query-efficient black-box adversarial attack via random search (ECCV 2020)
Stars: ✭ 89 (+345%)
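Unlike the gradient-based attacks above, Square Attack is black-box: it only queries loss values and accepts a random perturbation when the loss improves. A heavily simplified sketch of that accept/reject random-search loop, perturbing one coordinate at a time instead of square-shaped image patches, with a toy loss standing in for model queries (all names and values here are illustrative assumptions):

```python
import random

def random_search_attack(loss, x, eps=0.3, iters=200, seed=0):
    """Query-only random-search attack sketch: propose setting one
    coordinate to x_i +/- eps, keep the candidate only if the queried
    loss increases. No gradients are used, just loss-value queries."""
    rng = random.Random(seed)
    x_adv = list(x)
    best = loss(x_adv)
    for _ in range(iters):
        i = rng.randrange(len(x))
        cand = list(x_adv)
        cand[i] = x[i] + rng.choice([-eps, eps])  # corner of the eps-ball
        val = loss(cand)
        if val > best:                 # accept only strict improvements
            best, x_adv = val, cand
    return x_adv, best

# toy loss standing in for the model's error on the true label
loss = lambda z: sum(v * v for v in z)
x_adv, best = random_search_attack(loss, [0.1, -0.1])
```

The real attack proposes localized square patches and schedules their size, but the accept-if-loss-increases structure is the same.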
domain-shift-robustness: Code for the ICCV 2019 paper "Addressing Model Vulnerability to Distributional Shifts over Image Transformation Sets"
Stars: ✭ 22 (+10%)