Nlpaug - Data augmentation for NLP
Stars: ✭ 2,761 (+3226.51%)
Mutual labels: adversarial-attacks
geometric adv - Geometric Adversarial Attacks and Defenses on 3D Point Clouds (3DV 2021)
Stars: ✭ 20 (-75.9%)
Mutual labels: adversarial-attacks
chop - CHOP: An optimization library based on PyTorch, with applications to adversarial examples and structured neural network training.
Stars: ✭ 68 (-18.07%)
Mutual labels: adversarial-attacks
Adversarial Robustness Toolbox (ART) - Python library for machine learning security: evasion, poisoning, extraction, and inference attacks, for red and blue teams
Stars: ✭ 2,638 (+3078.31%)
Mutual labels: adversarial-attacks
SimP-GCN - Implementation of the WSDM 2021 paper "Node Similarity Preserving Graph Convolutional Networks"
Stars: ✭ 43 (-48.19%)
Mutual labels: adversarial-attacks
AdvPC - Transferable Adversarial Perturbations on 3D Point Clouds (ECCV 2020)
Stars: ✭ 35 (-57.83%)
Mutual labels: adversarial-attacks
square-attack - Square Attack: a query-efficient black-box adversarial attack via random search (ECCV 2020)
Stars: ✭ 89 (+7.23%)
Mutual labels: adversarial-attacks
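The Square Attack entry above describes a random-search black-box attack: the attacker queries only the model's output score, never its gradients, and greedily keeps perturbations that push the score the wrong way. A heavily simplified sketch of that idea on a toy linear "model" (all names and the model here are illustrative, not the actual Square Attack implementation):

```python
import numpy as np

# Toy random-search black-box attack in the spirit of Square Attack
# (heavily simplified; the "model" and all names are made up for illustration).
rng = np.random.default_rng(0)

w = np.array([2.0, -1.0, 0.5])  # hidden model weights, unknown to the attacker

def score(x):
    """Black-box query: the (linear) model's margin toward class 1."""
    return float(x @ w)

def random_search_attack(x, eps, n_queries=200):
    """Keep random sign-flip perturbations on the L_inf ball that lower the score."""
    delta = eps * rng.choice([-1.0, 1.0], size=x.shape)  # random corner of the ball
    best = score(x + delta)
    for _ in range(n_queries):
        i = rng.integers(len(x))   # pick one coordinate (one "square", loosely)
        cand = delta.copy()
        cand[i] = -cand[i]         # propose flipping its sign
        s = score(x + cand)
        if s < best:               # accept only if the score decreases
            delta, best = cand, s
    return x + delta

x = np.array([1.0, 0.2, 0.3])
x_adv = random_search_attack(x, eps=0.8)
print(score(x), score(x_adv))  # the adversarial score should be lower
```

Because the objective is separable here, the greedy search converges once every coordinate has been tried; the real attack applies the same accept-if-better loop to square-shaped image patches under a query budget.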
FLAT - Fooling LiDAR by Attacking GPS Trajectory (ICCV 2021 Oral)
Stars: ✭ 52 (-37.35%)
Mutual labels: adversarial-attacks
generative adversary - Code for the unrestricted adversarial examples paper (NeurIPS 2018)
Stars: ✭ 58 (-30.12%)
Mutual labels: adversarial-attacks
hard-label-attack - Natural Language Attacks in a Hard Label Black Box Setting
Stars: ✭ 26 (-68.67%)
Mutual labels: adversarial-attacks
grb - Graph Robustness Benchmark: a scalable, unified, modular, and reproducible benchmark for evaluating the adversarial robustness of graph machine learning.
Stars: ✭ 70 (-15.66%)
Mutual labels: adversarial-attacks
Pro-GNN - Implementation of the KDD 2020 paper "Graph Structure Learning for Robust Graph Neural Networks"
Stars: ✭ 202 (+143.37%)
Mutual labels: adversarial-attacks
flowattack - Attacking Optical Flow (ICCV 2019)
Stars: ✭ 58 (-30.12%)
Mutual labels: adversarial-attacks
Foolbox - A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
Stars: ✭ 2,108 (+2439.76%)
Mutual labels: adversarial-attacks
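Toolboxes like Foolbox automate gradient-based attacks against real networks. The core idea behind the simplest such attack, FGSM, fits in a few lines: perturb the input in the sign direction of the loss gradient. A minimal NumPy sketch on a toy logistic-regression model (all names and values here are illustrative, not Foolbox's API):

```python
import numpy as np

# Minimal FGSM-style sketch on a toy logistic-regression model.
# Everything here is illustrative; real toolboxes handle deep networks.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One FGSM step: move x by eps in the sign of the loss gradient."""
    p = sigmoid(x @ w + b)   # predicted probability of class 1
    grad_x = (p - y) * w     # d(cross-entropy loss)/dx for logistic regression
    return x + eps * np.sign(grad_x)

w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])   # correctly classified as class 1
y = 1.0

x_adv = fgsm(x, y, w, b, eps=0.9)
print(sigmoid(x @ w + b) > 0.5)      # original prediction: True (class 1)
print(sigmoid(x_adv @ w + b) > 0.5)  # adversarial prediction: False (flipped)
```

The large eps is chosen so the flip is visible on this two-dimensional toy; on images, much smaller per-pixel budgets suffice because the gradient sign is applied across thousands of dimensions.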
trojanzoo - TrojanZoo provides a universal PyTorch platform for conducting security research (especially backdoor attacks/defenses) on image classification in deep learning.
Stars: ✭ 178 (+114.46%)
Mutual labels: adversarial-attacks
T3 - "T3: Tree-Autoencoder Constrained Adversarial Text Generation for Targeted Attack" (EMNLP 2020) by Boxin Wang, Hengzhi Pei, Boyuan Pan, Qian Chen, Shuohang Wang, Bo Li
Stars: ✭ 25 (-69.88%)
Mutual labels: adversarial-attacks
POPQORN - An algorithm to quantify the robustness of recurrent neural networks
Stars: ✭ 44 (-46.99%)
Mutual labels: adversarial-attacks
AWP - Code for the NeurIPS 2020 paper "Adversarial Weight Perturbation Helps Robust Generalization"
Stars: ✭ 114 (+37.35%)
Mutual labels: adversarial-attacks
procedural-advml - Task-agnostic universal black-box attacks on computer vision neural networks via procedural noise (CCS '19)
Stars: ✭ 47 (-43.37%)
Mutual labels: adversarial-attacks
KitanaQA - Adversarial training and data augmentation for neural question-answering models
Stars: ✭ 58 (-30.12%)
Mutual labels: adversarial-attacks