tulip - Scalable input gradient regularization
Stars: ✭ 19
procedural-advml - Task-agnostic universal black-box attacks on computer vision neural networks via procedural noise (CCS'19)
Stars: ✭ 47
Adversarial Robustness Toolbox - Adversarial Robustness Toolbox (ART), a Python library for machine learning security: evasion, poisoning, extraction, and inference attacks, for red and blue teams
Stars: ✭ 2,638
GROOT - [ICML 2021] A fast algorithm for fitting robust decision trees. http://proceedings.mlr.press/v139/vos21a.html
Stars: ✭ 15
ijcnn19attacks - Adversarial Attacks on Deep Neural Networks for Time Series Classification
Stars: ✭ 57
FeatureScatter - Feature Scattering Adversarial Training
Stars: ✭ 64
adversarial-attacks - Code for the CVPR 2018 paper "On the Robustness of Semantic Segmentation Models to Adversarial Attacks"
Stars: ✭ 90
translearn - Code for the paper "With Great Training Comes Great Vulnerability: Practical Attacks against Transfer Learning" (USENIX Security 2018)
Stars: ✭ 18
adversarial-code-generation - Source code for the ICLR 2021 paper "Generating Adversarial Computer Programs using Optimized Obfuscations"
Stars: ✭ 16
avc nips 2018 - Code to reproduce the attacks and defenses for the "JeromeR" entries in the NIPS 2018 Adversarial Vision Challenge
Stars: ✭ 18
adversarial-recommender-systems-survey - A two-fold survey: (i) recent advances in adversarial machine learning (AML) for the security of recommender systems (attacking and defending recommendation models), and (ii) applications of AML in generative adversarial networks (GANs) for generative tasks, thanks to their ability for learning (high-…
Stars: ✭ 110
adv-dnn-ens-malware - Adversarial examples, adversarial malware examples, adversarial malware detection, adversarial deep ensembles, and Android malware variants
Stars: ✭ 33
AdverseDrive - Attacking Vision-based Perception in End-to-End Autonomous Driving Models
Stars: ✭ 24
RobustTrees - [ICML 2019, 20-minute long talk] Robust Decision Trees Against Adversarial Examples
Stars: ✭ 62
Adversarial-Patch-Training - Code for the paper "Adversarial Training Against Location-Optimized Adversarial Patches" (ECCV-W 2020)
Stars: ✭ 30
ThermometerEncoding - Reproduction of "Thermometer Encoding: One Hot Way To Resist Adversarial Examples" in PyTorch
Stars: ✭ 15
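The thermometer trick named in that paper title can be sketched in a few lines. This is a hedged NumPy illustration of the encoding itself, not the repo's PyTorch code: each input value in [0, 1] is compared against evenly spaced thresholds and replaced by a cumulative one-hot vector, which discretizes the input that gradient-based attacks try to perturb smoothly.

```python
import numpy as np

def thermometer_encode(x, levels=4):
    """Cumulative one-hot ("thermometer") encoding of values in [0, 1].

    A value v is compared against `levels` evenly spaced thresholds;
    every threshold it meets or exceeds becomes a 1, so larger values
    light up longer prefixes of the code, like mercury in a thermometer.
    """
    thresholds = np.arange(1, levels + 1) / levels  # 0.25, 0.5, 0.75, 1.0 for levels=4
    return (x[..., None] >= thresholds).astype(np.float32)

pixels = np.array([0.0, 0.5, 1.0])
codes = thermometer_encode(pixels, levels=4)
# 0.0 -> [0, 0, 0, 0]; 0.5 -> [1, 1, 0, 0]; 1.0 -> [1, 1, 1, 1]
```

The number of levels (4 here) is an illustrative choice; the paper applies the encoding per pixel channel before the first network layer.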
athena - Athena: A Framework for Defending Machine Learning Systems Against Adversarial Attacks
Stars: ✭ 39
denoised-smoothing - Provably defending pretrained classifiers, including the Azure, Google, AWS, and Clarifai APIs
Stars: ✭ 82
rs4a - Randomized Smoothing of All Shapes and Sizes (ICML 2020)
Stars: ✭ 47
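Randomized smoothing, the technique the rs4a and denoised-smoothing entries build on, replaces a base classifier with the majority vote of its predictions under Gaussian input noise. A minimal NumPy sketch of that prediction rule (the toy classifier and parameter names here are illustrative, not from either repo; the real method additionally derives certified radii from the vote counts):

```python
import numpy as np

def smoothed_predict(base_classifier, x, sigma, n_samples=1000, seed=0):
    """Predict with the smoothed classifier: draw Gaussian perturbations
    of x and return the majority-vote class of the base classifier."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=(n_samples,) + x.shape)
    votes = np.array([base_classifier(x + n) for n in noise])
    return int(np.bincount(votes).argmax())

# Toy base classifier: class 1 iff the first coordinate is positive.
def base(x):
    return int(x[0] > 0)

x = np.array([2.0, -1.0])
label = smoothed_predict(base, x, sigma=0.5)
# With x[0] = 2 and sigma = 0.5, almost every noisy copy keeps x[0] > 0,
# so the vote is overwhelmingly class 1.
```

Larger sigma buys a larger certifiable radius at the cost of base-classifier accuracy on noisy inputs, which is exactly the trade-off the denoised-smoothing entry attacks by denoising before classification.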
jpeg-defense - SHIELD: Fast, Practical Defense and Vaccination for Deep Learning using JPEG Compression
Stars: ✭ 82
generative adversary - Code for the unrestricted adversarial examples paper (NeurIPS 2018)
Stars: ✭ 58
pre-training - Pre-Training Buys Better Robustness and Uncertainty Estimates (ICML 2019)
Stars: ✭ 90
synthesizing-robust-adversarial-examples - An entry for the ICLR 2018 Reproducibility Challenge for the paper "Synthesizing Robust Adversarial Examples" https://openreview.net/pdf?id=BJDH5M-AW
Stars: ✭ 60
FGSM-Keras - Implementation of the Fast Gradient Sign Method (FGSM) for generating adversarial examples in Keras
Stars: ✭ 43
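FGSM itself fits in one line: perturb the input by epsilon times the sign of the loss gradient with respect to the input. A self-contained NumPy sketch on a toy logistic-regression model (not the repo's Keras code; the weights and epsilon below are made up for illustration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, epsilon):
    """One FGSM step for binary logistic regression with cross-entropy loss.

    For this model the input gradient has the closed form
    d(loss)/dx = (sigmoid(w . x + b) - y) * w, so the adversarial
    example is simply x + epsilon * sign(gradient).
    """
    grad_x = (sigmoid(w @ x + b) - y) * w
    return x + epsilon * np.sign(grad_x)

w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])                    # w @ x + b = 1.5 > 0: classified as 1
x_adv = fgsm(x, y=1.0, w=w, b=b, epsilon=1.0)
# w @ x_adv + b = -1.5 < 0: the perturbed point is now classified as 0.
```

In a deep-learning framework the closed-form gradient is replaced by automatic differentiation of the loss with respect to the input tensor; the sign-and-scale step is identical.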
EAD Attack - EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples
Stars: ✭ 34
Fawkes - A privacy-preserving tool against facial recognition systems. More info at https://sandlab.cs.uchicago.edu/fawkes
Stars: ✭ 4,362
advrank - Adversarial Ranking Attack and Defense (ECCV 2020)
Stars: ✭ 19
AMR - Official implementation of the paper: Jinhui Tang, Xiaoyu Du, Xiangnan He, Fajie Yuan, Qi Tian, and Tat-Seng Chua, "Adversarial Training Towards Robust Multimedia Recommender System"
Stars: ✭ 30
backdoors101 - Backdoors Framework for Deep Learning and Federated Learning; a lightweight tool for conducting research on backdoors
Stars: ✭ 181
Foolbox - A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
Stars: ✭ 2,108
adaptive-segmentation-mask-attack - Pre-trained model, code, and materials from the paper "Impact of Adversarial Examples on Deep Learning Models for Biomedical Image Segmentation" (MICCAI 2019)
Stars: ✭ 50