
Haichao-Zhang / FeatureScatter

Licence: other
Feature Scattering Adversarial Training

Programming Languages

python
shell

Projects that are alternatives of or similar to FeatureScatter

Adversarial-Distributional-Training
Adversarial Distributional Training (NeurIPS 2020)
Stars: ✭ 52 (-18.75%)
Mutual labels:  adversarial-machine-learning, adversarial-training
Adversarial-Patch-Training
Code for the paper: Adversarial Training Against Location-Optimized Adversarial Patches. ECCV-W 2020.
Stars: ✭ 30 (-53.12%)
Mutual labels:  adversarial-machine-learning, adversarial-training
jpeg-defense
SHIELD: Fast, Practical Defense and Vaccination for Deep Learning using JPEG Compression
Stars: ✭ 82 (+28.13%)
Mutual labels:  defense, adversarial-machine-learning
EAD Attack
EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples
Stars: ✭ 34 (-46.87%)
Mutual labels:  defense, adversarial-machine-learning
gamechanger
GAMECHANGER aspires to be the Department’s trusted solution for evidence-based, data-driven decision-making across the universe of DoD requirements
Stars: ✭ 27 (-57.81%)
Mutual labels:  defense
athena
Athena: A Framework for Defending Machine Learning Systems Against Adversarial Attacks
Stars: ✭ 39 (-39.06%)
Mutual labels:  adversarial-machine-learning
Portforge
Lightweight utility to fool port scanners
Stars: ✭ 23 (-64.06%)
Mutual labels:  defense
perceptron-benchmark
Robustness benchmark for DNN models.
Stars: ✭ 61 (-4.69%)
Mutual labels:  adversarial-machine-learning
adversarial-code-generation
Source code for the ICLR 2021 work "Generating Adversarial Computer Programs using Optimized Obfuscations"
Stars: ✭ 16 (-75%)
Mutual labels:  adversarial-machine-learning
AdMRL
Code for paper "Model-based Adversarial Meta-Reinforcement Learning" (https://arxiv.org/abs/2006.08875)
Stars: ✭ 30 (-53.12%)
Mutual labels:  adversarial-training
procedural-advml
Task-agnostic universal black-box attacks on computer vision neural network via procedural noise (CCS'19)
Stars: ✭ 47 (-26.56%)
Mutual labels:  adversarial-machine-learning
cloudrasp-log4j2
A Runtime Application Self-Protection (RASP) module specifically designed to defend against the log4j2 RCE vulnerability (CVE-2021-44228).
Stars: ✭ 105 (+64.06%)
Mutual labels:  defense
AdverseDrive
Attacking Vision based Perception in End-to-end Autonomous Driving Models
Stars: ✭ 24 (-62.5%)
Mutual labels:  adversarial-machine-learning
denoised-smoothing
Provably defending pretrained classifiers including the Azure, Google, AWS, and Clarifai APIs
Stars: ✭ 82 (+28.13%)
Mutual labels:  adversarial-robustness
adversarial-recommender-systems-survey
The goal of this survey is two-fold: (i) to present recent advances on adversarial machine learning (AML) for the security of RS (i.e., attacking and defense recommendation models), (ii) to show another successful application of AML in generative adversarial networks (GANs) for generative applications, thanks to their ability for learning (high-…
Stars: ✭ 110 (+71.88%)
Mutual labels:  adversarial-machine-learning
TIGER
Python toolbox to evaluate graph vulnerability and robustness (CIKM 2021)
Stars: ✭ 103 (+60.94%)
Mutual labels:  defense
MSF-Self-Defence
Self defense post module for metasploit
Stars: ✭ 18 (-71.87%)
Mutual labels:  defense
KitanaQA
KitanaQA: Adversarial training and data augmentation for neural question-answering models
Stars: ✭ 58 (-9.37%)
Mutual labels:  adversarial-training
AWP
Codes for NeurIPS 2020 paper "Adversarial Weight Perturbation Helps Robust Generalization"
Stars: ✭ 114 (+78.13%)
Mutual labels:  adversarial-training
translearn
Code implementation of the paper "With Great Training Comes Great Vulnerability: Practical Attacks against Transfer Learning", at USENIX Security 2018
Stars: ✭ 18 (-71.87%)
Mutual labels:  adversarial-machine-learning

Feature Scattering Adversarial Training (NeurIPS 2019)

Introduction

This is the implementation of "Feature-Scattering Adversarial Training", a training method for improving model robustness against adversarial attacks. It generates adversarial perturbations with an unsupervised feature-scattering procedure, which is effective at overcoming label leaking and improving model robustness. More information can be found on the project page: https://sites.google.com/site/hczhang1/projects/feature_scattering
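To give a rough feel for the idea, below is a toy, pure-Python sketch of the perturbation-generation inner loop. It is NOT the paper's implementation: the actual method maximizes an optimal-transport distance between the feature distributions of the clean and perturbed batches through the network; here a scalar tanh "feature extractor" and a per-example squared feature distance stand in for both, and the `scatter_perturb` name and all parameters are illustrative.

```python
import math

def feature(x):
    # Toy stand-in for the network's feature extractor.
    return math.tanh(x)

def feature_grad(x):
    # Derivative of tanh, used for the ascent direction.
    return 1.0 - math.tanh(x) ** 2

def scatter_perturb(batch, eps=0.3, step=0.1, iters=5):
    """PGD-style ascent: push each input's features away from the clean
    batch's features (a per-example squared distance stands in for the
    paper's batch-level optimal-transport distance). No labels are used,
    which is what avoids label leaking."""
    clean_feats = [feature(x) for x in batch]
    adv = list(batch)
    for _ in range(iters):
        for i, x in enumerate(adv):
            # d/dx of (feature(x) - clean_feat_i)^2; ascend it.
            g = 2.0 * (feature(x) - clean_feats[i]) * feature_grad(x)
            x = x + step * (1.0 if g >= 0 else -1.0)   # sign-gradient step
            # Project back into the eps-ball around the clean input.
            x = max(batch[i] - eps, min(batch[i] + eps, x))
            adv[i] = x
    return adv
```

The perturbed batch returned by such a loop is then used as training input with the ordinary supervised loss, which is the outer step of feature-scattering adversarial training.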

Usage

Installation

The training environment (PyTorch and dependencies) can be installed as follows:

git clone https://github.com/Haichao-Zhang/FeatureScatter.git
cd FeatureScatter

python3 -m venv .venv
source .venv/bin/activate

python3 setup.py install

(or pip install -e .)

Tested under Python 3.5.2 and PyTorch 1.2.0.

Train

Specify the path for saving the trained models in fs_train.sh, and then run

sh ./fs_train.sh

Evaluate

Specify the path to the trained models to be evaluated in fs_eval.sh, and then run

sh ./fs_eval.sh

Reference Model

A reference model trained on CIFAR10 is here.

Cite

If you find this work useful, please cite:

@inproceedings{feature_scatter,
    author = {Haichao Zhang and Jianyu Wang},
    title  = {Defense Against Adversarial Attacks Using Feature Scattering-based Adversarial Training},
    booktitle = {Advances in Neural Information Processing Systems},
    year = {2019}
}

Contact

For questions related to feature-scattering, please send me an email: [email protected]
