
nebula-beta / Torchadver

A PyTorch Toolbox for creating adversarial examples that fool neural networks.

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to Torchadver

Toolbox
Phodal's Toolbox
Stars: ✭ 873 (+892.05%)
Mutual labels:  toolbox
Spellbook
Micro-framework for rapid development of reusable security tools
Stars: ✭ 53 (-39.77%)
Mutual labels:  toolbox
Cogitare
🔥 Cogitare - A Modern, Fast, and Modular Deep Learning and Machine Learning framework for Python
Stars: ✭ 73 (-17.05%)
Mutual labels:  toolbox
Dlt
Deep Learning Toolbox for Torch
Stars: ✭ 20 (-77.27%)
Mutual labels:  toolbox
Ugfraud
An Unsupervised Graph-based Toolbox for Fraud Detection
Stars: ✭ 38 (-56.82%)
Mutual labels:  toolbox
Qtools
QTools collection of open source tools for embedded systems development on Windows, Linux and MacOS
Stars: ✭ 64 (-27.27%)
Mutual labels:  toolbox
Delving Deep Into Gans
Generative Adversarial Networks (GANs) resources sorted by citations
Stars: ✭ 834 (+847.73%)
Mutual labels:  adversarial-networks
Xcore
A collection of hundreds of Swift extensions and components designed to minimize boilerplate to accomplish common tasks with ease.
Stars: ✭ 84 (-4.55%)
Mutual labels:  toolbox
Pimcore Toolbox
Pimcore - Toolbox
Stars: ✭ 46 (-47.73%)
Mutual labels:  toolbox
Man
Multinomial Adversarial Networks for Multi-Domain Text Classification (NAACL 2018)
Stars: ✭ 72 (-18.18%)
Mutual labels:  adversarial-networks
Oc Toolbox Plugin
🧰 Toolbox plugin for October CMS
Stars: ✭ 33 (-62.5%)
Mutual labels:  toolbox
Tofu
Project for an open-source python library for synthetic diagnostics and tomography for Fusion devices
Stars: ✭ 35 (-60.23%)
Mutual labels:  toolbox
Omatsuri
PWA with 12 open source frontend focused tools
Stars: ✭ 1,131 (+1185.23%)
Mutual labels:  toolbox
Uncertainty Toolbox
A python toolbox for predictive uncertainty quantification, calibration, metrics, and visualization
Stars: ✭ 880 (+900%)
Mutual labels:  toolbox
Inverse rl
Adversarial Imitation Via Variational Inverse Reinforcement Learning
Stars: ✭ 79 (-10.23%)
Mutual labels:  adversarial-networks
Xinshuo pytoolbox
A Python toolbox that contains common help functions for stream I/O, image & video processing, and visualization. All my projects depend on this toolbox.
Stars: ✭ 25 (-71.59%)
Mutual labels:  toolbox
Robust Adv Malware Detection
Code repository for the paper "Adversarial Deep Learning for Robust Detection of Binary Encoded Malware"
Stars: ✭ 63 (-28.41%)
Mutual labels:  adversarial-networks
Stable Baselines3
PyTorch version of Stable Baselines, reliable implementations of reinforcement learning algorithms.
Stars: ✭ 1,263 (+1335.23%)
Mutual labels:  toolbox
Krypton Net 5.470
An update to Component Factory's Krypton Toolkit to support the .NET 4.7 framework.
Stars: ✭ 79 (-10.23%)
Mutual labels:  toolbox
Andes
Python toolbox / library for power system transient dynamics simulation with symbolic modeling and numerical analysis 🔥
Stars: ✭ 68 (-22.73%)
Mutual labels:  toolbox

Introduction

torchadver is a PyTorch toolbox for generating adversarial images. The basic adversarial attacks are implemented, such as FGSM, I-FGSM, MI-FGSM, M-DI-FGSM, C&W, etc.
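
For reference, all of these methods build on the basic FGSM update: take the gradient of the loss with respect to the input and step in its sign direction. A minimal, self-contained sketch of that core step (independent of the torchadver API, assuming inputs in [0, 1]) looks like this:

import torch
import torch.nn as nn

def fgsm(model, images, labels, eps):
    # Treat the batch as a leaf tensor so gradients flow back to the pixels.
    images = images.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(images), labels)
    loss.backward()
    # Single signed-gradient step of size eps, clamped to the valid pixel range.
    adv_images = images + eps * images.grad.sign()
    return adv_images.clamp(0.0, 1.0).detach()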

Installation

How to Use

The brief attack process is shown below. For a more detailed walkthrough, refer to ./examples/toturial.py.

Generate adversarial images under an L2 norm constraint

Non-targeted attack

import torch.nn as nn
from torchadver.attacker.iterative_gradient_attack import FGM_L2, I_FGM_L2, MI_FGM_L2, M_DI_FGM_L2


mean = [0.5, 0.5, 0.5]
std = [0.5, 0.5, 0.5]

# images normalized by mean and std
images, labels = ...
model = ...

# mean and std are used to determine the valid pixel range of each image channel.
attacker = FGM_L2(model, loss_fn=nn.CrossEntropyLoss(),
				  mean=mean, std=std, 
				  max_norm=4.0, # L2 norm bound
				  random_init=True)

# for non-targeted attack
adv_images = attacker.attack(images, labels) # or adv_images = attacker.attack(images)
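
The model and data placeholders above can be any classifier and any batch normalized with the same mean/std passed to the attacker. Purely as an illustration (not part of torchadver), a torchvision model and a dummy batch could stand in like this:

import torch
import torchvision.models as models
import torchvision.transforms as T

# Stand-in classifier; in practice load a trained model whose training
# normalization matches the mean/std given to the attacker.
model = models.resnet18().eval()

# Dummy batch normalized with the same mean/std used by the attacker.
normalize = T.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
images = normalize(torch.rand(8, 3, 224, 224))  # stand-in for real images
labels = torch.randint(0, 1000, (8,))           # stand-in for real labels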

Targeted attack

import torch.nn as nn
from torchadver.attacker.iterative_gradient_attack import FGM_L2, I_FGM_L2, MI_FGM_L2, M_DI_FGM_L2


mean = [0.5, 0.5, 0.5]
std = [0.5, 0.5, 0.5]

# images normalized by mean and std
images, labels = ...
model = ...
targeted_labels = ...

# mean and std are used to determine the valid pixel range of each image channel.
attacker = FGM_L2(model, loss_fn=nn.CrossEntropyLoss(),
				  mean=mean, std=std,
				  max_norm=4.0, # L2 norm bound
				  random_init=True, targeted=True)

# for targeted attack
adv_images = attacker.attack(images, targeted_labels)
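
To check whether an attack succeeded, a simple (illustrative) evaluation is to compare the model's predictions on the adversarial images against the original labels (non-targeted) or the chosen target labels (targeted):

import torch

with torch.no_grad():
    preds = model(adv_images).argmax(dim=1)

# Non-targeted attack succeeds when the prediction no longer matches the true label.
non_targeted_success = (preds != labels).float().mean().item()

# Targeted attack succeeds when the prediction matches the chosen target label.
targeted_success = (preds == targeted_labels).float().mean().item()

print(f"non-targeted success rate: {non_targeted_success:.2%}")
print(f"targeted success rate: {targeted_success:.2%}")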

Generate adversarial images under an Linf norm constraint

Non-targeted attack

import torch.nn as nn
from torchadver.attacker.iterative_gradient_attack import FGM_LInf, I_FGM_LInf, MI_FGM_LInf, M_DI_FGM_LInf


mean = [0.5, 0.5, 0.5]
std = [0.5, 0.5, 0.5]

# images normalized by mean and std
images, labels = ...
model = ...

# mean and std are used to determine the valid pixel range of each image channel.
attacker = FGM_LInf(model, loss_fn=nn.CrossEntropyLoss(),
				 mean=mean, std=std,
				 max_norm=0.1, # Linf norm bound
				 random_init=True)

# for non-targeted attack
adv_images = attacker.attack(images, labels) # or adv_images = attacker.attack(images)

Targeted attack

import torch.nn as nn
from torchadver.attacker.iterative_gradient_attack import FGM_LInf, I_FGM_LInf, MI_FGM_LInf, M_DI_FGM_LInf


mean = [0.5, 0.5, 0.5]
std = [0.5, 0.5, 0.5]

# images normalized by mean and std
images, labels = ...
model = ...
targeted_labels = ...

# mean and std are used to determine the valid pixel range of each image channel.
attacker = FGM_LInf(model, loss_fn=nn.CrossEntropyLoss(),
				 mean=mean, std=std,
				 max_norm=0.1, # Linf norm bound
				 random_init=True, targeted=True)

# for targeted attack
adv_images = attacker.attack(images, targeted_labels)
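
Because the attacker works on normalized tensors (the mean/std arguments define the valid per-channel range), the adversarial images it returns are still in normalized space. A small illustrative helper (not part of torchadver) can map them back to [0, 1] pixel space for saving or visualization:

import torch
from torchvision.utils import save_image

def denormalize(x, mean, std):
    # Undo channel-wise normalization: pixel = normalized * std + mean.
    mean = torch.tensor(mean, device=x.device).view(1, -1, 1, 1)
    std = torch.tensor(std, device=x.device).view(1, -1, 1, 1)
    return (x * std + mean).clamp(0.0, 1.0)

adv_pixels = denormalize(adv_images, mean, std)
save_image(adv_pixels, "adv_examples.png")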

Citations

For more information about adversarial attacks on deep learning, refer to awesome-adversarial-deep-learning.
