
bethgelab / Foolbox

License: MIT
A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX

Programming Languages

Python
139,335 projects - #7 most used programming language
Jupyter Notebook
11,667 projects

Projects that are alternatives of or similar to Foolbox

procedural-advml
Task-agnostic universal black-box attacks on computer vision neural networks via procedural noise (CCS'19)
Stars: ✭ 47 (-97.77%)
Mutual labels:  adversarial-examples, adversarial-attacks
Adversarial Robustness Toolbox
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
Stars: ✭ 2,638 (+25.14%)
Mutual labels:  adversarial-examples, adversarial-attacks
ijcnn19attacks
Adversarial Attacks on Deep Neural Networks for Time Series Classification
Stars: ✭ 57 (-97.3%)
Mutual labels:  adversarial-examples, adversarial-attacks
generative adversary
Code for the unrestricted adversarial examples paper (NeurIPS 2018)
Stars: ✭ 58 (-97.25%)
Mutual labels:  adversarial-examples, adversarial-attacks
Adversarial-Examples-Paper
Paper list of Adversarial Examples
Stars: ✭ 20 (-99.05%)
Mutual labels:  adversarial-examples, adversarial-attacks
robust-ood-detection
Robust Out-of-distribution Detection in Neural Networks
Stars: ✭ 55 (-97.39%)
Mutual labels:  adversarial-attacks
jax-resnet
Implementations and checkpoints for ResNet, Wide ResNet, ResNeXt, ResNet-D, and ResNeSt in JAX (Flax).
Stars: ✭ 61 (-97.11%)
Mutual labels:  jax
brax
Massively parallel rigidbody physics simulation on accelerator hardware.
Stars: ✭ 1,208 (-42.69%)
Mutual labels:  jax
adversarial-vision-challenge
NIPS Adversarial Vision Challenge
Stars: ✭ 39 (-98.15%)
Mutual labels:  adversarial-examples
Flax
Flax is a neural network library for JAX that is designed for flexibility.
Stars: ✭ 2,447 (+16.08%)
Mutual labels:  jax
Pyprobml
Python code for "Machine learning: a probabilistic perspective" (2nd edition)
Stars: ✭ 4,197 (+99.1%)
Mutual labels:  jax
treeo
A small library for creating and manipulating custom JAX Pytree classes
Stars: ✭ 29 (-98.62%)
Mutual labels:  jax
advrank
Adversarial Ranking Attack and Defense, ECCV, 2020.
Stars: ✭ 19 (-99.1%)
Mutual labels:  adversarial-attacks
domain-shift-robustness
Code for the paper "Addressing Model Vulnerability to Distributional Shifts over Image Transformation Sets", ICCV 2019
Stars: ✭ 22 (-98.96%)
Mutual labels:  adversarial-attacks
SymJAX
Documentation:
Stars: ✭ 103 (-95.11%)
Mutual labels:  jax
Trax
Trax — Deep Learning with Clear Code and Speed
Stars: ✭ 6,666 (+216.22%)
Mutual labels:  jax
get-started-with-JAX
The purpose of this repo is to make it easy to get started with JAX, Flax, and Haiku. It contains my "Machine Learning with JAX" series of tutorials (YouTube videos and Jupyter Notebooks) as well as the content I found useful while learning about the JAX ecosystem.
Stars: ✭ 229 (-89.14%)
Mutual labels:  jax
adaptive-segmentation-mask-attack
Pre-trained model, code, and materials from the paper "Impact of Adversarial Examples on Deep Learning Models for Biomedical Image Segmentation" (MICCAI 2019).
Stars: ✭ 50 (-97.63%)
Mutual labels:  adversarial-examples
T3
[EMNLP 2020] "T3: Tree-Autoencoder Constrained Adversarial Text Generation for Targeted Attack" by Boxin Wang, Hengzhi Pei, Boyuan Pan, Qian Chen, Shuohang Wang, Bo Li
Stars: ✭ 25 (-98.81%)
Mutual labels:  adversarial-attacks
score flow
Official code for "Maximum Likelihood Training of Score-Based Diffusion Models", NeurIPS 2021 (spotlight)
Stars: ✭ 49 (-97.68%)
Mutual labels:  jax

Foolbox Native: Fast adversarial attacks to benchmark the robustness of machine learning models in PyTorch, TensorFlow, and JAX

Foolbox is a Python library that lets you easily run adversarial attacks against machine learning models like deep neural networks. It is built on top of EagerPy and works natively with models in PyTorch, TensorFlow, and JAX.

🔥 Design

Foolbox 3, a.k.a. Foolbox Native, has been rewritten from scratch using EagerPy instead of NumPy to achieve native performance on models developed in PyTorch, TensorFlow, and JAX, all with one code base and without code duplication.

  • Native Performance: Foolbox 3 is built on top of EagerPy, runs natively in PyTorch, TensorFlow, and JAX, and comes with real batch support.
  • State-of-the-art attacks: Foolbox provides a large collection of state-of-the-art gradient-based and decision-based adversarial attacks.
  • Type Checking: Catch bugs before running your code thanks to extensive type annotations in Foolbox.
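The gradient-based attacks mentioned above, such as `LinfPGD`, follow the projected-gradient-descent pattern: repeatedly step in the direction of the sign of the input gradient of the loss, then project back into the epsilon ball around the clean input. A toy NumPy sketch of that idea on a hypothetical logistic-regression "model" (illustrative only, not Foolbox's actual implementation):

```python
import numpy as np

# Toy L-infinity PGD sketch (illustrative only; Foolbox's LinfPGD differs).
# The "model" is a hypothetical logistic regression; for cross-entropy loss,
# the gradient with respect to the input x is w * (p - y).
rng = np.random.default_rng(0)
w = rng.normal(size=4)          # fixed model weights

def input_gradient(x, y):
    p = 1.0 / (1.0 + np.exp(-x @ w))   # predicted probability
    return w * (p - y)                 # d(loss)/dx

x0 = rng.normal(size=4)         # clean input
y = 1.0                         # true label
eps, step = 0.3, 0.05           # perturbation budget and step size

x = x0.copy()
for _ in range(20):
    x = x + step * np.sign(input_gradient(x, y))  # ascend the loss
    x = np.clip(x, x0 - eps, x0 + eps)            # project into the eps-ball

# the adversarial input stays within the L-infinity budget
assert np.max(np.abs(x - x0)) <= eps + 1e-12
```

Foolbox implements this loop (plus random starts, proper loss handling, and batching) natively for each framework via EagerPy.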

📖 Documentation

  • Guide: The best place to get started with Foolbox is the official guide.
  • Tutorial: If you are looking for a tutorial, check out this Jupyter notebook (also available on Colab).
  • Documentation: The API documentation can be found on ReadTheDocs.

🚀 Quickstart

pip install foolbox

Foolbox requires Python 3.6 or newer. To use it with PyTorch, TensorFlow, or JAX, the respective framework needs to be installed separately. These frameworks are not declared as dependencies because not everyone wants to install all of them, and because some of them ship different builds for different architectures and CUDA versions. All other essential dependencies are installed automatically.
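For example, a typical PyTorch setup might look like this (the commands are illustrative; the right build of `torch` depends on your platform and CUDA version):

```shell
pip install foolbox
# the deep learning framework is installed separately, e.g. for PyTorch:
pip install torch torchvision
```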

You can see the versions we currently use for testing in the Compatibility section below, but newer versions are in general expected to work.

🎉 Example

import foolbox as fb

model = ...  # your trained PyTorch model, e.g. in eval mode
fmodel = fb.PyTorchModel(model, bounds=(0, 1))

# images and labels are batches of inputs and ground-truth labels
attack = fb.attacks.LinfPGD()
epsilons = [0.0, 0.001, 0.01, 0.03, 0.1, 0.3, 0.5, 1.0]
_, advs, success = attack(fmodel, images, labels, epsilons=epsilons)

More examples can be found in the examples folder, e.g. a full ResNet-18 example.
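The returned `success` tensor has one row per epsilon and one column per input; a common next step is to turn it into a robust-accuracy curve (in Foolbox this is `1 - success.float32().mean(axis=-1)` on the EagerPy tensor). A minimal NumPy sketch of that computation on a hypothetical success matrix:

```python
import numpy as np

# hypothetical attack results: rows = epsilons, columns = inputs in the batch;
# True means the attack found an adversarial within that epsilon budget
success = np.array([
    [False, False, False, False],  # eps = 0.0
    [False, True,  False, False],  # eps = 0.01
    [True,  True,  False, True ],  # eps = 0.1
    [True,  True,  True,  True ],  # eps = 0.3
])

# robust accuracy per epsilon: fraction of inputs the attack did NOT fool
robust_accuracy = 1.0 - success.mean(axis=-1)
assert robust_accuracy.tolist() == [1.0, 0.75, 0.25, 0.0]
```

Plotting `robust_accuracy` against the epsilons gives the usual robustness curve: it starts at the clean accuracy and decreases as the perturbation budget grows.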

📄 Citation

If you use Foolbox for your work, please cite our JOSS paper on Foolbox Native and our ICML workshop paper on Foolbox using the following BibTeX entries:

@article{rauber2017foolboxnative,
  doi = {10.21105/joss.02607},
  url = {https://doi.org/10.21105/joss.02607},
  year = {2020},
  publisher = {The Open Journal},
  volume = {5},
  number = {53},
  pages = {2607},
  author = {Jonas Rauber and Roland Zimmermann and Matthias Bethge and Wieland Brendel},
  title = {Foolbox Native: Fast adversarial attacks to benchmark the robustness of machine learning models in PyTorch, TensorFlow, and JAX},
  journal = {Journal of Open Source Software}
}
@inproceedings{rauber2017foolbox,
  title={Foolbox: A Python toolbox to benchmark the robustness of machine learning models},
  author={Rauber, Jonas and Brendel, Wieland and Bethge, Matthias},
  booktitle={Reliable Machine Learning in the Wild Workshop, 34th International Conference on Machine Learning},
  year={2017},
  url={http://arxiv.org/abs/1707.04131},
}

👍 Contributions

We welcome contributions of all kinds; please have a look at our development guidelines. In particular, you are invited to contribute new adversarial attacks. If you would like to help, you can also have a look at the issues that are marked with contributions welcome.

💡 Questions?

If you have a question or need help, feel free to open an issue on GitHub. Once GitHub Discussions becomes publicly available, we will switch to that.

💨 Performance

Foolbox Native is much faster than Foolbox 1 and 2. A basic performance comparison can be found in the performance folder.

🐍 Compatibility

We currently test with the following versions:

  • PyTorch 1.4.0
  • TensorFlow 2.1.0
  • JAX 0.1.57
  • NumPy 1.18.1
Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].