
kenny-co / procedural-advml

License: MIT
Task-agnostic universal black-box attacks on computer vision neural networks via procedural noise (CCS '19)

Programming Languages

Jupyter Notebook
Python

Projects that are alternatives to or similar to procedural-advml

Adversarial Robustness Toolbox
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
Stars: ✭ 2,638 (+5512.77%)
Mutual labels:  adversarial-machine-learning, adversarial-examples, adversarial-attacks
sparksl-noise
minimum proof of concept about procedural noise generation in SparkAR's shader language (SparkSL).
Stars: ✭ 16 (-65.96%)
Mutual labels:  noise, procedural-noise-functions, procedural-noise
Foolbox
A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
Stars: ✭ 2,108 (+4385.11%)
Mutual labels:  adversarial-examples, adversarial-attacks
adversarial-recommender-systems-survey
The goal of this survey is two-fold: (i) to present recent advances on adversarial machine learning (AML) for the security of RS (i.e., attacking and defense recommendation models), (ii) to show another successful application of AML in generative adversarial networks (GANs) for generative applications, thanks to their ability for learning (high-…
Stars: ✭ 110 (+134.04%)
Mutual labels:  adversarial-machine-learning, adversarial-attacks
awesome-machine-learning-reliability
A curated list of awesome resources regarding machine learning reliability.
Stars: ✭ 31 (-34.04%)
Mutual labels:  adversarial-machine-learning, adversarial-examples
square-attack
Square Attack: a query-efficient black-box adversarial attack via random search [ECCV 2020]
Stars: ✭ 89 (+89.36%)
Mutual labels:  adversarial-attacks, black-box-attacks
ijcnn19attacks
Adversarial Attacks on Deep Neural Networks for Time Series Classification
Stars: ✭ 57 (+21.28%)
Mutual labels:  adversarial-examples, adversarial-attacks
tulip
Scalable input gradient regularization
Stars: ✭ 19 (-59.57%)
Mutual labels:  adversarial-machine-learning, adversarial-examples
robust-local-lipschitz
A Closer Look at Accuracy vs. Robustness
Stars: ✭ 75 (+59.57%)
Mutual labels:  adversarial-machine-learning, adversarial-examples
UE4-Noise-BlueprintLibrary
UE4 plugin: Noise Blueprint Function Library
Stars: ✭ 25 (-46.81%)
Mutual labels:  noise, perlin-noise
Noisy-Nodes
Adds various noise generation nodes to Unity Shader Graph, including 3D noise nodes.
Stars: ✭ 186 (+295.74%)
Mutual labels:  noise, perlin-noise
domain-shift-robustness
Code for the paper "Addressing Model Vulnerability to Distributional Shifts over Image Transformation Sets", ICCV 2019
Stars: ✭ 22 (-53.19%)
Mutual labels:  adversarial-attacks, black-box-attacks
sparse-rs
Sparse-RS: a versatile framework for query-efficient sparse black-box adversarial attacks
Stars: ✭ 24 (-48.94%)
Mutual labels:  adversarial-attacks, black-box-attacks
Adversarial-Examples-Paper
Paper list of Adversarial Examples
Stars: ✭ 20 (-57.45%)
Mutual labels:  adversarial-examples, adversarial-attacks
advrank
Adversarial Ranking Attack and Defense, ECCV, 2020.
Stars: ✭ 19 (-59.57%)
Mutual labels:  adversarial-machine-learning, adversarial-attacks
generative adversary
Code for the unrestricted adversarial examples paper (NeurIPS 2018)
Stars: ✭ 58 (+23.4%)
Mutual labels:  adversarial-examples, adversarial-attacks
perlin-toolkit
Animated perlin noise textures
Stars: ✭ 15 (-68.09%)
Mutual labels:  noise, perlin-noise
awesome-secure-computation
Awesome list for cryptographic secure computation paper. This repo includes *Lattice*, *DifferentialPrivacy*, *MPC* and also a comprehensive summary for top conferences.
Stars: ✭ 125 (+165.96%)
Mutual labels:  papers
Filament
Interactive Music Visualizer
Stars: ✭ 22 (-53.19%)
Mutual labels:  noise
rs4a
Randomized Smoothing of All Shapes and Sizes (ICML 2020).
Stars: ✭ 47 (+0%)
Mutual labels:  adversarial-examples

Procedural Noise UAPs

This repository contains sample code and interactive Jupyter notebooks for the paper "Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Convolutional Networks" (CCS 2019).

In this work, we show that Universal Adversarial Perturbations (UAPs) can be generated with procedural noise functions without any knowledge of the target model. Procedural noise functions are fast and lightweight methods for generating textures in computer graphics, which makes them well suited to low-cost black-box attacks on deep convolutional networks for computer vision tasks.
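To make the idea concrete, here is a minimal sketch (our own illustration, not the repository's code) of a Perlin-noise UAP: generate a noise pattern, scale it to an L-infinity budget, and add the same perturbation to any input image. It assumes the third-party noise package (pip install noise) for pnoise2, and all parameter values are illustrative.

  import numpy as np
  import noise  # assumed dependency: pip install noise

  def perlin_perturbation(size=224, freq=1 / 32.0, octaves=4, eps=16 / 255.0):
      """Return a (size, size) Perlin-noise perturbation bounded by eps."""
      grid = np.array([[noise.pnoise2(x * freq, y * freq, octaves=octaves)
                        for x in range(size)] for y in range(size)])
      grid /= np.abs(grid).max() + 1e-12  # normalize to roughly [-1, 1]
      return eps * grid                   # scale to the L-infinity budget

  def apply_uap(image, delta):
      """Add the same (universal) perturbation to an image in [0, 1]."""
      return np.clip(image + delta[..., None], 0.0, 1.0)  # broadcast over RGB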

We encourage you to explore our Python notebooks and make your own adversarial examples:

  1. intro_bopt.ipynb shows how Bayesian optimization can find better parameters for the procedural noise functions (a sketch of this loop appears after this list).

  2. intro_gabor.ipynb gives a brief introduction to Gabor noise (also sketched after this list).

  3. slider_gabor.ipynb and slider_perlin.ipynb let you visualize and interactively adjust the noise parameters to see how they affect model predictions.
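For orientation, the first sketch below is our own simplified take (not the notebooks' implementation) on Gabor noise as sparse convolution: random impulses are scattered over the image plane and convolved with a single Gabor kernel, i.e. a Gaussian envelope multiplied by an oriented cosine.

  import numpy as np
  from scipy.signal import fftconvolve

  def gabor_kernel(ksize=23, sigma=4.0, freq=1 / 8.0, theta=np.pi / 4):
      """Gabor kernel: Gaussian envelope times an oriented cosine carrier."""
      half = ksize // 2
      y, x = np.mgrid[-half:half + 1, -half:half + 1]
      envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
      carrier = np.cos(2.0 * np.pi * freq
                       * (x * np.cos(theta) + y * np.sin(theta)))
      return envelope * carrier

  def gabor_noise(size=224, n_impulses=200, eps=16 / 255.0, seed=0,
                  **kernel_args):
      """Sparse convolution: random +/-1 impulses convolved with one kernel."""
      rng = np.random.default_rng(seed)
      impulses = np.zeros((size, size))
      ys = rng.integers(0, size, n_impulses)
      xs = rng.integers(0, size, n_impulses)
      impulses[ys, xs] = rng.choice([-1.0, 1.0], n_impulses)
      field = fftconvolve(impulses, gabor_kernel(**kernel_args), mode="same")
      return eps * field / (np.abs(field).max() + 1e-12)

The second sketch shows a Bayesian-optimization loop in the spirit of intro_bopt.ipynb, written here with scikit-optimize's gp_minimize (an assumption; the notebook may use a different library). query_model_error_rate is a hypothetical helper that evaluates the target model on a batch of perturbed images and returns its error rate.

  from skopt import gp_minimize        # assumed dependency: scikit-optimize
  from skopt.space import Real

  space = [Real(1 / 16.0, 1 / 2.0, name="freq"),  # kernel frequency
           Real(0.0, np.pi, name="theta"),        # kernel orientation
           Real(2.0, 8.0, name="sigma")]          # envelope width

  def objective(params):
      freq, theta, sigma = params
      delta = gabor_noise(freq=freq, theta=theta, sigma=sigma)
      # query_model_error_rate is hypothetical: a higher error rate means a
      # stronger UAP, so we minimize its negative.
      return -query_model_error_rate(delta)

  result = gp_minimize(objective, space, n_calls=30, random_state=0)
  print("best noise parameters:", result.x)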

See our paper for more details: "Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Convolutional Networks." Kenneth T. Co, Luis Muñoz-González, Emil C. Lupu. CCS 2019.

Acknowledgments

Learn more about the Resilient Information Systems Security (RISS) group at Imperial College London. Kenneth Co is partially supported by DataSpartan.

If you find this project useful in your research, please consider citing:

@inproceedings{co2019procedural,
 author = {Co, Kenneth T. and Mu\~{n}oz-Gonz\'{a}lez, Luis and de Maupeou, Sixte and Lupu, Emil C.},
 title = {Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Convolutional Networks},
 booktitle = {Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security},
 series = {CCS '19},
 year = {2019},
 isbn = {978-1-4503-6747-9},
 location = {London, United Kingdom},
 pages = {275--289},
 numpages = {15},
 url = {http://doi.acm.org/10.1145/3319535.3345660},
 doi = {10.1145/3319535.3345660},
 acmid = {3345660},
 publisher = {ACM},
 address = {New York, NY, USA},
 keywords = {adversarial machine learning, bayesian optimization, black-box attacks, deep neural networks, procedural noise, universal adversarial perturbations},
}

This project is licensed under the MIT License, see the LICENSE.md file for details.
