advboxes / perceptron-benchmark

License: Apache-2.0
Robustness benchmark for DNN models.

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to perceptron-benchmark

fahbench
Folding@home GPU benchmark
Stars: ✭ 32 (-47.54%)
Mutual labels:  benchmarking
synthesizing-robust-adversarial-examples
My entry for ICLR 2018 Reproducibility Challenge for paper Synthesizing robust adversarial examples https://openreview.net/pdf?id=BJDH5M-AW
Stars: ✭ 60 (-1.64%)
Mutual labels:  adversarial-machine-learning
plf nanotimer
A simple C++ 03/11/etc timer class for ~microsecond-precision cross-platform benchmarking. The implementation is as limited and as simple as possible to create the lowest amount of overhead.
Stars: ✭ 108 (+77.05%)
Mutual labels:  benchmarking
esperf
ElasticSearch Performance Testing tool
Stars: ✭ 50 (-18.03%)
Mutual labels:  benchmarking
nitroml
NitroML is a modular, portable, and scalable model-quality benchmarking framework for Machine Learning and Automated Machine Learning (AutoML) pipelines.
Stars: ✭ 40 (-34.43%)
Mutual labels:  benchmarking
lein-jmh
Leiningen plugin for jmh-clojure
Stars: ✭ 17 (-72.13%)
Mutual labels:  benchmarking
Pyperform
An easy and convenient way to performance-test Python code.
Stars: ✭ 221 (+262.3%)
Mutual labels:  benchmarking
php-bench
⏰ Tools for benchmarking PHP algorithms.
Stars: ✭ 32 (-47.54%)
Mutual labels:  benchmarking
dyngen
Simulating single-cell data using gene regulatory networks 📠
Stars: ✭ 59 (-3.28%)
Mutual labels:  benchmarking
ILAMB
Python software used in the International Land Model Benchmarking (ILAMB) project
Stars: ✭ 28 (-54.1%)
Mutual labels:  benchmarking
EAD Attack
EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples
Stars: ✭ 34 (-44.26%)
Mutual labels:  adversarial-machine-learning
cb-tumblebug
Cloud-Barista Multi-Cloud Infra Management Framework
Stars: ✭ 33 (-45.9%)
Mutual labels:  benchmarking
chest xray 14
Benchmarks on NIH Chest X-ray 14 dataset
Stars: ✭ 67 (+9.84%)
Mutual labels:  benchmarking
go-recipes
🦩 Tools for Go projects
Stars: ✭ 2,490 (+3981.97%)
Mutual labels:  benchmarking
benchmark-thrift
An open source application designed to load test Thrift applications
Stars: ✭ 41 (-32.79%)
Mutual labels:  benchmarking
Bombardier
Fast cross-platform HTTP benchmarking tool written in Go
Stars: ✭ 2,952 (+4739.34%)
Mutual labels:  benchmarking
benchmark VAE
Unifying Variational Autoencoder (VAE) implementations in Pytorch (NeurIPS 2022)
Stars: ✭ 1,211 (+1885.25%)
Mutual labels:  benchmarking
betsy
betsy (BPEL/BPMN Engine Test System) - A BPEL/BPMN Conformance Test Suite and Tool
Stars: ✭ 20 (-67.21%)
Mutual labels:  benchmarking
bench-show
Show, plot and compare benchmark results
Stars: ✭ 14 (-77.05%)
Mutual labels:  benchmarking
ldbc snb docs
Specification of the LDBC Social Network Benchmark suite
Stars: ✭ 39 (-36.07%)
Mutual labels:  benchmarking

Perceptron Robustness Benchmark

Perceptron is a robustness benchmark for computer vision DNN models. It supports both image classification and object detection models built on PyTorch, TensorFlow, Keras, and PaddlePaddle (in progress), as well as cloud APIs. Perceptron inherits its design from foolbox and is designed to be agnostic to the deep learning framework a model is built on.

Documentation is available on Read the Docs.

Currently, you can use Perceptron either through its Python API or its command-line tool.

Getting Started

Installation

The PyTorch and TensorFlow packages are required only if you want to test models built on those frameworks; we ask users to install them manually on demand. Once that is done, run the following command to install Perceptron Benchmark:

pip install -e .
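After installation, you can sanity-check the install with a quick import (a minimal sketch; it assumes the package installs under the top-level module name perceptron, matching the import paths used in the examples below):

# the package should be importable after `pip install -e .`
import perceptron
print(perceptron.__file__)  # location of the locally installed package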

Running Examples via Command Lines

In the Docker shell, run a test through the Perceptron command-line interface:

python perceptron/launcher.py \
    --framework keras \
    --model resnet50 \
    --metric carlini_wagner_l2 \
    --image example.png

In this example, the user specifies the framework as keras, the model as resnet50, the metric as carlini_wagner_l2, and the input image as example.png.

To visualize the adversary, we also plot the original image, the adversarial image, and their difference.

You can try different combinations of frameworks, models, criteria, and metrics (an illustrative example follows the help command below). To see all options, use -h for the help message:

python perceptron/launcher.py -h
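For example, a run against a PyTorch model might look like the following. The model and metric names here are illustrative assumptions; consult the -h output for the exact values your installation supports:

python perceptron/launcher.py \
    --framework pytorch \
    --model resnet18 \
    --metric additive_gaussian_noise \
    --image example.png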

Docker Quick Start

Build the Docker image; all dependencies will be installed automatically:

nvidia-docker build -t perceptron:env .
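Once the image is built, you can start an interactive shell inside the container (a typical nvidia-docker invocation; adjust the flags to your environment):

nvidia-docker run -it perceptron:env /bin/bash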

Keras: ResNet50 - C&W2 Benchmarking

The following example serves the same purpose as the command-line example above. It benchmarks the robustness of the Keras ResNet50 model against the C&W2 metric by measuring the minimal L2 perturbation required for a C&W2 attack to succeed. The minimum mean squared error (MSE) between the original and adversarial images will be logged.

import numpy as np
import keras.applications as models
from perceptron.models.classification.keras import KerasModel
from perceptron.utils.image import imagenet_example
from perceptron.benchmarks.carlini_wagner import CarliniWagnerL2Metric
from perceptron.utils.criteria.classification import Misclassification

# instantiate the model from keras.applications
resnet50 = models.ResNet50(weights='imagenet')

# initialize the KerasModel wrapper
# keras resnet50 expects inputs bounded in (0, 255)
# preprocessing = (mean, std): inputs are normalized as (x - mean) / std
preprocessing = (np.array([104, 116, 123]), 1)
kmodel = KerasModel(resnet50, bounds=(0, 255), preprocessing=preprocessing)

# get a source image and its label
# the model expects values in [0, 255] with channels_last format
image, _ = imagenet_example(data_format='channels_last')
label = np.argmax(kmodel.predictions(image))

# benchmark with the C&W L2 metric, using misclassification as the criterion
metric = CarliniWagnerL2Metric(kmodel, criterion=Misclassification())

# run the benchmark; unpack=False keeps the result as an adversarial object
# rather than a bare image array
adversary = metric(image, label, unpack=False)

Running the example will give you the minimal MSE required for the C&W2 attack to fool the ResNet50 model (i.e., to change the predicted label).
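If you want to inspect the result yourself, the MSE can be recomputed directly from the two images (a minimal sketch; the image attribute on the returned adversarial object is an assumption based on foolbox-style adversarial objects, from which Perceptron inherits its design):

# hypothetical follow-up: recompute the per-pixel mean squared error
# `adversary.image` is an assumed attribute holding the perturbed image
adv_image = adversary.image
mse = np.mean((adv_image - image) ** 2)
print('minimal MSE to fool the model: %.6f' % mse)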

Acknowledgements

Perceptron Robustness Benchmark would not have been possible without the foolbox project authored by @Jonas and @Wieland. Thanks for the code and inspiration!

Contributing

You are welcome to send pull requests and report issues on GitHub or iCode. Note that the Perceptron Benchmark project follows the Git flow development model.

Authors

Steering Committee

License

Perceptron Robustness Benchmark is provided under the Apache-2.0 license. For a copy, see the LICENSE file.

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].