openopt / chop

Licence: other
CHOP: An optimization library based on PyTorch, with applications to adversarial examples and structured neural network training.

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to chop

nn robustness analysis
Python tools for analyzing the robustness properties of neural networks (NNs) from MIT ACL
Stars: ✭ 36 (-47.06%)
Mutual labels:  adversarial-attacks
grb
Graph Robustness Benchmark: A scalable, unified, modular, and reproducible benchmark for evaluating the adversarial robustness of Graph Machine Learning.
Stars: ✭ 70 (+2.94%)
Mutual labels:  adversarial-attacks
POPQORN
An Algorithm to Quantify Robustness of Recurrent Neural Networks
Stars: ✭ 44 (-35.29%)
Mutual labels:  adversarial-attacks
domain-shift-robustness
Code for the paper "Addressing Model Vulnerability to Distributional Shifts over Image Transformation Sets", ICCV 2019
Stars: ✭ 22 (-67.65%)
Mutual labels:  adversarial-attacks
Foolbox
A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
Stars: ✭ 2,108 (+3000%)
Mutual labels:  adversarial-attacks
Pro-GNN
Implementation of the KDD 2020 paper "Graph Structure Learning for Robust Graph Neural Networks"
Stars: ✭ 202 (+197.06%)
Mutual labels:  adversarial-attacks
robust-ood-detection
Robust Out-of-distribution Detection in Neural Networks
Stars: ✭ 55 (-19.12%)
Mutual labels:  adversarial-attacks
KitanaQA
KitanaQA: Adversarial training and data augmentation for neural question-answering models
Stars: ✭ 58 (-14.71%)
Mutual labels:  adversarial-attacks
Adversarial Robustness Toolbox
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
Stars: ✭ 2,638 (+3779.41%)
Mutual labels:  adversarial-attacks
geometric adv
Geometric Adversarial Attacks and Defenses on 3D Point Clouds (3DV 2021)
Stars: ✭ 20 (-70.59%)
Mutual labels:  adversarial-attacks
square-attack
Square Attack: a query-efficient black-box adversarial attack via random search [ECCV 2020]
Stars: ✭ 89 (+30.88%)
Mutual labels:  adversarial-attacks
Nlpaug
Data augmentation for NLP
Stars: ✭ 2,761 (+3960.29%)
Mutual labels:  adversarial-attacks
SimP-GCN
Implementation of the WSDM 2021 paper "Node Similarity Preserving Graph Convolutional Networks"
Stars: ✭ 43 (-36.76%)
Mutual labels:  adversarial-attacks
Adversarial-Examples-Paper
Paper list of Adversarial Examples
Stars: ✭ 20 (-70.59%)
Mutual labels:  adversarial-attacks
AdvPC
AdvPC: Transferable Adversarial Perturbations on 3D Point Clouds (ECCV 2020)
Stars: ✭ 35 (-48.53%)
Mutual labels:  adversarial-attacks
advrank
Adversarial Ranking Attack and Defense, ECCV, 2020.
Stars: ✭ 19 (-72.06%)
Mutual labels:  adversarial-attacks
MCS2018 Solution
No description or website provided.
Stars: ✭ 16 (-76.47%)
Mutual labels:  adversarial-attacks
hard-label-attack
Natural Language Attacks in a Hard Label Black Box Setting.
Stars: ✭ 26 (-61.76%)
Mutual labels:  adversarial-attacks
flowattack
Attacking Optical Flow (ICCV 2019)
Stars: ✭ 58 (-14.71%)
Mutual labels:  adversarial-attacks
generative adversary
Code for the unrestricted adversarial examples paper (NeurIPS 2018)
Stars: ✭ 58 (-14.71%)
Mutual labels:  adversarial-attacks

pytorCH OPtimize (CHOP): a library for continuous and constrained optimization built on PyTorch

...with applications to adversarially attacking and training neural networks.


⚠️ This library is in early development; the API may change without notice. The examples will be kept up to date. ⚠️

Stochastic Algorithms

We define stochastic optimizers in the chop.stochastic module. They follow the conventions of PyTorch's torch.optim optimizers and can be used to (a minimal sketch follows the list):

  • train structured models;
  • compute universal adversarial perturbations over a dataset.
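A minimal training sketch, assuming a projected-gradient optimizer and an L-infinity constraint: the names chop.stochastic.PGD and chop.constraints.LinfBall, and their signatures, are assumptions about the API rather than confirmed usage, so check the examples directory for the exact names.

import torch
from torch import nn
import chop

# Toy data and model; any PyTorch module would work here.
X = torch.randn(128, 10)
y = torch.randint(0, 2, (128,))
model = nn.Linear(10, 2)
criterion = nn.CrossEntropyLoss()

# Assumed API: keep the weights inside an L-infinity ball of radius 1 and
# take projected stochastic gradient steps. Exact class names may differ.
constraint = chop.constraints.LinfBall(alpha=1.)
optimizer = chop.stochastic.PGD(model.parameters(), constraint, lr=0.1)

for _ in range(10):
    optimizer.zero_grad()            # standard torch.optim-style loop
    loss = criterion(model(X), y)
    loss.backward()
    optimizer.step()                 # gradient step + constraint enforcement (assumed)

The loop itself is plain PyTorch; only the optimizer construction differs from an unconstrained torch.optim.SGD setup.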

Full Gradient Algorithms

We also define full-gradient algorithms, which operate on a batch of optimization problems, in the chop.optim module. These are used for adversarial attacks through the chop.Adversary wrapper, as illustrated below.
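For illustration, an attack on a batch of inputs might be set up as in the following sketch. The solver name (chop.optim.minimize_pgd), the Adversary.perturb signature, and the constraint constructor are assumptions and may not match the released API; the examples directory contains working attack scripts.

import torch
from torch import nn
import chop

# A stand-in classifier and a batch of inputs/labels.
model = nn.Linear(784, 10)
criterion = nn.CrossEntropyLoss()
data = torch.randn(32, 784)
target = torch.randint(0, 10, (32,))

# Assumed API: wrap a full-gradient solver from chop.optim in chop.Adversary,
# then search for perturbations inside an L-infinity ball of radius 8/255.
adversary = chop.Adversary(chop.optim.minimize_pgd)
constraint = chop.constraints.LinfBall(alpha=8. / 255)
_, delta = adversary.perturb(data, target, model, criterion,
                             constraint=constraint, max_iter=20)

adv_logits = model(data + delta)  # predictions on the perturbed batch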

Installing

Run the following:

pip install chop-pytorch

or

pip install git+https://github.com/openopt/chop.git

for the latest development version.

Welcome to chop!

Examples:

See the examples directory and our webpage.

Tests

Run the tests with pytest tests.

Citing

If this software is useful to your research, please consider citing it as

@article{chop,
  author       = {Geoffrey Negiar and Fabian Pedregosa},
  title        = {CHOP: continuous optimization built on Pytorch},
  year         = 2020,
  url          = {https://github.com/openopt/chop}
}

Affiliations

Geoffrey Négiar is in the Mahoney lab and the El Ghaoui lab at UC Berkeley.

Fabian Pedregosa is at Google Research.
