
jfc43 / robust-ood-detection

License: Apache-2.0



Robust Out-of-distribution Detection in Neural Networks

This project contains the code for the paper Robust Out-of-distribution Detection in Neural Networks. Some of the code is adapted from ODIN, Outlier Exposure, and the deep Mahalanobis detector.

Preliminaries

The code has been tested under Ubuntu Linux 16.04.1 with Python 3.6 and requires some additional Python packages to be installed.

Downloading in-distribution Datasets

  • CIFAR: included in PyTorch.
  • GTSRB: we provide scripts to download it.

Downloading out-of-distribution Datasets

Overview of the Code

Running Experiments

  • For the SVHN dataset, run select_svhn_data.py to select the test data.
  • For the GTSRB dataset, run prepare_data.sh to download and prepare the dataset.
  • robust_ood_train.py: the script for training the different model variants.
  • eval.py: the script for evaluating the classification accuracy and robustness of models.
  • eval_ood_detection.py: the script for evaluating the OOD detection performance of models.
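
The training script's behavior is controlled by three flags (--adv, --adv-only-in, --ood), which the example commands combine into five model variants. A minimal argparse sketch of this interface (a hypothetical reconstruction for illustration, not the actual robust_ood_train.py):

```python
import argparse

# Hypothetical reconstruction of robust_ood_train.py's flag interface.
# Variant -> flags:  ALOE: --adv --ood | AOE: --adv --adv-only-in --ood
#                    ADV: --adv | OE: --ood | Original: (none)
parser = argparse.ArgumentParser(description="Train a robust OOD model variant")
parser.add_argument("--name", required=True, help="experiment name (ALOE, AOE, ...)")
parser.add_argument("--adv", action="store_true", help="adversarial training")
parser.add_argument("--adv-only-in", action="store_true",
                    help="perturb only in-distribution inputs")
parser.add_argument("--ood", action="store_true",
                    help="outlier exposure with auxiliary OOD data")

args = parser.parse_args(["--name", "ALOE", "--adv", "--ood"])
print(args.name, args.adv, args.adv_only_in, args.ood)  # prints: ALOE True False True
```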

Example

For the CIFAR-10 experiments, run the following commands in the CIFAR directory to reproduce the results.

  • train an ALOE model:

python robust_ood_train.py --name ALOE --adv --ood

  • train an AOE model:

python robust_ood_train.py --name AOE --adv --adv-only-in --ood

  • train an ADV model:

python robust_ood_train.py --name ADV --adv

  • train an OE model:

python robust_ood_train.py --name OE --ood

  • train an Original model:

python robust_ood_train.py --name Original

  • Evaluate the classification performance of the ALOE model:

python eval.py --name ALOE --adv

  • Evaluate the traditional OOD detection performance of MSP and ODIN using the ALOE model:

python eval_ood_detection.py --name ALOE --method msp_and_odin

  • Evaluate the robust OOD detection performance of MSP and ODIN using the ALOE model:

python eval_ood_detection.py --name ALOE --method msp_and_odin --adv
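
The msp_and_odin method scores each input with the maximum softmax probability (MSP) and with ODIN, which rescales that score via temperature scaling (the full ODIN method also adds an input-preprocessing perturbation). A toy numerical sketch of the two scores on hypothetical logits, not the repo's implementation:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over a list of logits."""
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def msp_score(logits):
    """MSP: maximum softmax probability (higher = more in-distribution)."""
    return max(softmax(logits))

def odin_score(logits, T=1000.0):
    """ODIN (temperature-scaling part only): max softmax at a large temperature.
    The full method also perturbs the input toward higher confidence."""
    return max(softmax(logits, T=T))

logits = [4.0, 1.0, 0.5]  # hypothetical classifier outputs for one input
print(round(msp_score(logits), 3))
print(round(odin_score(logits), 3))
```

In practice a threshold on the score (calibrated on held-out in-distribution data) separates in-distribution from OOD inputs.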

  • Evaluate the traditional OOD detection performance of Mahalanobis using the Original model:

python eval_ood_detection.py --name Original --method mahalanobis

  • Evaluate the robust OOD detection performance of Mahalanobis using the Original model:

python eval_ood_detection.py --name Original --method mahalanobis --adv
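
The mahalanobis method instead scores an input by its Mahalanobis distance to the closest class-conditional Gaussian fitted in the network's feature space. A toy sketch on hand-picked 2-D "features" with a shared diagonal covariance (illustrative only; the repo estimates these statistics from data):

```python
# Toy Mahalanobis OOD score: class-conditional Gaussians with a shared
# diagonal covariance, all values hand-picked for illustration.
def mahalanobis_sq(x, mean, inv_var):
    """Squared Mahalanobis distance under a diagonal covariance."""
    return sum(iv * (a - m) ** 2 for a, m, iv in zip(x, mean, inv_var))

def mahalanobis_score(x, class_means, inv_var):
    """Negative distance to the closest class mean (higher = more in-distribution)."""
    return -min(mahalanobis_sq(x, m, inv_var) for m in class_means)

class_means = [[0.0, 0.0], [4.0, 4.0]]  # toy per-class feature means
inv_var = [1.0, 1.0]                    # inverse of the shared diagonal covariance
in_dist = mahalanobis_score([0.2, -0.1], class_means, inv_var)
ood = mahalanobis_score([10.0, -8.0], class_means, inv_var)
print(in_dist > ood)  # True
```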

Citation

Please cite our work if you use this codebase:

@article{chen2020robust,
  title={Robust Out-of-distribution Detection in Neural Networks},
  author={Chen, Jiefeng and Wu, Xi and Liang, Yingyu and Jha, Somesh and others},
  journal={arXiv preprint arXiv:2003.09711},
  year={2020}
}

License

Please refer to the LICENSE file.
