On Evaluating Adversarial Robustness

This repository contains the LaTeX source for the paper On Evaluating Adversarial Robustness. The paper is intended to help everyone learn more about methods for evaluating adversarial robustness: those designing their own neural networks, those reviewing defense papers, and those simply wondering what goes into a defense evaluation.

This is a Living Document

We do not intend for this to be a traditional paper that is written once and never updated. While the fundamentals of how to evaluate adversarial robustness will not change, much of the specific advice we give today may quickly become out of date. We therefore expect to update this document from time to time to keep it in line with the best practices currently accepted in the research community.

Abstract

Correctly evaluating defenses against adversarial examples has proven to be extremely difficult. Despite the significant amount of recent work attempting to design defenses that withstand adaptive attacks, few such defenses have succeeded; most papers that propose defenses are quickly shown to be incorrect.

We believe a large contributing factor is the difficulty of performing security evaluations. In this paper, we discuss the methodological foundations, review commonly accepted best practices, and suggest new methods for evaluating defenses against adversarial examples. We hope that both researchers developing defenses and readers or reviewers who wish to understand the completeness of an evaluation will consider our advice in order to avoid common pitfalls.
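To make concrete what a baseline robustness evaluation involves, the sketch below runs a projected gradient descent (PGD) attack, after which accuracy on the resulting adversarial examples would be measured. This is a minimal illustration in PyTorch, not code from the paper; the function name, the hyperparameters (eps, alpha, steps), and the assumption that inputs lie in [0, 1] are ours.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.03, alpha=0.01, steps=40):
    # Minimal L-infinity PGD sketch; all hyperparameters are illustrative.
    # model: classifier returning logits; x: inputs in [0, 1]; y: true labels.
    # Random start inside the eps-ball around x.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                    # ascend the loss
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project into the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()

An evaluation would then report accuracy on pgd_attack(model, x, y). A central point of the paper is that a fixed attack like this is only a starting point; a meaningful evaluation adapts the attack to the specific defense being tested.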

Contributing

We welcome contributions to the paper through both issues and pull requests. Please prefer issues for topics that warrant initial discussion (such as suggesting a new item for the checklist) and pull requests for changes that require less discussion (such as fixing typos, or writing content for a topic previously discussed in an issue).

Contributors

  • Nicholas Carlini (Google Brain)
  • Anish Athalye (MIT)
  • Nicolas Papernot (Google Brain)
  • Wieland Brendel (University of Tübingen)
  • Jonas Rauber (University of Tübingen)
  • Dimitris Tsipras (MIT)
  • Ian Goodfellow (Google Brain)
  • Aleksander Madry (MIT)
  • Alexey Kurakin (Google Brain)

NOTE: Contributors are ordered according to the amount they contributed to the text of the paper, similar to the CleverHans tech report. The list of contributors may be expanded, and the order may change, in future revisions of the paper.

Changelog

2019-02-20: Explain author order (#5)

2019-02-18: Initial Revision

Citation

If you use this paper in academic research, you may cite the following:

@article{carlini2019evaluating,
  title={On Evaluating Adversarial Robustness},
  author={Carlini, Nicholas and Athalye, Anish and Papernot, Nicolas and Brendel, Wieland and Rauber, Jonas and Tsipras, Dimitris and Goodfellow, Ian and Madry, Aleksander and Kurakin, Alexey},
  journal={arXiv preprint arXiv:1902.06705},
  year={2019}
}