
bethgelab / Robust Detection Benchmark

License: MIT
Code, data and benchmark from the paper "Benchmarking Robustness in Object Detection: Autonomous Driving when Winter is Coming" (NeurIPS 2019 ML4AD)

Projects that are alternatives of or similar to Robust Detection Benchmark

Edgenets
This repository contains the source code of our work on designing efficient CNNs for computer vision
Stars: ✭ 331 (+158.59%)
Mutual labels:  object-detection, cityscapes, pascal-voc
Lacmus
Lacmus is a cross-platform application that helps to find people who are lost in the forest using computer vision and neural networks.
Stars: ✭ 142 (+10.94%)
Mutual labels:  object-detection, jupyter-notebook, pascal-voc
Voc2coco
How to create a custom COCO dataset for object detection
Stars: ✭ 140 (+9.38%)
Mutual labels:  object-detection, jupyter-notebook, pascal-voc
Image bbox slicer
This easy-to-use library splits images and their bounding box annotations into tiles, either of specific sizes or into an arbitrary number of equal parts. It can also resize them, either to specific sizes or by a scaling factor.
Stars: ✭ 41 (-67.97%)
Mutual labels:  object-detection, jupyter-notebook, pascal-voc
Fcos tensorflow
FCOS: Fully Convolutional One-Stage Object Detection.
Stars: ✭ 87 (-32.03%)
Mutual labels:  object-detection, jupyter-notebook
Chainer Pspnet
PSPNet in Chainer
Stars: ✭ 76 (-40.62%)
Mutual labels:  cityscapes, pascal-voc
Text Detection Using Yolo Algorithm In Keras Tensorflow
Implemented the YOLO algorithm for scene text detection in keras-tensorflow (no object detection API used). The code can be tweaked to train for a different object detection task using YOLO.
Stars: ✭ 87 (-32.03%)
Mutual labels:  object-detection, jupyter-notebook
Review object detection metrics
Review on Object Detection Metrics: 14 object detection metrics including COCO's and PASCAL's metrics. Supporting different bounding box formats.
Stars: ✭ 100 (-21.87%)
Mutual labels:  object-detection, pascal-voc
Ssd keras
Port of Single Shot MultiBox Detector to Keras
Stars: ✭ 1,101 (+760.16%)
Mutual labels:  object-detection, jupyter-notebook
Driving In The Matrix
Steps to reproduce training results for the paper Driving in the Matrix: Can Virtual Worlds Replace Human-Generated Annotations for Real World Tasks?
Stars: ✭ 96 (-25%)
Mutual labels:  object-detection, autonomous-driving
Airbnb Amenity Detection
Repo for a 42-day project to replicate/improve Airbnb's amenity (object) detection pipeline.
Stars: ✭ 101 (-21.09%)
Mutual labels:  object-detection, jupyter-notebook
Tracktor
Python and OpenCV based object tracking software
Stars: ✭ 76 (-40.62%)
Mutual labels:  object-detection, jupyter-notebook
Autonomous driving
ROS package for basic autonomous lane tracking and object detection
Stars: ✭ 67 (-47.66%)
Mutual labels:  object-detection, autonomous-driving
Novel Deep Learning Model For Traffic Sign Detection Using Capsule Networks
Capsule networks that achieve outstanding performance on the German Traffic Sign dataset
Stars: ✭ 88 (-31.25%)
Mutual labels:  autonomous-driving, jupyter-notebook
Fish detection
Fish detection using Open Images Dataset and Tensorflow Object Detection
Stars: ✭ 67 (-47.66%)
Mutual labels:  object-detection, jupyter-notebook
Soccer Ball Detection Yolov2
YOLOv2 trained on a custom dataset
Stars: ✭ 97 (-24.22%)
Mutual labels:  object-detection, jupyter-notebook
Kerasobjectdetector
Keras Object Detection API with YOLK project 🍳
Stars: ✭ 113 (-11.72%)
Mutual labels:  object-detection, jupyter-notebook
Tensorflow2.0 Examples
🙄 Difficult algorithm, Simple code.
Stars: ✭ 1,397 (+991.41%)
Mutual labels:  object-detection, jupyter-notebook
Colab Mask Rcnn
How to run Object Detection and Segmentation on a Video Fast for Free
Stars: ✭ 114 (-10.94%)
Mutual labels:  object-detection, jupyter-notebook
Objectdetection
Some experiments with object detection in PyTorch
Stars: ✭ 117 (-8.59%)
Mutual labels:  object-detection, jupyter-notebook

Robust Detection Benchmark

This repository contains code, data and a benchmark leaderboard from the paper "Benchmarking Robustness in Object Detection: Autonomous Driving when Winter is Coming" by Claudio Michaelis*, Benjamin Mitzkus*, Robert Geirhos*, Evgenia Rusak*, Oliver Bringmann, Alexander S. Ecker, Matthias Bethge & Wieland Brendel.

The core idea is shown below: real-world applications need to cope with adverse outdoor hazards such as fog, frost and snow (and the occasional dragonfire). The paper benchmarks how robust object detection models remain across a broad range of such image corruptions.

[Figure: traffic hazards]

Structure & Overview

This repository serves two purposes:

  1. Enabling reproducibility. All result figures in figures/ can be regenerated by executing the analysis notebook in data-analysis/, which uses the data from raw-data/ (see the command sketch after this list).

  2. Hosting the Robust Detection Benchmark (more information below).
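As a concrete starting point for point 1, the sketch below executes the analysis notebook headlessly with nbconvert. The notebook filename is a placeholder, so substitute the actual notebook found in data-analysis/:

    # Hedged sketch: regenerate the result figures by running the analysis notebook
    # headlessly. The notebook path below is a placeholder; use the one in data-analysis/.
    import subprocess

    subprocess.run(
        [
            "jupyter", "nbconvert",
            "--to", "notebook",
            "--execute",
            "data-analysis/analysis.ipynb",  # placeholder path
        ],
        check=True,
    )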

Additionally, we provide three separate modules with functionality that we use in the paper and that we hope will also be useful for your own research or applications:

Stylize arbitrary datasets: https://github.com/bethgelab/stylize-datasets

Corrupt arbitrary datasets: https://github.com/bethgelab/imagecorruptions

Object detection: https://github.com/bethgelab/mmdetection
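To give a sense of how the corruption module is used, here is a minimal sketch that applies a single corruption to an image. It assumes the imagecorruptions package from the repository above is installed (pip install imagecorruptions); the names follow the package's public interface, but check the linked repository for the current API.

    # Minimal sketch: apply one of the benchmark corruptions to a single image.
    # Assumes `pip install imagecorruptions`; see the linked repository for the current API.
    import numpy as np
    from imagecorruptions import corrupt, get_corruption_names

    # Stand-in for a real RGB image (H x W x 3, uint8).
    image = np.random.randint(0, 255, size=(224, 224, 3), dtype=np.uint8)

    print(get_corruption_names())  # 'gaussian_noise', 'fog', 'frost', 'snow', ...

    # Apply fog at severity 3 (severities range from 1 to 5).
    corrupted = corrupt(image, corruption_name='fog', severity=3)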

Robust Detection Benchmark

This section shows the most important results on our three benchmark datasets: COCO-C, Pascal-C and Cityscapes-C. All models use a fixed ResNet-50 backbone to keep the focus on improvements in detection robustness. For more results, including different backbones and instance segmentation, please have a look at the comprehensive results table.

Results are ranked by their mean performance under corruption (mPC in the paper). If you achieve state-of-the-art robustness on any of the three datasets with your approach, please open a pull request adding your results to the table below. We strongly encourage using the backbone listed in the table, since otherwise robustness gains cannot be disentangled from improvements in overall performance. In your pull request, please report the three metrics P, rPC and mPC (as defined in the paper); mPC is then used to rank your results.
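For reference, the relationship between the three metrics can be summarised in a few lines. The sketch below follows the definitions in the paper (mPC averages performance over all corruption types and severity levels, rPC is the ratio of mPC to the clean performance P) and is meant as an illustration, not as the official evaluation code.

    # Illustration of the benchmark metrics, following the paper's definitions:
    # P   = clean performance,
    # mPC = mean performance over all corruptions and severities,
    # rPC = 100 * mPC / P (relative performance under corruption, in %).
    # Not the official evaluation code.
    import numpy as np

    def benchmark_metrics(clean_p, corrupted_p):
        """corrupted_p has shape (num_corruptions, num_severities), e.g. (15, 5)."""
        corrupted_p = np.asarray(corrupted_p, dtype=float)
        mpc = corrupted_p.mean()
        rpc = 100.0 * mpc / clean_p
        return clean_p, mpc, rpc

    # Example with made-up numbers:
    P, mPC, rPC = benchmark_metrics(80.4, np.full((15, 5), 56.2))
    print(f"P={P:.1f}, mPC={mPC:.1f}, rPC={rPC:.1f}%")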

Evaluation details

Pascal VOC: results are evaluated on the Pascal VOC 2007 test set using the AP50 metric.
COCO: results are evaluated on the COCO 2017 val set using the AP metric.
Cityscapes: results are evaluated on the Cityscapes val set using the AP metric.

Leaderboard

Pascal-C

| Rank | Method | Reference | Model | Backbone | clean P [AP50] | corrupted mPC [AP50] | relative rPC [%] |
|------|--------|-----------|-------|----------|----------------|----------------------|------------------|
| 1 | stylizing training data | Michaelis et al. 2019 | Faster R-CNN | R-50-FPN | 80.4 | 56.2 | 69.9 |
| - | baseline | Michaelis et al. 2019 | Faster R-CNN | R-50-FPN | 80.5 | 48.6 | 60.4 |

COCO-C

| Rank | Method | Reference | Model | Backbone | clean P [AP] | corrupted mPC [AP] | relative rPC [%] |
|------|--------|-----------|-------|----------|--------------|--------------------|------------------|
| 1 | stylizing training data | Michaelis et al. 2019 | Faster R-CNN | R-50-FPN | 34.6 | 20.4 | 58.9 |
| - | baseline | Michaelis et al. 2019 | Faster R-CNN | R-50-FPN | 36.3 | 18.2 | 50.2 |

Cityscapes-C

| Rank | Method | Reference | Model | Backbone | clean P [AP] | corrupted mPC [AP] | relative rPC [%] |
|------|--------|-----------|-------|----------|--------------|--------------------|------------------|
| 1 | stylizing training data | Michaelis et al. 2019 | Faster R-CNN | R-50-FPN | 36.3 | 17.2 | 47.4 |
| - | baseline | Michaelis et al. 2019 | Faster R-CNN | R-50-FPN | 36.4 | 12.2 | 33.4 |

Citation

If you use our code or the benchmark, please consider citing:

@article{michaelis2019dragon,
  title={Benchmarking Robustness in Object Detection: 
    Autonomous Driving when Winter is Coming},
  author={Michaelis, Claudio and Mitzkus, Benjamin and 
    Geirhos, Robert and Rusak, Evgenia and 
    Bringmann, Oliver and Ecker, Alexander S. and 
    Bethge, Matthias and Brendel, Wieland},
  journal={arXiv preprint arXiv:1907.07484},
  year={2019}
}