
JosephKJ / Owod

Licence: apache-2.0
(CVPR 2021 Oral) Open World Object Detection

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to Owod

Cv paperdaily
CV paper notes
Stars: ✭ 555 (+102.55%)
Mutual labels:  object-detection, cvpr
Cvpr2021 Papers With Code
A collection of CVPR 2021 papers and open-source projects
Stars: ✭ 7,138 (+2505.11%)
Mutual labels:  object-detection, cvpr
Open3d Ml
An extension of Open3D to address 3D Machine Learning tasks
Stars: ✭ 284 (+3.65%)
Mutual labels:  object-detection
Finger Detection And Tracking
Finger Detection and Tracking using OpenCV and Python
Stars: ✭ 317 (+15.69%)
Mutual labels:  object-detection
Deep Sort Yolov4
People detection and optional tracking with Tensorflow backend.
Stars: ✭ 306 (+11.68%)
Mutual labels:  object-detection
Jeelizar
JavaScript object detection lightweight library for augmented reality (WebXR demos included). It uses convolutional neural networks running on the GPU with WebGL.
Stars: ✭ 296 (+8.03%)
Mutual labels:  object-detection
Pytorch Vdsr
VDSR (CVPR2016) pytorch implementation
Stars: ✭ 313 (+14.23%)
Mutual labels:  cvpr
Rfcn Tensorflow
RFCN implementation in TensorFlow
Stars: ✭ 294 (+7.3%)
Mutual labels:  object-detection
Prm
Weakly Supervised Instance Segmentation using Class Peak Response, in CVPR 2018 (Spotlight)
Stars: ✭ 322 (+17.52%)
Mutual labels:  cvpr
Alturos.yolo
C# Yolo Darknet Wrapper (real-time object detection)
Stars: ✭ 308 (+12.41%)
Mutual labels:  object-detection
Pytorch Yolo V1
an experiment for yolo-v1, including training and testing.
Stars: ✭ 314 (+14.6%)
Mutual labels:  object-detection
Keras Centernet
A Keras implementation of CenterNet with pre-trained model (unofficial)
Stars: ✭ 307 (+12.04%)
Mutual labels:  object-detection
So Net
SO-Net: Self-Organizing Network for Point Cloud Analysis, CVPR2018
Stars: ✭ 297 (+8.39%)
Mutual labels:  cvpr
Tide
A General Toolbox for Identifying Object Detection Errors
Stars: ✭ 309 (+12.77%)
Mutual labels:  object-detection
Yolov3v4 Modelcompression Multidatasettraining Multibackbone
YOLO ModelCompression MultidatasetTraining
Stars: ✭ 287 (+4.74%)
Mutual labels:  object-detection
Distilling Object Detectors
Implementations of CVPR 2019 paper Distilling Object Detectors with Fine-grained Feature Imitation
Stars: ✭ 317 (+15.69%)
Mutual labels:  object-detection
Fastmot
High-performance multiple object tracking based on YOLO, Deep SORT, and optical flow
Stars: ✭ 284 (+3.65%)
Mutual labels:  object-detection
Yolo V2 Pytorch
YOLO for object detection tasks
Stars: ✭ 302 (+10.22%)
Mutual labels:  object-detection
Vott
Visual Object Tagging Tool: An electron app for building end to end Object Detection Models from Images and Videos.
Stars: ✭ 3,684 (+1244.53%)
Mutual labels:  object-detection
Lightnet
🌓 Bringing pjreddie's DarkNet out of the shadows #yolo
Stars: ✭ 322 (+17.52%)
Mutual labels:  object-detection

Towards Open World Object Detection

Accepted to CVPR 2021 as an ORAL paper

arXiv: https://arxiv.org/abs/2103.02603

The figure shows how our newly formulated Open World Object Detection setting relates to existing settings.

Abstract

Humans have a natural instinct to identify unknown object instances in their environments. The intrinsic curiosity about these unknown instances aids in learning about them, when the corresponding knowledge is eventually available. This motivates us to propose a novel computer vision problem called: Open World Object Detection, where a model is tasked to:

  1. Identify objects that have not been introduced to it as 'unknown', without explicit supervision to do so, and
  2. Incrementally learn these identified unknown categories without forgetting previously learned classes, when the corresponding labels are progressively received.
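
As a purely illustrative sketch (the class and method names below are hypothetical, not the ORE API), the two requirements can be simulated with a toy detector that flags unseen categories and later absorbs their labels:

```python
class ToyOpenWorldDetector:
    """Toy simulation of the Open World protocol: any instance outside the
    currently known label set is reported as 'unknown'; newly supplied
    labels are absorbed without discarding earlier ones."""

    def __init__(self, known_classes):
        self.known = set(known_classes)

    def detect(self, instances):
        # Requirement 1: flag un-introduced categories as 'unknown'.
        return [c if c in self.known else "unknown" for c in instances]

    def learn(self, new_classes):
        # Requirement 2: incrementally add labels, keeping the old ones.
        self.known.update(new_classes)


detector = ToyOpenWorldDetector({"person"})
before = detector.detect(["person", "apple"])  # ['person', 'unknown']
detector.learn({"apple"})
after = detector.detect(["person", "apple"])   # ['person', 'apple']
```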

We formulate the problem, introduce a strong evaluation protocol and provide a novel solution, which we call ORE: Open World Object Detector, based on contrastive clustering and energy-based unknown identification. Our experimental evaluation and ablation studies analyse the efficacy of ORE in achieving Open World objectives. As an interesting by-product, we find that identifying and characterising unknown instances helps to reduce confusion in an incremental object detection setting, where we achieve state-of-the-art performance, with no extra methodological effort. We hope that our work will attract further research into this newly identified, yet crucial research direction.
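As an illustration only (not the paper's implementation), the class-separation idea behind contrastive clustering can be sketched as a margin loss that pulls a feature toward its own class prototype and pushes it away from the others; the function name and margin value here are hypothetical:

```python
import numpy as np


def contrastive_clustering_loss(feature, label, prototypes, margin=2.0):
    """Toy margin loss: attract `feature` to its class prototype and
    repel it from every other prototype up to `margin`. Illustrative only."""
    dists = np.linalg.norm(prototypes - feature, axis=1)
    attract = dists[label]
    repel = np.maximum(0.0, margin - np.delete(dists, label)).sum()
    return attract + repel


# A feature close to its own prototype incurs a small loss;
# one sitting near a foreign prototype incurs a large one.
prototypes = np.array([[0.0, 0.0], [10.0, 0.0]])
near_own = contrastive_clustering_loss(np.array([0.5, 0.0]), 0, prototypes)
near_other = contrastive_clustering_loss(np.array([9.5, 0.0]), 0, prototypes)
```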

A sample qualitative result

Sub-figure (a) shows the result produced by our method after learning a small set of classes that does not include classes like apple and orange. We are able to identify these and correctly label them as unknown. Later, when the model is eventually taught to detect apple and orange, these instances are labelled correctly, as seen in sub-figure (b), without forgetting how to detect person. One unidentified class instance still remains, and the model successfully detects it as unknown.
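
The unknown labelling above rests on energy-based scoring: in-distribution (known) instances tend to have lower free energy than unknowns. A minimal sketch of a Helmholtz free energy computed from classification logits (the temperature value is a hypothetical choice):

```python
import math


def free_energy(logits, temperature=1.0):
    """Helmholtz free energy of a logit vector:
    E = -T * log(sum_i exp(g_i / T)). Lower energy suggests a known class."""
    return -temperature * math.log(
        sum(math.exp(g / temperature) for g in logits)
    )


confident_known = free_energy([5.0, 0.1, 0.2])  # peaked logits -> lower energy
uncertain = free_energy([1.0, 1.0, 1.0])        # flat logits -> higher energy
```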

Installation

See INSTALL.md.

Quick Start

Some bookkeeping still needs to be done on the code, such as removing local paths. We will update these shortly.

Data split and trained models: Google Drive

All config files can be found in: configs/OWOD

Sample command on a 4 GPU machine:

python tools/train_net.py --num-gpus 4 --config-file <Change to the appropriate config file> SOLVER.IMS_PER_BATCH 4 SOLVER.BASE_LR 0.005

Acknowledgement

Our code base is built on top of the Detectron2 library.

Citation

If you use our work in your research, please cite us:

@inproceedings{joseph2021open,
  title={Towards Open World Object Detection},
  author={K J Joseph and Salman Khan and Fahad Shahbaz Khan and Vineeth N Balasubramanian},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2021)},
  eprint={2103.02603},
  archivePrefix={arXiv},
  year={2021}
}