
implus / Gfocal

License: Apache-2.0
Generalized Focal Loss: Learning Qualified and Distributed Bounding Boxes for Dense Object Detection, NeurIPS2020

Programming Languages

python

Projects that are alternatives to or similar to Gfocal

Kaggle Rsna
Deep Learning for Automatic Pneumonia Detection, RSNA challenge
Stars: ✭ 74 (-80.32%)
Mutual labels:  object-detection, classification, detection
Tensorflow object tracking video
Object tracking in TensorFlow (localization, detection, classification), developed to participate in the ImageNet VID competition
Stars: ✭ 491 (+30.59%)
Mutual labels:  object-detection, classification, detection
Rectlabel Support
RectLabel - An image annotation tool to label images for bounding box object detection and segmentation.
Stars: ✭ 338 (-10.11%)
Mutual labels:  object-detection, detection
volkscv
A Python toolbox for computer vision research and projects
Stars: ✭ 58 (-84.57%)
Mutual labels:  detection, classification
Franc
Natural language detection
Stars: ✭ 3,605 (+858.78%)
Mutual labels:  classification, detection
Real time object detection and tracking
YOLOv2 and MobileNet_SSD detection algorithms used along with KCF object tracker
Stars: ✭ 241 (-35.9%)
Mutual labels:  object-detection, detection
mmrazor
OpenMMLab Model Compression Toolbox and Benchmark.
Stars: ✭ 644 (+71.28%)
Mutual labels:  detection, classification
covid-mask-detector
Detect whether a person is wearing a mask or not
Stars: ✭ 102 (-72.87%)
Mutual labels:  detection, classification
Pine
🌲 Aimbot powered by real-time object detection with neural networks, GPU accelerated with Nvidia. Optimized for use with CS:GO.
Stars: ✭ 202 (-46.28%)
Mutual labels:  object-detection, detection
Bmw Tensorflow Inference Api Gpu
This is a repository for an object detection inference API using the Tensorflow framework.
Stars: ✭ 277 (-26.33%)
Mutual labels:  object-detection, inference
Mmdetection To Tensorrt
Convert MMDetection models to TensorRT, with support for fp16, int8, batched input, dynamic shapes, etc.
Stars: ✭ 262 (-30.32%)
Mutual labels:  object-detection, inference
Gfocalv2
Generalized Focal Loss V2: Learning Reliable Localization Quality Estimation for Dense Object Detection, CVPR2021
Stars: ✭ 270 (-28.19%)
Mutual labels:  object-detection, detection
Caffe2 Ios
Caffe2 on iOS Real-time Demo. Test with Your Own Model and Photos.
Stars: ✭ 221 (-41.22%)
Mutual labels:  object-detection, classification
Nncf
PyTorch*-based Neural Network Compression Framework for enhanced OpenVINO™ inference
Stars: ✭ 218 (-42.02%)
Mutual labels:  object-detection, classification
Foveabox
FoveaBox: Beyond Anchor-based Object Detector
Stars: ✭ 353 (-6.12%)
Mutual labels:  object-detection, detection
Com.unity.perception
Perception toolkit for sim2real training and validation
Stars: ✭ 208 (-44.68%)
Mutual labels:  object-detection, detection
mri-deep-learning-tools
Resources for MRI image processing and deep learning in 3D
Stars: ✭ 56 (-85.11%)
Mutual labels:  detection, classification
Vott
Visual Object Tagging Tool: An electron app for building end to end Object Detection Models from Images and Videos.
Stars: ✭ 3,684 (+879.79%)
Mutual labels:  object-detection, detection
Object Detection Api
Yolov3 Object Detection implemented as APIs, using TensorFlow and Flask
Stars: ✭ 177 (-52.93%)
Mutual labels:  object-detection, inference
Bmw Yolov4 Inference Api Cpu
This is a repository for a no-code object detection inference API using YOLOv4 and YOLOv3 with OpenCV.
Stars: ✭ 180 (-52.13%)
Mutual labels:  object-detection, inference

Generalized Focal Loss: Learning Qualified and Distributed Bounding Boxes for Dense Object Detection

See further discussion in the plain-language introduction to Generalized Focal Loss (Zhihu).

[2020.11] GFocal has been adopted in NanoDet, a super-efficient object detector for mobile devices, achieving the same accuracy while running 2x faster than YOLOv4-Tiny! More details are in the Zhihu article "Another choice beyond YOLO: NanoDet, an anchor-free object detection model running at 97 FPS on mobile, is now open source".

[2020.10] Good news! GFocal has been accepted at NeurIPS 2020, and GFocal V2 is on the way.

[2020.9] The DeepBlueAI team, winner (1st place) of the GigaVision challenge (object detection and tracking) at the ECCV 2020 workshop, adopted GFocal in their solution.

[2020.7] GFocal is officially included in MMDetection V2; many thanks to @ZwwWayne and @hellock for helping migrate the code.

Introduction

One-stage detectors basically formulate object detection as dense classification and localization (i.e., bounding box regression). The classification is usually optimized by Focal Loss, and the box location is commonly learned under a Dirac delta distribution. A recent trend for one-stage detectors is to introduce an individual prediction branch to estimate the quality of localization, where the predicted quality facilitates the classification to improve detection performance. This paper delves into the representations of the above three fundamental elements: quality estimation, classification, and localization. Two problems are discovered in existing practices: (1) the inconsistent usage of the quality estimation and classification between training and inference (i.e., separately trained but compositely used at test time), and (2) the inflexible Dirac delta distribution for localization when there is ambiguity and uncertainty, which is often the case in complex scenes. To address these problems, we design new representations for these elements. Specifically, we merge the quality estimation into the class prediction vector to form a joint representation of localization quality and classification, and use a vector to represent an arbitrary distribution of box locations. The improved representations eliminate the inconsistency risk and accurately depict the flexible distribution in real data, but contain continuous labels, which is beyond the scope of Focal Loss. We then propose Generalized Focal Loss (GFL), which generalizes Focal Loss from its discrete form to the continuous version for successful optimization. On COCO test-dev, GFL achieves 45.0% AP using a ResNet-101 backbone, surpassing the state-of-the-art SAPD (43.5%) and ATSS (43.6%) with higher or comparable inference speed, under the same backbone and training settings. Notably, our best model achieves a single-model single-scale AP of 48.2% at 10 FPS on a single 2080Ti GPU.
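
In the paper, GFL is instantiated as two concrete losses: Quality Focal Loss (QFL) for the joint classification-quality score, and Distribution Focal Loss (DFL) for the discretized distribution over box offsets. The following is a minimal, simplified PyTorch sketch of the two losses for clarity; it is not the repository's exact implementation, and the tensor shapes in the comments are assumptions.

import torch
import torch.nn.functional as F

def quality_focal_loss(pred_logits, target_score, beta=2.0):
    # pred_logits: raw classification logits; target_score: continuous label in [0, 1]
    # (e.g., the IoU between the predicted box and its ground-truth box for the
    # positive class, and 0 for all other classes).
    pred_sigmoid = pred_logits.sigmoid()
    # modulating factor |y - sigma|^beta generalizes the focal term to soft labels
    scale = (target_score - pred_sigmoid).abs().pow(beta)
    loss = F.binary_cross_entropy_with_logits(
        pred_logits, target_score, reduction='none') * scale
    return loss.sum()

def distribution_focal_loss(pred_dist, target, reg_max=16):
    # pred_dist: (N, reg_max + 1) logits over the discrete offsets 0..reg_max;
    # target: continuous regression target in [0, reg_max].
    left = target.long().clamp(max=reg_max - 1)   # nearest bin below the target
    right = left + 1                              # nearest bin above the target
    w_left = right.float() - target               # linear weight for the left bin
    w_right = target - left.float()               # linear weight for the right bin
    loss = (F.cross_entropy(pred_dist, left, reduction='none') * w_left +
            F.cross_entropy(pred_dist, right, reduction='none') * w_right)
    return loss.sum()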

For details, see GFocal; the speed-accuracy trade-off figure is provided in the original repository.

Installation

Please refer to INSTALL.md for installation and dataset preparation.

Get Started

Please see GETTING_STARTED.md for the basic usage of MMDetection.

Train

# Assume that you are under the root directory of this project,
# that you have activated your virtual environment if needed,
# and that the COCO dataset is available in 'data/coco/'.

./tools/dist_train.sh configs/gfl_r50_1x.py 8 --validate

Inference

./tools/dist_test.sh configs/gfl_r50_1x.py work_dirs/gfl_r50_1x/epoch_12.pth 8 --eval bbox
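
Single-image inference from Python should also be possible through the standard MMDetection high-level API. The following is a minimal sketch: the config and checkpoint paths reuse those from the command above, and 'demo.jpg' is a placeholder image path.

from mmdet.apis import init_detector, inference_detector

config_file = 'configs/gfl_r50_1x.py'
checkpoint_file = 'work_dirs/gfl_r50_1x/epoch_12.pth'

# build the detector from the config and the trained checkpoint
model = init_detector(config_file, checkpoint_file, device='cuda:0')

# result is a per-class list of (N, 5) arrays: x1, y1, x2, y2, score
result = inference_detector(model, 'demo.jpg')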

Speed Test (FPS)

CUDA_VISIBLE_DEVICES=0 python3 ./tools/benchmark.py configs/gfl_r50_1x.py work_dirs/gfl_r50_1x/epoch_12.pth

Models

For your convenience, we provide the following trained models. All models are trained with a mini-batch of 16 images on 8 GPUs.

Model                         | Multi-scale training | AP (minival) | AP (test-dev) | FPS  | Link
GFL_R_50_FPN_1x               | No                   | 40.2         | 40.3          | 19.4 | Google
GFL_R_50_FPN_2x               | Yes                  | 42.8         | 43.1          | 19.4 | Google
GFL_R_101_FPN_2x              | Yes                  | 44.9         | 45.0          | 14.6 | Google
GFL_dcnv2_R_101_FPN_2x        | Yes                  | 47.2         | 47.3          | 12.7 | Google
GFL_X_101_32x4d_FPN_2x        | Yes                  | 45.7         | 46.0          | 12.2 | Google
GFL_dcnv2_X_101_32x4d_FPN_2x  | Yes                  | 48.3         | 48.2          | 10.0 | Google

[1] 1x and 2x mean the model is trained for 90K and 180K iterations, respectively.
[2] All results are obtained with a single model and without any test-time data augmentation such as multi-scale testing or flipping.
[3] dcnv2 denotes deformable convolutional networks v2. Note that for ResNe(X)t based models, we apply deformable convolutions from stage c3 to c5 in backbones.
[4] Refer to the config files under configs/ for more details.
[5] FPS is tested with a single GeForce RTX 2080Ti GPU, using a batch size of 1.

Acknowledgement

Thanks to the MMDetection team for the wonderful open source project!

Citation

If you find GFL useful in your research, please consider citing this project.

@inproceedings{li2020generalized,
  title={Generalized Focal Loss: Learning Qualified and Distributed Bounding Boxes for Dense Object Detection},
  author={Li, Xiang and Wang, Wenhai and Wu, Lijun and Chen, Shuo and Hu, Xiaolin and Li, Jun and Tang, Jinhui and Yang, Jian},
  booktitle={NeurIPS},
  year={2020}
}
@article{li2020generalizedv2,
  title={Generalized Focal Loss V2: Learning Reliable Localization Quality Estimation for Dense Object Detection},
  author={Li, Xiang and Wang, Wenhai and Hu, Xiaolin and Li, Jun and Tang, Jinhui and Yang, Jian},
  journal={arXiv preprint},
  year={2020}
}