
lufficc / SSD

License: MIT
High quality, fast, modular reference implementation of SSD in PyTorch

Programming Languages

Python

Projects that are alternatives to, or similar to, SSD

Ssd Knowledge Distillation
A PyTorch Implementation of Knowledge Distillation on SSD
Stars: ✭ 51 (-95.19%)
Mutual labels:  object-detection, ssd
Pytorch Ssd
MobileNetV1, MobileNetV2, VGG based SSD/SSD-lite implementation in Pytorch 1.0 / Pytorch 0.4. Out-of-box support for retraining on Open Images dataset. ONNX and Caffe2 support. Experiment Ideas like CoordConv.
Stars: ✭ 1,054 (-0.57%)
Mutual labels:  object-detection, ssd
Ssd Variants
PyTorch implementation of several SSD based object detection algorithms.
Stars: ✭ 233 (-78.02%)
Mutual labels:  object-detection, ssd
Traffic Sign Detection
Traffic Sign Detection. Code for the paper entitled "Evaluation of deep neural networks for traffic sign detection systems".
Stars: ✭ 200 (-81.13%)
Mutual labels:  object-detection, ssd
Ssd Tensorflow
Single Shot MultiBox Detector in TensorFlow
Stars: ✭ 4,066 (+283.58%)
Mutual labels:  object-detection, ssd
Paddledetection
Object Detection toolkit based on PaddlePaddle. It supports object detection, instance segmentation, multiple object tracking and real-time multi-person keypoint detection.
Stars: ✭ 5,799 (+447.08%)
Mutual labels:  object-detection, ssd
Object Detection Api Tensorflow
Object Detection API Tensorflow
Stars: ✭ 267 (-74.81%)
Mutual labels:  object-detection, ssd
Ssd keras
A Keras port of Single Shot MultiBox Detector
Stars: ✭ 1,763 (+66.32%)
Mutual labels:  object-detection, ssd
Rectlabel Support
RectLabel - An image annotation tool to label images for bounding box object detection and segmentation.
Stars: ✭ 338 (-68.11%)
Mutual labels:  object-detection, ssd
Fastmot
High-performance multiple object tracking based on YOLO, Deep SORT, and optical flow
Stars: ✭ 284 (-73.21%)
Mutual labels:  object-detection, ssd
Vip
Video Platform for Action Recognition and Object Detection in Pytorch
Stars: ✭ 175 (-83.49%)
Mutual labels:  object-detection, ssd
Ssd tensorflow traffic sign detection
Implementation of Single Shot MultiBox Detector in TensorFlow, to detect and classify traffic signs
Stars: ✭ 459 (-56.7%)
Mutual labels:  object-detection, ssd
A Pytorch Tutorial To Object Detection
SSD: Single Shot MultiBox Detector | a PyTorch Tutorial to Object Detection
Stars: ✭ 2,398 (+126.23%)
Mutual labels:  object-detection, ssd
Mmdetection
OpenMMLab Detection Toolbox and Benchmark
Stars: ✭ 17,646 (+1564.72%)
Mutual labels:  object-detection, ssd
Ssd keras
A concise SSD object detection model, Keras version (traffic sign recognition; see the dev branch for the training part)
Stars: ✭ 152 (-85.66%)
Mutual labels:  object-detection, ssd
Mmdetection To Tensorrt
convert mmdetection model to tensorrt, support fp16, int8, batch input, dynamic shape etc.
Stars: ✭ 262 (-75.28%)
Mutual labels:  object-detection, ssd
Tf Object Detection
Simpler app for tensorflow object detection API
Stars: ✭ 91 (-91.42%)
Mutual labels:  object-detection, ssd
Ssd Pytorch
SSD: Single Shot MultiBox Detector pytorch implementation focusing on simplicity
Stars: ✭ 107 (-89.91%)
Mutual labels:  object-detection, ssd
Ssd Pytorch
SSD object detection algorithm (Single Shot MultiBox Detector): simple, clear, easy to use, fully commented in Chinese, single-machine multi-GPU training, video detection. (If you train the model on a single machine with multiple GPUs, this program will be your best choice; it is easier to use and easier to understand.)
Stars: ✭ 276 (-73.96%)
Mutual labels:  object-detection, ssd
Ssd.pytorch
A PyTorch Implementation of Single Shot MultiBox Detector
Stars: ✭ 4,499 (+324.43%)
Mutual labels:  object-detection, ssd

High quality, fast, modular reference implementation of SSD in PyTorch 1.0

This repository implements SSD (Single Shot MultiBox Detector). The implementation is heavily influenced by the ssd.pytorch, pytorch-ssd and maskrcnn-benchmark projects. This repository aims to be the code base for research based on SSD.

Example SSD output (vgg_ssd300_voc0712).

Training curves: losses, learning rate, and evaluation metrics.

Highlights

  • PyTorch 1.0: Supports PyTorch 1.0 or higher.
  • Multi-GPU training and inference: We use DistributedDataParallel; you can train or test with an arbitrary number of GPUs, and the training schedule will change accordingly.
  • Modular: Add your own modules without pain. We abstract the backbone, Detector, BoxHead, BoxPredictor, etc., so you can replace any component with your own code without changing the code base. For example, to add EfficientNet as a backbone, just add efficient_net.py (ALREADY ADDED), register it, and specify it in the config file. Done! (See the sketch after this list.)
  • CPU support for inference: runs on the CPU at inference time.
  • Smooth and enjoyable training procedure: we save the state of the model, optimizer, scheduler, and training iteration, so you can stop training and resume it exactly from the saved point without changing your training command.
  • Batched inference: can perform inference on multiple images per batch per GPU.
  • Evaluating during training: evaluate your model every eval_step to check whether performance is improving.
  • Metrics visualization: visualize metric details in TensorBoard, such as AP, APl, APm and APs for the COCO dataset, or mAP and the 20 per-category APs for the VOC dataset.
  • Auto download: pre-trained weights are loaded from a URL and cached.
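
As a sketch of the modular design described above, adding a new backbone amounts to defining it and registering it under a name that the config file can reference. The module path ssd.modeling.registry, the decorator-based registry, and the MODEL.BACKBONE.NAME config key below are assumptions about the code base (see DEVELOP_GUIDE.md for the authoritative steps), not a verified API:

# Minimal sketch of registering a custom backbone. The registry module and
# constructor signature are assumptions; adapt them to DEVELOP_GUIDE.md.
import torch.nn as nn

from ssd.modeling import registry  # assumed registry module


@registry.BACKBONES.register('tiny_net')  # name referenced from the config file
class TinyNet(nn.Module):
    def __init__(self, cfg, pretrained=True):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True))
        self.stage2 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True))

    def forward(self, x):
        # SSD box heads consume a list of feature maps at different scales.
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        return [f1, f2]

The backbone would then be selected in the YAML config, e.g. MODEL.BACKBONE.NAME: 'tiny_net' (assuming that is the relevant key).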

Installation

Requirements

  1. Python3
  2. PyTorch 1.0 or higher
  3. yacs
  4. Vizer
  5. GCC >= 4.9
  6. OpenCV

Step-by-step installation

git clone https://github.com/lufficc/SSD.git
cd SSD
# Required packages: torch torchvision yacs tqdm opencv-python vizer
pip install -r requirements.txt

# Done! That's ALL! No BUILD! No bothering SETUP!

# It's recommended to install the latest release of torch and torchvision.
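
After installing, a quick sanity check (not part of the repository) confirms that torch and torchvision import correctly and whether a GPU is visible:

# Quick environment sanity check (illustrative, not part of the repository).
import torch
import torchvision

print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())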

Train

Setting Up Datasets

Pascal VOC

For the Pascal VOC dataset, make the folder structure like this:

VOC_ROOT
|__ VOC2007
    |_ JPEGImages
    |_ Annotations
    |_ ImageSets
    |_ SegmentationClass
|__ VOC2012
    |_ JPEGImages
    |_ Annotations
    |_ ImageSets
    |_ SegmentationClass
|__ ...

VOC_ROOT defaults to the datasets folder in the current project; you can either create symlinks to your datasets there or export VOC_ROOT="/path/to/voc_root".

COCO

For the COCO dataset, make the folder structure like this:

COCO_ROOT
|__ annotations
    |_ instances_valminusminival2014.json
    |_ instances_minival2014.json
    |_ instances_train2014.json
    |_ instances_val2014.json
    |_ ...
|__ train2014
    |_ <im-1-name>.jpg
    |_ ...
    |_ <im-N-name>.jpg
|__ val2014
    |_ <im-1-name>.jpg
    |_ ...
    |_ <im-N-name>.jpg
|__ ...

COCO_ROOT defaults to the datasets folder in the current project; you can either create symlinks to your datasets there or export COCO_ROOT="/path/to/coco_root".
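
Before launching training, a small stdlib-only script (illustrative, not part of the repository) can confirm that VOC_ROOT and COCO_ROOT point at the expected layouts:

# Illustrative check of the dataset layout described above.
import os
from pathlib import Path

voc_root = Path(os.environ.get("VOC_ROOT", "datasets"))
coco_root = Path(os.environ.get("COCO_ROOT", "datasets"))

for sub in ["VOC2007/JPEGImages", "VOC2007/Annotations", "VOC2012/JPEGImages"]:
    print(sub, "OK" if (voc_root / sub).is_dir() else "MISSING")

for sub in ["annotations", "train2014", "val2014"]:
    print(sub, "OK" if (coco_root / sub).is_dir() else "MISSING")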

Single GPU training

# for example, train SSD300:
python train.py --config-file configs/vgg_ssd300_voc0712.yaml

Multi-GPU training

# for example, train SSD300 with 4 GPUs:
export NGPUS=4
python -m torch.distributed.launch --nproc_per_node=$NGPUS train.py --config-file configs/vgg_ssd300_voc0712.yaml SOLVER.WARMUP_FACTOR 0.03333 SOLVER.WARMUP_ITERS 1000

The provided configuration files assume training on a single GPU. When changing the number of GPUs, the hyper-parameters (lr, max_iter, ...) should also be changed according to the paper Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour.
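
For example, under the linear scaling rule from that paper, moving from 1 GPU to 4 GPUs (4x the effective batch size) roughly multiplies the base learning rate by 4 and divides the iteration budget by 4. A toy calculation (the base values below are hypothetical, not taken from the provided configs):

# Toy illustration of the linear scaling rule (Goyal et al.); the base values
# are hypothetical, not taken from the provided config files.
base_lr, base_max_iter, base_gpus = 1e-3, 120000, 1
ngpus = 4

scale = ngpus / base_gpus
lr = base_lr * scale                    # 4e-3
max_iter = int(base_max_iter / scale)   # 30000
print(f"lr={lr}, max_iter={max_iter}")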

Evaluate

Single GPU evaluating

# for example, evaluate SSD300:
python test.py --config-file configs/vgg_ssd300_voc0712.yaml

Multi-GPU evaluating

# for example, evaluate SSD300 with 4 GPUs:
export NGPUS=4
python -m torch.distributed.launch --nproc_per_node=$NGPUS test.py --config-file configs/vgg_ssd300_voc0712.yaml

Demo

Predicting images in a folder is simple:

python demo.py --config-file configs/vgg_ssd300_voc0712.yaml --images_dir demo --ckpt https://github.com/lufficc/SSD/releases/download/1.2/vgg_ssd300_voc0712.pth

It will download and cache vgg_ssd300_voc0712.pth automatically, and the predicted images with boxes, scores and label names will be saved to the demo/result folder by default.

You will see output similar to this:

(0001/0005) 004101.jpg: objects 01 | load 010ms | inference 033ms | FPS 31
(0002/0005) 003123.jpg: objects 05 | load 009ms | inference 019ms | FPS 53
(0003/0005) 000342.jpg: objects 02 | load 009ms | inference 019ms | FPS 51
(0004/0005) 008591.jpg: objects 02 | load 008ms | inference 020ms | FPS 50
(0005/0005) 000542.jpg: objects 01 | load 011ms | inference 019ms | FPS 53
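
The automatic download-and-cache behaviour follows the same pattern as PyTorch's hub utilities; a minimal standalone sketch (not the repository's exact code) of fetching a released checkpoint by URL:

# Sketch of downloading and caching a checkpoint by URL; this mirrors the
# behaviour described above but is not the repository's exact code.
import torch

url = "https://github.com/lufficc/SSD/releases/download/1.2/vgg_ssd300_voc0712.pth"
checkpoint = torch.hub.load_state_dict_from_url(url, map_location="cpu")
print(len(checkpoint), "entries loaded")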

MODEL ZOO

Original paper:

Model     VOC2007 test    COCO test-dev2015
SSD300*   77.2            25.1
SSD512*   79.8            28.8

COCO:

Backbone   Input Size   box AP   Model Size   Download
VGG16      300          25.2     262MB        model
VGG16      512          29.0     275MB        model

PASCAL VOC:

Backbone          Input Size   mAP    Model Size   Download
VGG16             300          77.7   201MB        model
VGG16             512          80.7   207MB        model
Mobilenet V2      320          68.9   25.5MB       model
Mobilenet V3      320          69.5   29.9MB       model
EfficientNet-B3   300          73.9   97.1MB       model

Develop Guide

If you want to add your custom components, please see DEVELOP_GUIDE.md for more details.

Troubleshooting

If you have issues running or compiling this code, we have compiled a list of common issues in TROUBLESHOOTING.md. If your issue is not present there, please feel free to open a new issue.

Citations

If you use this project in your research, please cite this project.

@misc{lufficc2018ssd,
    author = {Congcong Li},
    title = {{High quality, fast, modular reference implementation of SSD in PyTorch}},
    year = {2018},
    howpublished = {\url{https://github.com/lufficc/SSD}}
}