zhreshold / Mxnet Ssd

License: MIT
MXNet port of SSD: Single Shot MultiBox Object Detector. Reimplementation of https://github.com/weiliu89/caffe/tree/ssd

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to Mxnet Ssd

Kl Loss
Bounding Box Regression with Uncertainty for Accurate Object Detection (CVPR'19)
Stars: ✭ 624 (-18.54%)
Mutual labels:  object-detection
Yolo Tf2
yolo(all versions) implementation in keras and tensorflow 2.4
Stars: ✭ 695 (-9.27%)
Mutual labels:  object-detection
Bmw Tensorflow Training Gui
This repository allows you to get started with a gui based training a State-of-the-art Deep Learning model with little to no configuration needed! NoCode training with TensorFlow has never been so easy.
Stars: ✭ 736 (-3.92%)
Mutual labels:  object-detection
Nudenet
Neural Nets for Nudity Detection and Censoring
Stars: ✭ 642 (-16.19%)
Mutual labels:  object-detection
Preciseroipooling
Precise RoI Pooling with coordinate gradient support, proposed in the paper "Acquisition of Localization Confidence for Accurate Object Detection" (https://arxiv.org/abs/1807.11590).
Stars: ✭ 689 (-10.05%)
Mutual labels:  object-detection
Openlabeling
Label images and video for Computer Vision applications
Stars: ✭ 706 (-7.83%)
Mutual labels:  object-detection
Tf trt models
TensorFlow models accelerated with NVIDIA TensorRT
Stars: ✭ 621 (-18.93%)
Mutual labels:  object-detection
Deepcamera
Open source face recognition on Raspberry Pi. SharpAI is open source stack for machine learning engineering with private deployment and AutoML for edge computing. DeepCamera is application of SharpAI designed for connecting computer vision model to surveillance camera. Developers can run same code on Raspberry Pi/Android/PC/AWS to boost your AI production development.
Stars: ✭ 757 (-1.17%)
Mutual labels:  object-detection
Complex Yolov4 Pytorch
The PyTorch Implementation based on YOLOv4 of the paper: "Complex-YOLO: Real-time 3D Object Detection on Point Clouds"
Stars: ✭ 691 (-9.79%)
Mutual labels:  object-detection
Imageai
A python library built to empower developers to build applications and systems with self-contained Computer Vision capabilities
Stars: ✭ 6,734 (+779.11%)
Mutual labels:  object-detection
Centermask
CenterMask : Real-Time Anchor-Free Instance Segmentation, in CVPR 2020
Stars: ✭ 646 (-15.67%)
Mutual labels:  object-detection
Freeanchor
FreeAnchor: Learning to Match Anchors for Visual Object Detection (NeurIPS 2019)
Stars: ✭ 660 (-13.84%)
Mutual labels:  object-detection
Tensorflow Face Detection
A mobilenet SSD based face detector, powered by tensorflow object detection api, trained by WIDERFACE dataset.
Stars: ✭ 711 (-7.18%)
Mutual labels:  object-detection
Retinanet Examples
Fast and accurate object detection with end-to-end GPU optimization
Stars: ✭ 631 (-17.62%)
Mutual labels:  object-detection
Getting Things Done With Pytorch
Jupyter Notebook tutorials on solving real-world problems with Machine Learning & Deep Learning using PyTorch. Topics: Face detection with Detectron 2, Time Series anomaly detection with LSTM Autoencoders, Object Detection with YOLO v5, Build your first Neural Network, Time Series forecasting for Coronavirus daily cases, Sentiment Analysis with BERT.
Stars: ✭ 738 (-3.66%)
Mutual labels:  object-detection
Gaussian yolov3
Gaussian YOLOv3: An Accurate and Fast Object Detector Using Localization Uncertainty for Autonomous Driving (ICCV, 2019)
Stars: ✭ 622 (-18.8%)
Mutual labels:  object-detection
Dsod
DSOD: Learning Deeply Supervised Object Detectors from Scratch. In ICCV 2017.
Stars: ✭ 700 (-8.62%)
Mutual labels:  object-detection
Orb slam2 ssd semantic
Dynamic semantic SLAM: object detection + VSLAM + optical flow / multi-view geometry dynamic object detection + OctoMap mapping + object database
Stars: ✭ 763 (-0.39%)
Mutual labels:  object-detection
Avod
Code for 3D object detection for autonomous driving
Stars: ✭ 757 (-1.17%)
Mutual labels:  object-detection
Opencvtutorials
OpenCV-Python 4.1 Chinese documentation
Stars: ✭ 720 (-6.01%)
Mutual labels:  object-detection

SSD: Single Shot MultiBox Object Detector

SSD is a unified framework for object detection with a single network.

You can use the code to train, evaluate, and test models for the object detection task.

Disclaimer

This is a re-implementation of the original SSD, which is based on Caffe. The official repository is available here. The arXiv paper is available here.

This example is intended to reproduce the original detector while fully utilizing the strengths of MXNet.

  • The model is fully compatible with the Caffe version.
  • A model converter from Caffe is available now!
  • The results are almost identical to the original version, though they may differ slightly due to implementation details.

What's new

  • This repo is now deprecated. I am migrating to the latest Gluon-CV, which is more user friendly and has many more algorithms in development. This repo will not receive active development; however, you can continue to use it with mxnet 1.1.0 (and probably 1.2.0).
  • This repo is now internally synchronized up to date with the official mxnet backend, so pip install mxnet will work for this repo as well in most cases.
  • MobileNet pretrained model is now provided.
  • Added multiple trained models.
  • Added a much simpler way to compose a network from mainstream classification networks (ResNet, Inception, ...); see the Guide.
  • Updated to match the latest Caffe version, with a 5% mAP increase.
  • Use a C++ record iterator based on the back-end multi-threaded engine to achieve a large speed-up in multi-GPU environments.
  • Monitor validation mAP during training.
  • More network symbols under development and test.
  • Extra operators are now in mxnet/src/operator/contrib and symbols have been modified. Please use Release-v0.2-beta for old models.
  • Added Docker support for this repository, with prebuilt images including all packages and dependencies (Linux only).
  • Added TensorBoard support for more convenient monitoring of experiments (Linux only).

Demo results

[Demo images: demo1, demo2, demo3]

mAP

Model | Training data | Test data | mAP | Note
--- | --- | --- | --- | ---
VGG16_reduced 300x300 | VOC07+12 trainval | VOC07 test | 77.8 | fast
VGG16_reduced 512x512 | VOC07+12 trainval | VOC07 test | 79.9 | slow
Inception-v3 512x512 | VOC07+12 trainval | VOC07 test | 78.9 | fastest
Resnet-50 512x512 | VOC07+12 trainval | VOC07 test | 79.1 | fast
MobileNet 512x512 | VOC07+12 trainval | VOC07 test | 72.5 | super fast
MobileNet 608x608 | VOC07+12 trainval | VOC07 test | 74.7 | super fast

More to be added

Speed

Model | GPU | CUDNN | Batch-size | FPS*
--- | --- | --- | --- | ---
VGG16_reduced 300x300 | TITAN X (Maxwell) | v5.1 | 16 | 95
VGG16_reduced 300x300 | TITAN X (Maxwell) | v5.1 | 8 | 95
VGG16_reduced 300x300 | TITAN X (Maxwell) | v5.1 | 1 | 64
VGG16_reduced 300x300 | TITAN X (Maxwell) | N/A | 8 | 36
VGG16_reduced 300x300 | TITAN X (Maxwell) | N/A | 1 | 28

* Forward time only; data loading and drawing are excluded.

Getting started

  • Option #1 - install using Docker. If you are not familiar with this technology, see the Docker section below. You can get the latest image:
docker pull daviddocker78/mxnet-ssd:gpu_0.12.0_cuda9
  • You will need the Python modules cv2, matplotlib, and numpy. If you use the mxnet-python API, you probably already have them. You can install them via pip or a package manager such as apt-get:
sudo apt-get install python-opencv python-matplotlib python-numpy
  • Clone this repo:
# if you don't have git, install it via apt or homebrew/yum based on your system
sudo apt-get install git
# cd where you would like to clone this repo
cd ~
git clone --recursive https://github.com/zhreshold/mxnet-ssd.git
# make sure you clone this with --recursive
# if not done correctly or you are using downloaded repo, pull them all via:
# git submodule update --recursive --init
cd mxnet-ssd/mxnet
  • (Skip this step if you have official MXNet installed.) Build MXNet: cd /path/to/mxnet-ssd/mxnet and follow the official instructions here.
# for Ubuntu/Debian
cp make/config.mk ./config.mk
# modify it if necessary

Remember to enable CUDA in config.mk if you want to train, since CPU training is extremely slow. Using CUDNN is optional, but highly recommended.
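For reference, these are the flags to check in config.mk (a minimal sketch; the CUDA path below is an example, adjust it to your installation):

# enable GPU support (USE_CUDA_PATH must point to your CUDA install)
USE_CUDA = 1
USE_CUDA_PATH = /usr/local/cuda
# optional but strongly recommended for training speed
USE_CUDNN = 1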

Try the demo

# cd /path/to/mxnet-ssd
python demo.py --gpu 0
# play with examples:
python demo.py --epoch 0 --images ./data/demo/dog.jpg --thresh 0.5
python demo.py --cpu --network resnet50 --data-shape 512
# wait for library to load for the first time
  • Check python demo.py --help for more options.

Train the model

This example only covers training on the Pascal VOC dataset. Other datasets can be supported by adding a subclass derived from the Imdb class in dataset/imdb.py; see dataset/pascal_voc.py for a full example.
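As a rough sketch (not a drop-in implementation: the file names, helper logic, and label storage below are made up for illustration, and the method names mirror dataset/pascal_voc.py, so check the Imdb base class for the exact interface), a custom dataset could look like:

# assumes this file lives in dataset/, next to imdb.py
import os
import numpy as np
from imdb import Imdb

class MyDataset(Imdb):
    def __init__(self, image_set, root_path, shuffle=False):
        super(MyDataset, self).__init__('mydataset_' + image_set)
        self.classes = ['class_a', 'class_b']        # your own class names
        self.num_classes = len(self.classes)
        self.root_path = root_path
        # one image id per line in e.g. root_path/train.txt (hypothetical layout)
        with open(os.path.join(root_path, image_set + '.txt')) as f:
            self.image_set_index = [line.strip() for line in f if line.strip()]
        self.num_images = len(self.image_set_index)

    def image_path_from_index(self, index):
        # full path of the image for a given index
        return os.path.join(self.root_path, 'images', self.image_set_index[index] + '.jpg')

    def label_from_index(self, index):
        # rows of [cls_id, xmin, ymin, xmax, ymax], coordinates normalized to 0-1,
        # here loaded from a pre-computed .npy file per image (hypothetical)
        return np.load(os.path.join(self.root_path, 'labels', self.image_set_index[index] + '.npy'))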

  • Download the converted pretrained vgg16_reduced model here, and unzip the .param and .json files into the model/ directory (the default location).
  • Download the PASCAL VOC dataset; skip this step if you already have it.
cd /path/to/where_you_store_datasets/
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtrainval_06-Nov-2007.tar
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtest_06-Nov-2007.tar
# Extract the data.
tar -xvf VOCtrainval_11-May-2012.tar
tar -xvf VOCtrainval_06-Nov-2007.tar
tar -xvf VOCtest_06-Nov-2007.tar
  • Following common practice, we use the trainval sets of VOC2007 and VOC2012 for training. The suggested directory structure is to keep the VOC2007 and VOC2012 directories in the same VOCdevkit folder (a layout sketch follows below).
  • Then link the VOCdevkit folder to data/VOCdevkit (the default location):
ln -s /path/to/VOCdevkit /path/to/this_example/data/VOCdevkit

Using a link instead of copying the data saves a bit of disk space.
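Assuming both VOC2007 and VOC2012 are extracted, the expected layout is roughly:

data/VOCdevkit/
    VOC2007/
        Annotations/
        ImageSets/
        JPEGImages/
        ...
    VOC2012/
        Annotations/
        ImageSets/
        JPEGImages/
        ...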

  • Create packed binary file for faster training:
# cd /path/to/mxnet-ssd
bash tools/prepare_pascal.sh
# or if you are using windows
python tools/prepare_dataset.py --dataset pascal --year 2007,2012 --set trainval --target ./data/train.lst
python tools/prepare_dataset.py --dataset pascal --year 2007 --set test --target ./data/val.lst --shuffle False
  • Start training:
python train.py
  • By default, this example uses batch-size 32 and learning rate 0.004. You might need to change the parameters a bit if you have a different configuration. Check python train.py --help for more training options. For example, if you have 4 GPUs, use:
# note that a perfect training parameter set is yet to be discovered for multi-gpu
python train.py --gpus 0,1,2,3 --batch-size 128 --lr 0.001
  • Memory usage: MXNet is very memory efficient. Training the VGG16_reduced model with batch-size 32 takes around 4684 MB without CUDNN (with conv1_x and conv2_x fixed).

Evaluate the trained model

Use:

# cd /path/to/mxnet-ssd
python evaluate.py --gpus 0,1 --batch-size 128 --epoch 0

Convert model to deploy mode

This simply removes all loss layers and attaches a layer that merges results and performs non-maximum suppression. It is useful when loading the Python symbol definition is not possible (a loading sketch follows the commands below).

# cd /path/to/mxnet-ssd
python deploy.py --num-class 20
# then you can run demo with new model without loading python symbol
python demo.py --prefix model/ssd_300_deploy --epoch 0 --deploy
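For illustration, a deployed model can then be loaded with the plain MXNet module API, without importing any symbol code from this repo. This is a minimal sketch: the prefix, epoch, and input size are examples, and the output row format of [class id, score, xmin, ymin, xmax, ymax] follows what the demo script parses.

import mxnet as mx

# adjust prefix/epoch/data_shape to the model exported with deploy.py
prefix, epoch, data_shape = 'model/ssd_300_deploy', 0, 300
sym, arg_params, aux_params = mx.model.load_checkpoint(prefix, epoch)
mod = mx.mod.Module(sym, data_names=('data',), label_names=None, context=mx.cpu())
mod.bind(data_shapes=[('data', (1, 3, data_shape, data_shape))], for_training=False)
mod.set_params(arg_params, aux_params)

# dummy input; in practice resize an image to (data_shape, data_shape), subtract the
# mean pixel values, and transpose to (1, 3, H, W)
img = mx.nd.zeros((1, 3, data_shape, data_shape))
mod.forward(mx.io.DataBatch([img]))
detections = mod.get_outputs()[0].asnumpy()
# each row is expected to be [class_id, score, xmin, ymin, xmax, ymax] (normalized coords);
# rows with class_id < 0 are padding
valid = detections[0][detections[0, :, 0] >= 0]
print(valid[:5])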

Convert caffemodel

The converter from Caffe is available at /path/to/mxnet-ssd/tools/caffe_converter.

It is specifically modified to handle the custom layers in caffe-ssd. Usage:

cd /path/to/mxnet-ssd/tools/caffe_converter
make
python convert_model.py deploy.prototxt name_of_pretrained_caffe_model.caffemodel ssd_converted
# you will use this model in deploy mode without loading from python symbol
python demo.py --prefix ssd_converted --epoch 1 --deploy

There is no guarantee that conversion will always work, but at least it's good for now.

Legacy models

Since the new interface for composing networks was introduced, the old models have inconsistent weight names. You can still load a previous model by renaming its symbol to legacy_xxx.py and calling python train/demo.py --network legacy_xxx. For example:

python demo.py --network 'legacy_vgg16_ssd_300.py' --prefix model/ssd_300 --epoch 0

Docker

First make sure Docker is installed. The nvidia-docker plugin is required to run on NVIDIA GPUs.

docker pull daviddocker78/mxnet-ssd:gpu_0.12.0_cuda9

Otherwise, if you wish to build the image yourself, the Dockerfiles are available in this repo under the docker folder.

  • To run a container instance:
nvidia-docker run -it --rm myImageName:tag

Now you can execute commands the same way as you would if you had installed mxnet on your own machine. For more information, see the Guide.

Tensorboard

  • There has been a great effort to bring TensorBoard to mxnet. If you chose to work with Docker, it is already installed in the pre-built image you downloaded; otherwise, follow the installation steps here.
  • To save training loss graphs, validation AP per class, and validation ROC graphs to TensorBoard while training, specify:
python train.py --gpus 0,1,2,3 --batch-size 128 --lr 0.001 --tensorboard True
  • To also save the distributions of layers (specifically, their variance), specify:
python train.py --gpus 0,1,2,3 --batch-size 128 --lr 0.001 --tensorboard True --monitor 40
  • Visualization with Docker: the TensorBoard UI has changed over time. To get the best experience, download the newer TensorFlow docker image:
# download the built image from Dockerhub
docker pull tensorflow/tensorflow:1.4.0-devel-gpu
# run a container and open a port using '-p' flag. 
# attach a volume from where you stored your logs, to a directory inside the container
nvidia-docker run -it --rm -p 0.0.0.0:6006:6006 -v /my/full/experiment/path:/res tensorflow/tensorflow:1.4.0-devel-gpu
cd /res
tensorboard --logdir=.

To launch TensorBoard without Docker, simply run the last command. TensorBoard will then load the event files of your experiment; open your browser at 0.0.0.0:6006 and you will have TensorBoard.

Tensorboard visualizations

[TensorBoard screenshots: training loss, validation AP, validation ROC]
