choasup / Sin

CVPR 2018: Structure Inference Net for Object Detection

Programming Languages

python

Projects that are alternatives to or similar to Sin

A Pytorch Tutorial To Object Detection
SSD: Single Shot MultiBox Detector | a PyTorch Tutorial to Object Detection
Stars: ✭ 2,398 (+1247.19%)
Mutual labels:  object-detection
Object Detection
Object detection with ssd_mobilenet and tiny-yolo (Add: YOLOv3, tflite)
Stars: ✭ 173 (-2.81%)
Mutual labels:  object-detection
Vision3d
Research platform for 3D object detection in PyTorch.
Stars: ✭ 177 (-0.56%)
Mutual labels:  object-detection
Face mask detection
Face mask detection system using Deep learning.
Stars: ✭ 168 (-5.62%)
Mutual labels:  object-detection
Dbnet
DBNet: A Large-Scale Dataset for Driving Behavior Learning, CVPR 2018
Stars: ✭ 172 (-3.37%)
Mutual labels:  cvpr2018
Dockerface
Face detection using deep learning.
Stars: ✭ 173 (-2.81%)
Mutual labels:  object-detection
Ciou
Complete-IoU (CIoU) Loss and Cluster-NMS for Object Detection and Instance Segmentation (YOLACT)
Stars: ✭ 166 (-6.74%)
Mutual labels:  object-detection
Object Detection Api
Yolov3 Object Detection implemented as APIs, using TensorFlow and Flask
Stars: ✭ 177 (-0.56%)
Mutual labels:  object-detection
Yolo V3 Tensorflow
👷 YOLO v3 (TensorFlow 1.x) safety-helmet detection | dataset download and pre-trained models provided
Stars: ✭ 173 (-2.81%)
Mutual labels:  object-detection
Yolo v3 tutorial from scratch
Accompanying code for Paperspace tutorial series "How to Implement YOLO v3 Object Detector from Scratch"
Stars: ✭ 2,192 (+1131.46%)
Mutual labels:  object-detection
Ssd Tensorflow
A Single Shot MultiBox Detector in TensorFlow
Stars: ✭ 169 (-5.06%)
Mutual labels:  object-detection
Ownphotos Frontend
Stars: ✭ 171 (-3.93%)
Mutual labels:  object-detection
Deep Learning For Image Processing
Deep learning for image processing, including classification and object detection.
Stars: ✭ 5,808 (+3162.92%)
Mutual labels:  object-detection
Deepstream Yolo
NVIDIA DeepStream SDK 5.1 configuration for YOLO models
Stars: ✭ 166 (-6.74%)
Mutual labels:  object-detection
Vip
Video Platform for Action Recognition and Object Detection in Pytorch
Stars: ✭ 175 (-1.69%)
Mutual labels:  object-detection
Map
mean Average Precision - This code evaluates the performance of your neural net for object recognition.
Stars: ✭ 2,324 (+1205.62%)
Mutual labels:  object-detection
Tf deformable net
Deformable convolution net on Tensorflow
Stars: ✭ 173 (-2.81%)
Mutual labels:  object-detection
Torchdistill
PyTorch-based modular, configuration-driven framework for knowledge distillation. 🏆 18 methods including SOTA are implemented so far. 🎁 Trained models, training logs and configurations are available to ensure reproducibility.
Stars: ✭ 177 (-0.56%)
Mutual labels:  object-detection
Vdetlib
Video detection library
Stars: ✭ 177 (-0.56%)
Mutual labels:  object-detection
Yoloncs
YOLO object detector for Movidius Neural Compute Stick (NCS)
Stars: ✭ 176 (-1.12%)
Mutual labels:  object-detection

SIN

Structure Inference Net: Object Detection Using Scene-level Context and Instance-level Relationships. In CVPR 2018. (paper: http://vipl.ict.ac.cn/uploadfile/upload/2018041318013480.pdf)

Requirements: software

  1. Requirements for TensorFlow 1.3.0 (see the TensorFlow installation instructions)

  2. Python packages you might not have: cython, python-opencv, easydict
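
     If any of these are missing, they can be installed with pip (a sketch; opencv-python is assumed here as the pip package name for python-opencv):

     pip install cython opencv-python easydict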

Installation (sufficient for the demo)

  1. Clone the SIN repository

    # Make sure to clone with --recursive
    git clone --recursive https://github.com/choasUp/SIN.git

  2. Build the Cython modules

    cd $SIN_ROOT/lib
    make

Demo

After successfully completing basic installation, you'll be ready to run the demo.

Wait ...

Training Model

  1. Download the training, validation, test data and VOCdevkit

    wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtrainval_06-Nov-2007.tar
    wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtest_06-Nov-2007.tar
    wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCdevkit_08-Jun-2007.tar
    
  2. Extract all of these tars into one directory named VOCdevkit

    tar xvf VOCtrainval_06-Nov-2007.tar
    tar xvf VOCtest_06-Nov-2007.tar
    tar xvf VOCdevkit_08-Jun-2007.tar
    
  3. It should have this basic structure

    $VOCdevkit/                           # development kit
    $VOCdevkit/VOCcode/                   # VOC utility code
    $VOCdevkit/VOC2007                    # image sets, annotations, etc.
    # ... and several other directories ...
    
  4. Create symlinks for the PASCAL VOC dataset

    cd $SIN_ROOT/data
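    # $VOCdevkit is the absolute path to the directory created in step 2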
    ln -s $VOCdevkit VOCdevkit
    
  5. Download the pre-trained ImageNet models [Google Drive] [Dropbox]

     mv VGG_imagenet.npy $SIN_ROOT/data/pretrain_model/VGG_imagenet.npy
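
     To sanity-check the download, the weights file can be inspected with NumPy (a sketch; it assumes VGG_imagenet.npy stores a pickled dict of layer weights, as in the Faster R-CNN tf codebase, and a TF 1.3-era NumPy; with a newer NumPy, pass allow_pickle=True to np.load):

     # hypothetical check: print the layer names stored in the weights file
     python -c "import numpy as np; print(sorted(np.load('data/pretrain_model/VGG_imagenet.npy').item().keys()))"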
    
  6. [optional] Set learning rate and max iter

    vim experiments/scripts/faster_rcnn_end2end.sh    # set ITERS (the maximum number of iterations)
    vim lib/fast_rcnn/config.py                       # set the learning rate (LR)
    cd lib                                            # if you edit the code, re-run make
    make
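
    A quick way to locate the two settings before editing (a sketch; LEARNING_RATE is the key name used in py-faster-rcnn-style configs and is an assumption here):

    grep -n "ITERS" experiments/scripts/faster_rcnn_end2end.sh
    grep -n "LEARNING_RATE" lib/fast_rcnn/config.py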
    
  7. Set your GPU id, then run script to train and test model

    cd $SIN_ROOT
    export CUDA_VISIBLE_DEVICES=0
    ./train.sh
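
    For a long training run, you can keep a log and run the script in the background (a generic shell sketch; train.log is a hypothetical file name, not produced by the repo's scripts):

    nohup ./train.sh > train.log 2>&1 &    # run in the background, capture output
    tail -f train.log                      # follow progress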
    
  8. Test the trained model

    ./test_all.sh
    

Results on the PASCAL VOC 2007 test set (VGG net)

AP for aeroplane = 0.7853
AP for bicycle = 0.8045
AP for bird = 0.7456
AP for boat = 0.6657
AP for bottle = 0.6144
AP for bus = 0.8424
AP for car = 0.8663
AP for cat = 0.8894
AP for chair = 0.5803
AP for cow = 0.8466
AP for diningtable = 0.7171
AP for dog = 0.8578
AP for horse = 0.8626
AP for motorbike = 0.7802
AP for person = 0.7857
AP for pottedplant = 0.4869
AP for sheep = 0.7599
AP for sofa = 0.7351
AP for train = 0.8199
AP for tvmonitor = 0.7683
Mean AP = 0.7607
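
As a quick consistency check, the reported mean is the average of the 20 per-class APs above (a one-liner sketch):

    python -c "aps = [0.7853, 0.8045, 0.7456, 0.6657, 0.6144, 0.8424, 0.8663, 0.8894, 0.5803, 0.8466, 0.7171, 0.8578, 0.8626, 0.7802, 0.7857, 0.4869, 0.7599, 0.7351, 0.8199, 0.7683]; print(round(sum(aps) / len(aps), 4))"    # prints 0.7607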

References

Faster R-CNN caffe version

Faster R-CNN tf version

Citation

Yong Liu, Ruiping Wang, Shiguang Shan, and Xilin Chen. Structure Inference Net: Object Detection Using Scene-level Context and Instance-level Relationships. In CVPR 2018.
