
Sundrops / pytorch-faster-rcnn

License: MIT

Programming Languages

Python, C, Shell, CUDA

Projects that are alternatives of or similar to pytorch-faster-rcnn

Paddledetection
Object Detection toolkit based on PaddlePaddle. It supports object detection, instance segmentation, multiple object tracking and real-time multi-person keypoint detection.
Stars: ✭ 5,799 (+12786.67%)
Mutual labels:  faster-rcnn, mask-rcnn
Mmdetection
OpenMMLab Detection Toolbox and Benchmark
Stars: ✭ 17,646 (+39113.33%)
Mutual labels:  faster-rcnn, mask-rcnn
smd
Simple mmdetection CPU inference
Stars: ✭ 27 (-40%)
Mutual labels:  faster-rcnn, mask-rcnn
keras-faster-rcnn
Keras implementation of Faster R-CNN with end-to-end training and prediction; under continuous development (see the todo); trying it out, following the project, and reporting issues are all welcome
Stars: ✭ 85 (+88.89%)
Mutual labels:  faster-rcnn
Faster RCNN tensorflow
Implementation of Faster RCNN for Vehicle Detection
Stars: ✭ 16 (-64.44%)
Mutual labels:  faster-rcnn
publications-tabelini-ijcnn-2019
Effortless Deep Training for Traffic Sign Detection Using Templates and Arbitrary Natural Images
Stars: ✭ 19 (-57.78%)
Mutual labels:  faster-rcnn
Swin-Transformer
This is an official implementation for "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows".
Stars: ✭ 8,046 (+17780%)
Mutual labels:  mask-rcnn
Skin-Cancer-Segmentation
Classification and Segmentation with Mask-RCNN of Skin Cancer using ISIC dataset
Stars: ✭ 61 (+35.56%)
Mutual labels:  mask-rcnn
UnderTheSea
Fish instance segmentation using Mask-RCNN
Stars: ✭ 30 (-33.33%)
Mutual labels:  mask-rcnn
bird species classification
Supervised classification of bird species 🐦 in high-resolution images, especially Himalayan birds, which have diverse species with a fairly low amount of labelled data
Stars: ✭ 59 (+31.11%)
Mutual labels:  mask-rcnn
mrcnn serving ready
🛠 Mask R-CNN Keras to Tensorflow and TFX models + Serving models using TFX GRPC & RESTAPI
Stars: ✭ 96 (+113.33%)
Mutual labels:  mask-rcnn
Faster-RCNN-LocNet
A simplified implementation of the paper: Improved Localization Accuracy by LocNet for Faster R-CNN Based Text Detection
Stars: ✭ 25 (-44.44%)
Mutual labels:  faster-rcnn
Depth-VRD
Improving Visual Relation Detection using Depth Maps (ICPR 2020)
Stars: ✭ 33 (-26.67%)
Mutual labels:  faster-rcnn
rt-mrcnn
Real time instance segmentation with Mask R-CNN, live from webcam feed.
Stars: ✭ 47 (+4.44%)
Mutual labels:  mask-rcnn
DeepFashion MRCNN
Fashion Item segmentation with Mask_RCNN
Stars: ✭ 29 (-35.56%)
Mutual labels:  mask-rcnn
Object-Detection-And-Tracking
Target detection in the first frame and Tracking target by SiamRPN.
Stars: ✭ 33 (-26.67%)
Mutual labels:  faster-rcnn
object-tracking
Multiple Object Tracking System in Keras + (Detection Network - YOLO)
Stars: ✭ 89 (+97.78%)
Mutual labels:  faster-rcnn
Mask-YOLO
Inspired by Mask R-CNN to build a multi-task learning, two-branch architecture: one branch based on YOLOv2 for object detection, the other branch for instance segmentation. Simply tested on Rice and Shapes. MobileNet supported.
Stars: ✭ 100 (+122.22%)
Mutual labels:  mask-rcnn
tf-faster-rcnn
Tensorflow 2 Faster-RCNN implementation from scratch, supporting batch processing, with MobileNetV2 and VGG16 backbones
Stars: ✭ 88 (+95.56%)
Mutual labels:  faster-rcnn
lightDenseYOLO
A real-time object detection app based on lightDenseYOLO. Our lightDenseYOLO combines two components: lightDenseNet as the CNN feature extractor and YOLO v2 as the detection module
Stars: ✭ 20 (-55.56%)
Mutual labels:  faster-rcnn

pytorch-faster-rcnn

Forked from ruotianluo/pytorch-faster-rcnn: a PyTorch implementation of the Faster R-CNN detection framework, based on Xinlei Chen's tf-faster-rcnn. Xinlei Chen's repository is in turn based on the Python Caffe implementation of Faster R-CNN available here.

Special Notes

This repository is a modification of pytorch-faster-rcnn: a mask branch has been added to the network, implementing Mask R-CNN without FPN. RoIAlign follows an approach similar to tf-faster-rcnn, which differs slightly from Kaiming He's paper. The relevant configuration is in experiments/cfgs/vgg16.yml (excerpted below).

  • Faster R-CNN, reproduced mAP: 0.708
  • Mask R-CNN without FPN
  • Light-Head R-CNN, reproduced mAP: 0.711
# To fuse global features, a U-Net-like module is added before RoI pooling
ZDF_GAUSSIAN: False
ZDF: True
# A fine-grained sub-classification task is added on top of the original classification,
# aiming to improve the original classification, detection and mask via multi-task learning
SUB_CATEGORY: False
LOSS_SUB_CATEGORY_W: 0.5
# These two parameters correspond to different POOLING_MODE settings
# pyramid_crop_sum: pyramid RoI (scales 1, 1.5, 2), summed
# pyramid_crop: pyramid RoI features concatenated, then reduced in dimension
# Other modes may change the final output channel count away from 512, so FC6_IN_CHANNEL must be adjusted accordingly
POOLING_MODE: crop
# Whether to run the mask branch
DO_PARSING: True
# Training is done in two steps: first train detection with DO_PARSING: False
# Then freeze all detection parameters and train only the mask branch
# For convenience, after detection training, rename the final model to vgg16, put it under data/imagenet_weights, and set FIX_FEAT: True
FIX_FEAT: True
# Light-Head R-CNN outputs a k*7*7 feature; here k is set to 10, so 10*7*7 = 490
# One fc hidden layer is removed, keeping only a single 2048-d fc hidden layer (no dropout)
# The large-kernel Cmid is set to 128
LIGHT_RCNN: True
FC6_IN_CHANNEL: 490
FC7_OUT_CHANNEL: 2048
# pascal_voc dataset, vgg16 network, default tag
./experiments/scripts/train_faster_rcnn.sh 0 pascal_voc vgg16 default
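
The LIGHT_RCNN numbers above fit together as follows: with k = 10 position-sensitive channels and a 7*7 RoI grid, each flattened RoI feature has 10*7*7 = 490 elements (FC6_IN_CHANNEL), which then feeds a single 2048-d fc hidden layer (FC7_OUT_CHANNEL) with no dropout. Below is a minimal PyTorch sketch of that head; the module names and the num_classes value are illustrative, not the repository's actual classes.

import torch
import torch.nn as nn

# Illustrative head only (not this repository's code): shows how
# FC6_IN_CHANNEL = 490 and FC7_OUT_CHANNEL = 2048 relate to k = 10 and a 7x7 RoI grid.
k, pool_size, fc_out = 10, 7, 2048
num_classes = 21                           # e.g. 20 Pascal VOC classes + background

head = nn.Sequential(
    nn.Flatten(),                          # (N, 10, 7, 7) -> (N, 490)
    nn.Linear(k * pool_size * pool_size, fc_out),   # single fc hidden layer, no dropout
    nn.ReLU(inplace=True),
)
cls_score = nn.Linear(fc_out, num_classes)
bbox_pred = nn.Linear(fc_out, num_classes * 4)

rois = torch.randn(8, k, pool_size, pool_size)       # fake pooled RoI features
feat = head(rois)                                     # (8, 2048)
print(cls_score(feat).shape, bbox_pred(feat).shape)   # (8, 21), (8, 84)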

Train your own model

  1. Download pre-trained models and weights. The current code supports VGG16 and Resnet V1 models. Pre-trained models are provided by pytorch-vgg and pytorch-resnet (the ones with caffe in the name); download them and place them in the data/imagenet_weights folder. For example, for the VGG16 model, you can set it up like this:

    mkdir -p data/imagenet_weights
    cd data/imagenet_weights
    python  # open a Python shell in the terminal and run the following code
    import torch
    from torch.utils.model_zoo import load_url

    # Download the Caffe-converted VGG16 weights and shift the classifier
    # Linear parameters from indices 1/4 to 0/3.
    sd = load_url("https://s3-us-west-2.amazonaws.com/jcjohns-models/vgg16-00b39a1b.pth")
    sd['classifier.0.weight'] = sd['classifier.1.weight']
    sd['classifier.0.bias'] = sd['classifier.1.bias']
    del sd['classifier.1.weight']
    del sd['classifier.1.bias']

    sd['classifier.3.weight'] = sd['classifier.4.weight']
    sd['classifier.3.bias'] = sd['classifier.4.bias']
    del sd['classifier.4.weight']
    del sd['classifier.4.bias']

    torch.save(sd, "vgg16.pth")
    exit()  # leave the Python shell
    cd ../..

    For Resnet101, you can set it up like this:

    mkdir -p data/imagenet_weights
    cd data/imagenet_weights
    # download resnet101-caffe.pth from the Google Drive link in pytorch-resnet
    mv resnet101-caffe.pth res101.pth
    cd ../..
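    Before moving on, you can check that the converted vgg16.pth actually loads into torchvision's VGG16. This is only a minimal sanity check, assuming the remapped keys now follow torchvision's vgg16 naming; the strict load below will raise an error if they do not:

    import torch
    from torchvision import models

    # Optional check (assumption: after the remapping above, the checkpoint keys
    # match torchvision's vgg16). load_state_dict is strict by default, so it
    # raises a RuntimeError on any missing or unexpected key.
    sd = torch.load("data/imagenet_weights/vgg16.pth", map_location="cpu")
    model = models.vgg16()          # randomly initialized torchvision VGG16
    model.load_state_dict(sd)
    print("vgg16.pth matches torchvision's VGG16 layout")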
  2. Train (and optionally test and evaluate)

./experiments/scripts/train_faster_rcnn.sh [GPU_ID] [DATASET] [NET] [TAG]
# GPU_ID is the GPU you want to train on
# NET in {vgg16, res50, res101, res152} is the network arch to use
# DATASET {pascal_voc, pascal_voc_0712, coco} is defined in train_faster_rcnn.sh
# Examples:
./experiments/scripts/train_faster_rcnn.sh 0 pascal_voc vgg16 default
./experiments/scripts/train_faster_rcnn.sh 1 coco res101 yourtag

Note: Please double-check that you have deleted the soft link to the pre-trained models before training. If you find NaNs during training, please refer to Issue 86. If you want multi-GPU support, check out Issue 121.

  3. Visualization with Tensorboard
tensorboard --logdir=tensorboard/vgg16/voc_2007_trainval/ --port=7001 &
tensorboard --logdir=tensorboard/vgg16/coco_2014_train+coco_2014_valminusminival/ --port=7002 &
  4. Test and evaluate
./experiments/scripts/test_faster_rcnn.sh [GPU_ID] [DATASET] [NET]
# GPU_ID is the GPU you want to test on
# NET in {vgg16, res50, res101, res152} is the network arch to use
# DATASET {pascal_voc, pascal_voc_0712, coco} is defined in test_faster_rcnn.sh
# Examples:
./experiments/scripts/test_faster_rcnn.sh 0 pascal_voc vgg16
./experiments/scripts/test_faster_rcnn.sh 1 coco res101
  5. You can use tools/reval.sh for re-evaluation

By default, trained networks are saved under:

output/[NET]/[DATASET]/default/

Test outputs are saved under:

output/[NET]/[DATASET]/default/[SNAPSHOT]/

Tensorboard information for train and validation is saved under:

tensorboard/[NET]/[DATASET]/default/
tensorboard/[NET]/[DATASET]/default_val/