
liruilong940607 / Ochumanapi

License: MIT
API for the dataset proposed in "Pose2Seg: Detection Free Human Instance Segmentation" @ CVPR2019.


Projects that are alternatives of or similar to Ochumanapi

Perception toolkit for sim2real training and validation
Stars: ✭ 208 (+23.81%)
Mutual labels:  segmentation, pose-estimation, detection
Variants of Vision Transformer and its downstream tasks
Stars: ✭ 124 (-26.19%)
Mutual labels:  detection, segmentation, pose-estimation
A pytorch implementation of Detectron. Both training from scratch and inferring directly from pretrained Detectron weights are available.
Stars: ✭ 2,805 (+1569.64%)
Mutual labels:  segmentation, pose-estimation, detection
Dlcv for beginners
Stars: ✭ 1,244 (+640.48%)
Mutual labels:  segmentation, detection
Deep Segmentation
CNNs for semantic segmentation using Keras library
Stars: ✭ 69 (-58.93%)
Mutual labels:  dataset, segmentation
Cnn Paper2
🎨 🎨 Deep learning CNN tutorials: image recognition, object detection, semantic segmentation, instance segmentation, face recognition, neural style transfer, GANs, and more 🎨🎨
Stars: ✭ 77 (-54.17%)
Mutual labels:  segmentation, detection
Fast image augmentation library and an easy-to-use wrapper around other libraries.
Stars: ✭ 9,353 (+5467.26%)
Mutual labels:  segmentation, detection
Crop/Weed Field Image Dataset
Stars: ✭ 98 (-41.67%)
Mutual labels:  dataset, segmentation
[ECCV 2018] CCPD: a diverse and well-annotated dataset for license plate detection and recognition
Stars: ✭ 1,252 (+645.24%)
Mutual labels:  dataset, detection
Iros20 6d Pose Tracking
[IROS 2020] se(3)-TrackNet: Data-driven 6D Pose Tracking by Calibrating Image Residuals in Synthetic Domains
Stars: ✭ 113 (-32.74%)
Mutual labels:  dataset, pose-estimation
An opensource lib. for vehicle vision applications (written by MATLAB), lane marking detection, road segmentation
Stars: ✭ 120 (-28.57%)
Mutual labels:  dataset, segmentation
Unet Segmentation
The U-Net Segmentation plugin for Fiji (ImageJ)
Stars: ✭ 62 (-63.1%)
Mutual labels:  segmentation, detection
Jacinto Ai Devkit
Training & Quantization of embedded friendly Deep Learning / Machine Learning / Computer Vision models
Stars: ✭ 49 (-70.83%)
Mutual labels:  segmentation, detection
3D point cloud datasets in HDF5 format, containing uniformly sampled 2048 points per shape.
Stars: ✭ 80 (-52.38%)
Mutual labels:  dataset, segmentation
Detects Chinese traffic police commanding poses
Stars: ✭ 49 (-70.83%)
Mutual labels:  dataset, pose-estimation
Caffe Model
Caffe models (including classification, detection and segmentation) and deploy files for famous networks
Stars: ✭ 1,258 (+648.81%)
Mutual labels:  segmentation, detection
Model Quantization
Collections of model quantization algorithms
Stars: ✭ 118 (-29.76%)
Mutual labels:  segmentation, detection
Awesome Gan For Medical Imaging
Awesome GAN for Medical Imaging
Stars: ✭ 1,814 (+979.76%)
Mutual labels:  segmentation, detection
Deep Learning For Tracking And Detection
Collection of papers, datasets, code and other resources for object tracking and detection using deep learning
Stars: ✭ 1,920 (+1042.86%)
Mutual labels:  segmentation, detection
The Medical Detection Toolkit contains 2D + 3D implementations of prevalent object detectors such as Mask R-CNN, Retina Net, Retina U-Net, as well as a training and inference framework focused on dealing with medical images.
Stars: ✭ 917 (+445.83%)
Mutual labels:  segmentation, detection

OCHuman (Occluded Human) Dataset API

Dataset proposed in "Pose2Seg: Detection Free Human Instance Segmentation" [ProjectPage] [arXiv] @ CVPR2019.

  • News! 2019.06.14 Bug fixed: the val/test annotation split now matches our paper. Please update!
  • News! 2019.04.08 Code for our paper is now available!

Samples of OCHuman Dataset

This dataset focuses on heavily occluded humans, with comprehensive annotations including bounding boxes, human poses, and instance masks. It contains 13360 elaborately annotated human instances within 5081 images. With an average MaxIoU of 0.573 per person, OCHuman is the most complex and challenging dataset related to humans. Through this dataset, we want to emphasize occlusion as a challenging problem for researchers to study.


All instances in this dataset are annotated with bounding boxes, but not all of them have keypoint/mask annotations. If you want to compare your results with ours in the paper, please use the subset that contains both keypoint and mask annotations (4731 images, 8110 persons).

          bbox    keypoint  mask   keypoint&mask  bbox&keypoint&mask
#Images   5081    5081      4731   4731           4731
#Persons  13360   10375     8110   8110           8110
mMaxIoU   0.573   0.670     0.669  0.669          0.669
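If you need to select that keypoint-and-mask subset yourself from COCO-style annotation records, a filter like the following works. This is a minimal sketch: the records below are synthetic, and the field names (`keypoints`, `segmentation`) follow the COCO convention rather than being taken from this repository's code.

```python
# Sketch: selecting the keypoint & mask subset from COCO-style annotations.
# The records here are synthetic, for illustration only.

def has_kpt_and_mask(ann):
    """True if the annotation carries both keypoint and mask labels.

    COCO keypoints are stored as flat (x, y, visibility) triplets, so
    ann["keypoints"][2::3] are the visibility flags.
    """
    kpt = ann.get("keypoints")
    seg = ann.get("segmentation")
    return bool(kpt) and any(v > 0 for v in kpt[2::3]) and bool(seg)

anns = [
    {"id": 1, "bbox": [0, 0, 50, 120], "keypoints": [10, 20, 2] * 17,
     "segmentation": [[0, 0, 50, 0, 50, 120]]},
    {"id": 2, "bbox": [30, 10, 40, 100]},  # bbox-only instance
]

subset = [a for a in anns if has_kpt_and_mask(a)]
print([a["id"] for a in subset])  # → [1]
```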


  • MaxIoU measures the severity of an object's occlusion: the maximum IoU with other objects of the same category in the same image.
  • All instances in OCHuman with keypoint/mask annotations suffer from heavy occlusion (MaxIoU > 0.5).
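As a concrete illustration of the metric, the sketch below computes MaxIoU over axis-aligned bounding boxes. The boxes are made up for the example; the dataset's numbers are computed per image over instances of the same category.

```python
# Sketch: MaxIoU of each box = max IoU with any other box in the image.

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def max_iou(boxes):
    """For each box, the maximum IoU with any other box in the list."""
    return [max(iou(b, o) for j, o in enumerate(boxes) if j != i)
            for i, b in enumerate(boxes)]

boxes = [(0, 0, 10, 10), (5, 0, 15, 10), (20, 20, 30, 30)]
print(max_iou(boxes))  # → [0.3333333333333333, 0.3333333333333333, 0.0]
```

A MaxIoU above 0.5, as in the OCHuman keypoint/mask subset, means every instance is at least half-overlapped by a neighbour.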

Download Links

At the download link we also provide COCO-style annotations (val and test subsets) so that you can run evaluation using the cocoEval toolbox.

Update at 2019.06.14: Please download the annotation files (*.json) again to match the val/test split used in our paper.
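To show the layout such COCO-style files follow, here is a minimal sketch that builds a tiny synthetic annotation dict and indexes it per image. A real file would be read with json.load; the record contents below are placeholders, trimmed to the fields evaluation typically needs.

```python
import json

# A minimal COCO-style structure. In practice you would json.load an
# annotation file from the download link; this synthetic string stands
# in for that file's contents.
coco_text = json.dumps({
    "images": [{"id": 1, "file_name": "000001.jpg",
                "width": 640, "height": 480}],
    "annotations": [{"id": 1, "image_id": 1, "category_id": 1,
                     "bbox": [10, 10, 50, 120], "iscrowd": 0,
                     "keypoints": [30, 40, 2] * 17, "num_keypoints": 17,
                     "segmentation": [[10, 10, 60, 10, 60, 130, 10, 130]]}],
    "categories": [{"id": 1, "name": "person"}],
})

coco = json.loads(coco_text)

# Index annotations per image, the way evaluation toolboxes do internally.
anns_by_img = {}
for ann in coco["annotations"]:
    anns_by_img.setdefault(ann["image_id"], []).append(ann)

print(len(anns_by_img[1]))  # → 1
```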

Install API

git clone https://github.com/liruilong940607/OCHumanApi
cd OCHumanApi
make install

How to use

See Demo.ipynb
