ankurhanda / nyuv2-meta-data

Licence: other
All the meta data needed for NYUv2

Projects that are alternatives of or similar to nyuv2-meta-data

Tsdf Fusion Python
Python code to fuse multiple RGB-D images into a TSDF voxel volume.
Stars: ✭ 464 (+368.69%)
Mutual labels:  rgbd
Deepdepthdenoising
This repo includes the source code of the fully convolutional depth denoising model presented in https://arxiv.org/pdf/1909.01193.pdf (ICCV19)
Stars: ✭ 96 (-3.03%)
Mutual labels:  rgbd
Suncgtoolbox
C++ based toolbox for the SUNCG dataset
Stars: ✭ 136 (+37.37%)
Mutual labels:  rgbd
Cilantro
A lean C++ library for working with point cloud data
Stars: ✭ 577 (+482.83%)
Mutual labels:  rgbd
Dmra
Code and Dataset for ICCV 2019 paper. "Depth-induced Multi-scale Recurrent Attention Network for Saliency Detection".
Stars: ✭ 76 (-23.23%)
Mutual labels:  rgbd
Cen
[NeurIPS 2020] Code release for paper "Deep Multimodal Fusion by Channel Exchanging" (In PyTorch)
Stars: ✭ 112 (+13.13%)
Mutual labels:  rgbd
Maskfusion
MaskFusion: Real-Time Recognition, Tracking and Reconstruction of Multiple Moving Objects
Stars: ✭ 404 (+308.08%)
Mutual labels:  rgbd
Arc Robot Vision
MIT-Princeton Vision Toolbox for Robotic Pick-and-Place at the Amazon Robotics Challenge 2017 - Robotic Grasping and One-shot Recognition of Novel Objects with Deep Learning.
Stars: ✭ 224 (+126.26%)
Mutual labels:  rgbd
Rgbd semantic segmentation pytorch
PyTorch Implementation of some RGBD Semantic Segmentation models.
Stars: ✭ 84 (-15.15%)
Mutual labels:  rgbd
Contactpose
Large dataset of hand-object contact, hand- and object-pose, and 2.9 M RGB-D grasp images.
Stars: ✭ 129 (+30.3%)
Mutual labels:  rgbd
Scannet
Stars: ✭ 860 (+768.69%)
Mutual labels:  rgbd
Peac
Fast Plane Extraction Using Agglomerative Hierarchical Clustering (AHC)
Stars: ✭ 51 (-48.48%)
Mutual labels:  rgbd
Rgbdplanedetection
RGBD plane detection and color-based plane refinement
Stars: ✭ 119 (+20.2%)
Mutual labels:  rgbd
3dmatch Toolbox
3DMatch - a 3D ConvNet-based local geometric descriptor for aligning 3D meshes and point clouds.
Stars: ✭ 571 (+476.77%)
Mutual labels:  rgbd
Gta Im Dataset
[ECCV-20] 3D human scene interaction dataset: https://people.eecs.berkeley.edu/~zhecao/hmp/index.html
Stars: ✭ 157 (+58.59%)
Mutual labels:  rgbd
Tsdf Fusion
Fuse multiple depth frames into a TSDF voxel volume.
Stars: ✭ 426 (+330.3%)
Mutual labels:  rgbd
Record3d
Accompanying library for the Record3D iOS app (https://record3d.app/). Allows you to receive RGBD stream from iOS devices with TrueDepth camera(s).
Stars: ✭ 102 (+3.03%)
Mutual labels:  rgbd
Volumetriccapture
A multi-sensor capture system for free viewpoint video.
Stars: ✭ 243 (+145.45%)
Mutual labels:  rgbd
3dgnn pytorch
3D Graph Neural Networks for RGBD Semantic Segmentation
Stars: ✭ 187 (+88.89%)
Mutual labels:  rgbd
Sunrgbd Meta Data
train test labels for sunrgbd
Stars: ✭ 127 (+28.28%)
Mutual labels:  rgbd

What does this repository contain?

This repository contains 13-class labels for both the train and test datasets of NYUv2, to spare you the hassle of parsing the data out of the .mat files. If you are looking to train a network to do 13-class segmentation from RGB data, this repository provides both the training/test images and the corresponding ground-truth labels. However, if your network additionally needs depth data (either depth images or DHA features), you will need to download the dataset from the NYUv2 website (~2.8GB) as well as the corresponding toolbox. To summarise, this repository contains the following:

  • train_labels_13 contains the ground-truth annotations for 13 classes of the NYUv2 training dataset, while test_labels_13 contains the ground truth for the NYUv2 test dataset.

  • The training dataset (795 RGB images) can be obtained from nyu_train_rgb (277MB), while the test dataset (654 RGB images) can be obtained from nyu_test_rgb (227MB).

  • It is important to remember that the label files are ordered but the RGB files are not; you can order them with a batch renamer such as gprename, or sort them in code as in the sketch after this list.
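
A minimal Python sketch of that sorting step, assuming the RGB filenames embed a numeric index (the directory names below are simply the extracted archive names; adjust to your layout):

```python
import os
import re

def natural_key(filename):
    # Split the name into text and number chunks so that "img_2" sorts
    # before "img_10" (a plain alphabetical sort would put "10" first).
    return [int(tok) if tok.isdigit() else tok
            for tok in re.split(r'(\d+)', filename)]

rgb_dir = 'nyu_train_rgb'      # extracted training RGB archive
label_dir = 'train_labels_13'  # ordered 13-class label maps

rgb_files = sorted(os.listdir(rgb_dir), key=natural_key)
label_files = sorted(os.listdir(label_dir), key=natural_key)
assert len(rgb_files) == len(label_files) == 795  # NYUv2 training split

# After the natural sort, the i-th RGB image matches the i-th label map.
pairs = list(zip(rgb_files, label_files))
print(pairs[:3])
```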

How do I obtain the DHA features?

Look for these in the corresponding SUN RGB-D meta-data repository. You will need a rotation matrix for each training and test image; they are available here in camera_rotations_NYU.txt. These matrices are used to align the floor normal with the canonical gravity vector. There are 1449 rotation matrices in total, and the indices of the matrices corresponding to the training and test data are in splits.mat. Remember that the labels are ordered, i.e. the training label files are named with indices 1 to 795, and similarly for the test dataset.
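
As a rough Python sketch of how these files could be consumed, assuming camera_rotations_NYU.txt stores each 3×3 matrix as nine whitespace-separated numbers and that splits.mat uses the standard NYUv2 field names trainNdxs/testNdxs (verify both against your copies):

```python
import numpy as np
from scipy.io import loadmat

# 1449 rotation matrices, one per NYUv2 image (layout assumed, see above).
rotations = np.loadtxt('camera_rotations_NYU.txt').reshape(-1, 3, 3)
assert rotations.shape[0] == 1449

splits = loadmat('splits.mat')
train_idx = splits['trainNdxs'].ravel() - 1  # MATLAB indices are 1-based
test_idx = splits['testNdxs'].ravel() - 1

train_rotations = rotations[train_idx]  # one 3x3 matrix per training image
test_rotations = rotations[test_idx]
print(train_rotations.shape, test_rotations.shape)  # (795, 3, 3) (654, 3, 3)
```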

How do I benchmark?

getAccuracyNYU.m, available in the SceneNetv1.0 repository, allows you to obtain the global and average class accuracies.
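
If you would rather not run MATLAB, the two numbers can be approximated in Python with a sketch like the one below. This is not the original getAccuracyNYU.m; it assumes label maps use 0 for unlabelled pixels and 1 to 13 for the classes:

```python
import numpy as np

def segmentation_accuracy(pred, gt, num_classes=13, ignore_label=0):
    """Global accuracy and mean per-class accuracy over labelled pixels."""
    valid = gt != ignore_label
    global_acc = np.mean(pred[valid] == gt[valid])

    per_class = []
    for c in range(1, num_classes + 1):
        mask = gt == c
        if mask.any():  # skip classes absent from the ground truth
            per_class.append(np.mean(pred[mask] == c))
    return global_acc, np.mean(per_class)

# Toy usage with random label maps at NYUv2 resolution.
gt = np.random.randint(0, 14, size=(480, 640))
pred = np.random.randint(1, 14, size=(480, 640))
print(segmentation_accuracy(pred, gt))
```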

What are the classes and where is the mapping from the class number to the class name?

The mapping is also available in the SceneNetv1.0 repository.
