
hkust-vgd / shrec17

License: MIT
Supplementary code for SHREC 2017 RGB-D Object-to-CAD Retrieval track

Programming Languages

Python, C++, CMake

Projects that are alternatives of or similar to shrec17

vitrivr-ng
vitrivr NG is a web-based user interface for searching and browsing mixed multimedia collections. It uses cineast as a backend
Stars: ✭ 14 (-48.15%)
Mutual labels:  retrieval
palladian
Palladian is a Java-based toolkit with functionality for text processing, classification, information extraction, and data retrieval from the Web.
Stars: ✭ 32 (+18.52%)
Mutual labels:  retrieval
FLOBOT
EU funded Horizon 2020 project
Stars: ✭ 20 (-25.93%)
Mutual labels:  rgbd
OpenDialog
An Open-Source Package for Chinese Open-domain Conversational Chatbot (中文闲聊对话系统,一键部署微信闲聊机器人)
Stars: ✭ 94 (+248.15%)
Mutual labels:  retrieval
RGBD-semantic-segmentation
A paper list of RGBD semantic segmentation (processing)
Stars: ✭ 264 (+877.78%)
Mutual labels:  rgbd
monodepth
Python ROS depth estimation from RGB image based on code from the paper "High Quality Monocular Depth Estimation via Transfer Learning"
Stars: ✭ 41 (+51.85%)
Mutual labels:  rgbd
cherche
📑 Neural Search
Stars: ✭ 196 (+625.93%)
Mutual labels:  retrieval
awesome-visual-localization-papers
The relocalization task aims to estimate the 6-DoF pose of a novel (unseen) frame in the coordinate system given by the prior model of the world.
Stars: ✭ 60 (+122.22%)
Mutual labels:  retrieval
RGBD-SOD-datasets
All those partitioned RGB-D Saliency Datasets we collected are shared in ready-to-use manner.
Stars: ✭ 46 (+70.37%)
Mutual labels:  rgbd
image embeddings
Using efficientnet to provide embeddings for retrieval
Stars: ✭ 107 (+296.3%)
Mutual labels:  retrieval
deep recommenders
Deep Recommenders
Stars: ✭ 214 (+692.59%)
Mutual labels:  retrieval
nyuv2-meta-data
all the meta data needed for nyuv2
Stars: ✭ 99 (+266.67%)
Mutual labels:  rgbd
RGBDAcquisition
A uniform library wrapper for input from V4L2,Freenect,OpenNI,OpenNI2,DepthSense,Intel Realsense,OpenGL simulations and other types of video and depth input..
Stars: ✭ 56 (+107.41%)
Mutual labels:  rgbd
COIL
NAACL2021 - COIL Contextualized Lexical Retriever
Stars: ✭ 86 (+218.52%)
Mutual labels:  retrieval
3DGNN
No description or website provided.
Stars: ✭ 56 (+107.41%)
Mutual labels:  rgbd
keras rmac
RMAC implementation in Keras
Stars: ✭ 80 (+196.3%)
Mutual labels:  retrieval
rgbd ptam
Python implementation of RGBD-PTAM algorithm
Stars: ✭ 65 (+140.74%)
Mutual labels:  rgbd
rgbd scribble benchmark
RGB-D Scribble-based Segmentation Benchmark
Stars: ✭ 24 (-11.11%)
Mutual labels:  rgbd
plexus
Plexus - Interactive Emotion Visualization based on Social Media
Stars: ✭ 27 (+0%)
Mutual labels:  retrieval
beir
A Heterogeneous Benchmark for Information Retrieval. Easy to use, evaluate your models across 15+ diverse IR datasets.
Stars: ✭ 738 (+2633.33%)
Mutual labels:  retrieval

SHREC 2017: RGB-D Object-to-CAD Retrieval

This repository contains a detailed description of the dataset and supplemental code for the SHREC 2017 track: RGB-D Object-to-CAD Retrieval. In this track, the goal is to retrieve a CAD model from ShapeNet using a SceneNN model as input.

Download

Dataset

In this dataset, we manually group 1667 SceneNN objects and 3308 ShapeNet models into 20 categories. Only indoor objects that are available in both the SceneNN and ShapeNet datasets are selected. The object distribution in this dataset is shown below.

Object distribution in the dataset

Following the split scheme of the ShapeNet dataset, we divide our dataset into training, validation, and test sets with a 50/25/25% ratio. All data can be downloaded here.

The objects in both SceneNN and ShapeNet are grouped into categories and subcategories, which are stored in CSV files. Categories and subcategories for training and validation are provided in train.csv and validation.csv. In test.csv, the categories are removed for evaluation purposes. Evaluation primarily considers categories; subcategories can then be used for a more rigorous evaluation.
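As a starting point, the split files can be grouped by category with the standard library. This is a minimal sketch: the column names (`id`, `category`, `subcategory`) are assumptions and should be adjusted to match the actual headers in train.csv and validation.csv.

```python
import csv
from collections import defaultdict

def load_split(csv_path):
    """Group object IDs by category from a split CSV.

    Assumes columns named "id", "category", and "subcategory";
    check the real headers in train.csv before relying on this.
    """
    by_category = defaultdict(list)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            by_category[row["category"]].append(row["id"])
    return dict(by_category)
```

The same function works for validation.csv; for test.csv the category column is absent, so only the ID list would be usable.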

Query data

Each SceneNN object is stored in 3D as a triangle mesh in PLY format. Each mesh vertex has a world-space position, normal, and color value. Additional 2D information is also included: the (a) camera pose, (b) color image, (c) depth image, and (d) label image for each RGB-D frame that contains the object.

Each SceneNN object has an ID formatted as <sceneID>_<labelID>, where sceneID is a three-digit scene number, and labelID is an unsigned integer that denotes a label. For example, 286_224114 identifies label 224114 in scene 286.
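The ID scheme described above can be split programmatically. A small helper, following the `<sceneID>_<labelID>` format stated here:

```python
def parse_scenenn_id(object_id):
    """Split a SceneNN object ID of the form <sceneID>_<labelID>.

    Returns (scene_id, label_id) as (str, int); the scene ID keeps
    its three-digit zero padding, e.g. "286".
    """
    scene_id, label_id = object_id.split("_")
    return scene_id, int(label_id)
```

For example, `parse_scenenn_id("286_224114")` yields `("286", 224114)`: label 224114 in scene 286.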

It is usually more convenient to work with the 3D data, as it is more compact and manageable. If you are interested in the 2.5D color and depth frames, you can:

  • Download items (a), (b), and (c) from the SceneNN scene repository here. All images for each scene are packed into an ONI video file, which can be extracted with the playback tool here. Note that storing the images for all scenes requires about 500 GB of free disk space.

  • Download the labels in item (d) here. To extract a binary mask for each object, use the mask_from_label code here.
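The mask extraction step amounts to selecting the pixels of one label. A hedged sketch, assuming the label image stores an integer label ID per pixel (mirroring what the official mask_from_label tool does; check that repository for the exact image encoding):

```python
import numpy as np

def mask_from_label(label_image, label_id):
    """Return a binary mask (1 where the pixel belongs to the object).

    Assumption: label_image is a 2D integer array whose values are
    SceneNN label IDs, so comparing against label_id selects the object.
    """
    return (np.asarray(label_image) == label_id).astype(np.uint8)
```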

Target data

Each ShapeNet object is stored in 3D as a triangle mesh in OBJ format, with colors in a separate MTL material file and optional textures. The ShapeNet objects are a subset of ShapeNetSem. All object IDs are the same as those in the original ShapeNet dataset.
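Since the targets are plain OBJ files, their geometry can be read without any 3D library. A minimal sketch that parses only `v x y z` vertex lines; faces, normals, and the MTL material file are ignored:

```python
def load_obj_vertices(path):
    """Read vertex positions from a Wavefront OBJ file.

    Minimal by design: only "v x y z" lines are parsed; face, normal,
    texture-coordinate, and material statements are skipped.
    """
    vertices = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if parts and parts[0] == "v":
                vertices.append(tuple(float(x) for x in parts[1:4]))
    return vertices
```

For full meshes with materials and textures, a dedicated loader (e.g. trimesh or Open3D) is the more practical choice.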

Evaluation

We provide Python evaluation scripts for all of the metrics. You can find example retrieval results in the examples folder. Please check this repository regularly for updates. All bug reports and suggestions are welcome.

Usage:

python eval.py examples/
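The official metrics are computed by eval.py. For illustration only, here is how one standard retrieval measure, average precision, is typically defined; this is an assumption about the kind of metric used, not a reimplementation of the track's scoring:

```python
def average_precision(ranked_ids, relevant_ids):
    """Average precision for one ranked retrieval list.

    Illustrative sketch: precision is accumulated at each rank where
    a relevant item appears, then averaged over the relevant set.
    """
    relevant = set(relevant_ids)
    hits, precision_sum = 0, 0.0
    for rank, obj_id in enumerate(ranked_ids, start=1):
        if obj_id in relevant:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / len(relevant) if relevant else 0.0
```

Averaging this value over all queries gives mean average precision (mAP), a common summary score in retrieval benchmarks.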

Tools

To assist dataset investigation, we provide a model viewer tool (Windows 64-bit only) that displays SceneNN and ShapeNet objects by category:

Dataset viewer

Please download the viewer here.

Acknowledgement

The CAD models in this dataset are extracted from ShapeNet, a richly annotated and large-scale dataset of 3D shapes by Stanford.
