Arc Robot Vision: MIT-Princeton vision toolbox for robotic pick-and-place at the Amazon Robotics Challenge 2017 - robotic grasping and one-shot recognition of novel objects with deep learning.
3dgnn pytorch: 3D Graph Neural Networks for RGBD semantic segmentation.
Gta Im Dataset: [ECCV-20] 3D human-scene interaction dataset: https://people.eecs.berkeley.edu/~zhecao/hmp/index.html
Contactpose: Large dataset of hand-object contact, hand and object pose, and 2.9M RGB-D grasp images.
Cen: [NeurIPS 2020] Code release for the paper "Deep Multimodal Fusion by Channel Exchanging" (in PyTorch).
Record3d: Accompanying library for the Record3D iOS app (https://record3d.app/). Lets you receive an RGBD stream from iOS devices with a TrueDepth camera.
Deepdepthdenoising: Source code of the fully convolutional depth-denoising model presented in https://arxiv.org/pdf/1909.01193.pdf (ICCV 2019).
Dmra: Code and dataset for the ICCV 2019 paper "Depth-induced Multi-scale Recurrent Attention Network for Saliency Detection".
Peac: Fast Plane Extraction Using Agglomerative Hierarchical Clustering (AHC).
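The primitive behind AHC-style plane extraction is fitting a plane to a small block of 3D points and checking the fit error; well-fitting blocks are then merged hierarchically. A minimal least-squares sketch of that fitting step in NumPy (not the PEAC code itself):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit: returns (unit_normal, d) with n.p + d = 0.

    The third right singular vector of the centered points (direction of
    least variance) is the plane normal. Minimal sketch, not PEAC.
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                    # unit vector by construction
    return normal, -normal @ centroid

# Noise-free points on the plane z = 2 recover n = (0, 0, +/-1).
pts = np.array([[0.0, 0, 2], [1, 0, 2], [0, 1, 2], [1, 1, 2]])
n, d = fit_plane(pts)
```

In an AHC pipeline the smallest singular value would additionally serve as the block's fit residual.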
Cilantro: A lean C++ library for working with point cloud data.
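Point-cloud libraries like the one above typically consume clouds back-projected from depth images via the pinhole model. A minimal NumPy sketch of that standard conversion, assuming known intrinsics fx, fy, cx, cy (not tied to any repo here):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into an N x 3 point cloud.

    Standard pinhole inverse projection; zero-depth (invalid) pixels
    are dropped. Minimal sketch, independent of any library above.
    """
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]          # pixel row/column grids
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]          # keep only valid depths
```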
3dmatch Toolbox: 3DMatch, a 3D ConvNet-based local geometric descriptor for aligning 3D meshes and point clouds.
Tsdf Fusion: Fuses multiple depth frames into a TSDF voxel volume.
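The TSDF update itself is a per-voxel running weighted average of truncated signed distances in the Curless and Levoy style. A minimal NumPy sketch, not the repo's implementation; it assumes the volume origin sits at the world origin, `cam_pose` is camera-to-world, and `depth` is in meters:

```python
import numpy as np

def integrate(tsdf, weight, depth, K, cam_pose, vox_size, trunc):
    """Fuse one depth frame into a TSDF volume by weighted averaging.

    Minimal sketch of the classic truncated-signed-distance update,
    with per-frame weight 1; not this repo's code.
    """
    nx, ny, nz = tsdf.shape
    # World coordinates of every voxel center (C-order matches reshape).
    ii, jj, kk = np.meshgrid(np.arange(nx), np.arange(ny), np.arange(nz),
                             indexing="ij")
    world = np.stack([ii, jj, kk], axis=-1).reshape(-1, 3) * vox_size
    # Transform into the camera frame and project with intrinsics K.
    w2c = np.linalg.inv(cam_pose)
    cam = world @ w2c[:3, :3].T + w2c[:3, 3]
    z = cam[:, 2]
    zs = np.where(z > 0, z, 1.0)          # guard divide-by-zero
    u = np.round(K[0, 0] * cam[:, 0] / zs + K[0, 2]).astype(int)
    v = np.round(K[1, 1] * cam[:, 1] / zs + K[1, 2]).astype(int)
    h, w = depth.shape
    valid = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    d = np.where(valid, depth[v.clip(0, h - 1), u.clip(0, w - 1)], 0.0)
    sdf = d - z                           # signed distance along the ray
    valid &= (d > 0) & (sdf > -trunc)     # drop voxels far behind surface
    tsd = np.clip(sdf / trunc, -1.0, 1.0)
    idx = np.flatnonzero(valid)
    ft, fw = tsdf.reshape(-1), weight.reshape(-1)  # views into the volumes
    ft[idx] = (ft[idx] * fw[idx] + tsd[idx]) / (fw[idx] + 1)
    fw[idx] += 1
```

A real implementation (like the CUDA kernel in the repo above) performs the same update per voxel in parallel and usually caps the accumulated weight.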
MaskFusion: Real-Time Recognition, Tracking and Reconstruction of Multiple Moving Objects.
Co-Fusion: Real-time Segmentation, Tracking and Fusion of Multiple Objects.
Handeye calib camodocal: Easy-to-use, accurate hand-eye calibration that has worked reliably for years (2016-present) with the Kinect, Kinect v2, RGBD cameras, optical trackers, and several robots including the UR5 and KUKA iiwa.
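Hand-eye calibration solves AX = XB for the fixed sensor-to-gripper transform X, given pairs of relative robot motions A and relative camera motions B. A rotation-only NumPy sketch of the classic axis-alignment step (not the camodocal solver, which also estimates translation and refines jointly):

```python
import numpy as np

def rot_axis(R):
    """Unnormalized rotation axis of R, read off its skew-symmetric part."""
    return np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])

def handeye_rotation(As, Bs):
    """Rotation part of hand-eye calibration from relative-motion pairs.

    Uses the fact that R_A R_X = R_X R_B forces axis(A_i) = R_X axis(B_i),
    then aligns the two axis sets with an SVD (Kabsch). Needs >= 2 motion
    pairs with non-parallel axes. Sketch only, not the camodocal solver.
    """
    M = sum(np.outer(rot_axis(A), rot_axis(B)) for A, B in zip(As, Bs))
    U, _, Vt = np.linalg.svd(M)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # enforce det(R) = +1
    return U @ D @ Vt
```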
Intrinsic3D: High-Quality 3D Reconstruction by Joint Appearance and Geometry Optimization with Spatially-Varying Lighting (ICCV 2017).
Open3d Ml: An extension of Open3D for 3D machine learning tasks.
Apc Vision Toolbox: MIT-Princeton vision toolbox for the Amazon Picking Challenge 2016 - RGB-D ConvNet-based object segmentation and 6D object pose estimation.
TorchSSC: Implements state-of-the-art methods for the Semantic Scene Completion (SSC) task in PyTorch. [1] 3D Sketch-aware Semantic Scene Completion via Semi-supervised Structure Prior (CVPR 2020).
ACVR2017: An Innovative Salient Object Detection Using Center-Dark Channel Prior.
DeT: Dataset and code for the papers "DepthTrack: Unveiling the Power of RGBD Tracking" (ICCV 2021) and "Depth-only Object Tracking" (BMVC 2021).
maplab realsense: Simple ROS wrapper for the Intel RealSense driver with a focus on the ZR300.
DPANet: Depth Potentiality-Aware Gated Attention Network for RGB-D Salient Object Detection.
referit3d: Code accompanying the ECCV 2020 paper on 3D neural listeners.
GeobitNonrigidDescriptor ICCV 2019: C++ implementation of the nonrigid descriptor presented at ICCV 2019, "GEOBIT: A Geodesic-Based Binary Descriptor Invariant to Non-Rigid Deformations for RGB-D Images".
SurfConv: Code and data for the SurfConv paper at CVPR 2018.
ESANet: Efficient RGB-D Semantic Segmentation for Indoor Scene Analysis.
StructureNet: Markerless volumetric alignment for depth sensors. Contains the code for "Deep Soft Procrustes for Markerless Volumetric Sensor Alignment" (IEEE VR 2020).
RGB-D-SLAM: Work in progress. A SLAM implementation based on plane and superquadric tracking.
dvo python: Coding dense visual odometry in a little more than a night (yikes)!
shrec17: Supplementary code for the SHREC 2017 RGB-D Object-to-CAD Retrieval track.
3DGNN: No description or website provided.
FLOBOT: EU-funded Horizon 2020 project.
RGBDAcquisition: A uniform library wrapper for input from V4L2, Freenect, OpenNI, OpenNI2, DepthSense, Intel RealSense, OpenGL simulations, and other video and depth sources.
monodepth: Python ROS depth estimation from RGB images, based on code from the paper "High Quality Monocular Depth Estimation via Transfer Learning".
rgbd ptam: Python implementation of the RGBD-PTAM algorithm.
RGBD-SOD-datasets: The partitioned RGB-D saliency datasets we collected, shared in ready-to-use form.