
VCL3D / StructureNet

Licence: other
Markerless volumetric alignment for depth sensors. Contains the code of the work "Deep Soft Procrustes for Markerless Volumetric Sensor Alignment" (IEEE VR 2020).

Programming Languages

python

Projects that are alternatives of or similar to StructureNet

costmap depth camera
A costmap plugin for the costmap_2d package. It supports multiple depth cameras and runs in real time.
Stars: ✭ 26 (-31.58%)
Mutual labels:  depth-camera, realsense
realsense-processing
Intel RealSense 2 support for the Processing framework.
Stars: ✭ 70 (+84.21%)
Mutual labels:  realsense, realsense-camera
OpenDepthSensor
Open library to support Kinect V1 & V2 & Azure, RealSense and OpenNI-compatible sensors.
Stars: ✭ 61 (+60.53%)
Mutual labels:  depth-camera, realsense
LiDARTag
A package for LiDARTag, described in the paper "LiDARTag: A Real-Time Fiducial Tag System for Point Clouds".
Stars: ✭ 161 (+323.68%)
Mutual labels:  calibration, extrinsic-calibration
Handeye calib camodocal
Easy-to-use and accurate hand-eye calibration which has been working reliably for years (2016-present) with Kinect, Kinect v2, RGB-D cameras, optical trackers, and several robots including the UR5 and KUKA iiwa.
Stars: ✭ 364 (+857.89%)
Mutual labels:  calibration, rgbd
2019-UGRP-DPoom
2019 DGIST DPoom project under UGRP: an SBC- and RGB-D-camera-based fully autonomous driving system for a mobile robot with indoor SLAM.
Stars: ✭ 35 (-7.89%)
Mutual labels:  depth-camera, realsense2
ros openvino
A ROS package that wraps the OpenVINO inference engine and gets it working with Myriad and GPU devices.
Stars: ✭ 57 (+50%)
Mutual labels:  realsense, realsense2
maplab realsense
Simple ROS wrapper for the Intel RealSense driver with a focus on the ZR300.
Stars: ✭ 22 (-42.11%)
Mutual labels:  rgbd, realsense
Volumetriccapture
A multi-sensor capture system for free viewpoint video.
Stars: ✭ 243 (+539.47%)
Mutual labels:  calibration, rgbd
Intel-Realsense-Hand-Toolkit-Unity
Intel RealSense toolkit for hand tracking and gesture recognition in Unity3D.
Stars: ✭ 72 (+89.47%)
Mutual labels:  depth-camera, realsense
vignetting calib
No description or website provided.
Stars: ✭ 41 (+7.89%)
Mutual labels:  calibration
jsartoolkitNFT
jsartoolkitNFT is a smaller version of jsartoolkit5 with only NFT support.
Stars: ✭ 110 (+189.47%)
Mutual labels:  markerless
LFToolbox
Light Field Toolbox for MATLAB
Stars: ✭ 74 (+94.74%)
Mutual labels:  calibration
Evp3
Volumetric visual effects program used in Unite Shanghai 2019
Stars: ✭ 81 (+113.16%)
Mutual labels:  realsense
rgbd scribble benchmark
RGB-D Scribble-based Segmentation Benchmark
Stars: ✭ 24 (-36.84%)
Mutual labels:  rgbd
surfacecast
SurfaceCast: send background-subtracted depth camera video via GStreamer (with optional perspective correction)
Stars: ✭ 22 (-42.11%)
Mutual labels:  realsense
rscvdnn
Test program for OpenCV DNN object detection with RealSense camera
Stars: ✭ 52 (+36.84%)
Mutual labels:  realsense
OpenCVB
OpenCV .Net application supporting several RGBD cameras - Kinect, Intel RealSense, Luxonis Oak-D, Mynt Eye D 1000, and StereoLabs ZED 2
Stars: ✭ 60 (+57.89%)
Mutual labels:  kinect4azure
pygac
A Python package to read and calibrate NOAA and Metop AVHRR GAC and LAC data.
Stars: ✭ 14 (-63.16%)
Mutual labels:  calibration
mimic
mimic calibration
Stars: ✭ 18 (-52.63%)
Mutual labels:  calibration

Deep Soft Procrustes for Markerless Volumetric Sensor Alignment

An easy-to-use depth sensor extrinsic calibration method. It is integrated into, and used by, a robust Volumetric Capture system.

Paper | Conference | Project Page

Abstract

With the advent of consumer-grade depth sensors, low-cost volumetric capture systems are easier to deploy. Their wider adoption, though, depends on their usability and, by extension, on the practicality of spatially aligning multiple sensors. Most existing alignment approaches employ visual patterns (e.g., checkerboards) or markers and require high user involvement and technical knowledge. More user-friendly and easier-to-use approaches rely on markerless methods that exploit the geometric patterns of a physical structure. However, current state-of-the-art approaches are bounded by restrictions in the placement and the number of sensors. In this work, we improve markerless data-driven correspondence estimation to achieve more robust and flexible multi-sensor spatial alignment. In particular, we incorporate geometric constraints in an end-to-end manner into a typical segmentation-based model and bridge the intermediate dense classification task with the targeted pose estimation one. This is accomplished by a soft, differentiable Procrustes analysis that regularizes the segmentation and achieves higher extrinsic calibration performance in expanded sensor placement configurations, while being unrestricted by the number of sensors of the volumetric capture system. Our model is experimentally shown to achieve results similar to marker-based methods and to outperform markerless ones, while also being robust to pose variations of the calibration structure.
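For intuition, the soft Procrustes step can be pictured as a weighted rigid alignment in which per-correspondence confidences act as weights. The NumPy sketch below illustrates classic weighted Procrustes/Kabsch alignment under that reading; it is not the authors' implementation, and the function name and weighting scheme are assumptions:

    import numpy as np

    def weighted_procrustes(src, dst, w):
        # Rigid alignment dst ≈ R @ src + t for (N, 3) point sets,
        # weighted by per-point confidences w of shape (N,).
        w = w / w.sum()
        mu_s = (w[:, None] * src).sum(axis=0)             # weighted centroids
        mu_d = (w[:, None] * dst).sum(axis=0)
        S = (w[:, None] * (src - mu_s)).T @ (dst - mu_d)  # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(S)
        d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = mu_d - R @ mu_s
        return R, t

Since every operation above (weighted means, matrix products, SVD) is differentiable, gradients from a pose error can flow back into the network that produces the weights, which is what lets the dense classification task be bridged with pose estimation end to end.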

Requirements

Installation

(A new Python environment is highly recommended.)

  • Install the required packages with pip install -r requirements.txt
  • (Only for training) Install tinyobjloader by cloning/downloading this repository, navigating to its python folder, and running python setup.py install
  • Install our custom patch for disabling multisampling in pyrender (a consolidated command sequence is sketched after this list)
    • Download UnixUtils and add the executable to PATH
    • Execute patch.exe -u <path_to_renderer.py> pyrender_patch/renderer.diff
    • Execute patch.exe -u <path_to_constants.py> pyrender_patch/constants.diff
    • Execute patch.exe -u <path_to_camera.py> pyrender_patch/camera.diff
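Taken together, the steps above might look like the command sequence below. This is a sketch rather than an official script: the environment name, the tinyobjloader repository URL, and the pyrender file locations are assumptions that depend on your setup (patch.exe from UnixUtils implies a Windows shell):

    :: fresh environment (recommended above); "venv" is a placeholder name
    python -m venv venv
    venv\Scripts\activate
    pip install -r requirements.txt

    :: only needed for training; URL assumed to point at the upstream repository
    git clone https://github.com/tinyobjloader/tinyobjloader
    cd tinyobjloader\python
    python setup.py install
    cd ..\..

    :: pyrender patches; locate the installed files under your environment's
    :: Lib\site-packages\pyrender directory
    patch.exe -u venv\Lib\site-packages\pyrender\renderer.py pyrender_patch/renderer.diff
    patch.exe -u venv\Lib\site-packages\pyrender\constants.py pyrender_patch/constants.diff
    patch.exe -u venv\Lib\site-packages\pyrender\camera.py pyrender_patch/camera.diff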

Download the model

We provide a pretrained model here for inference purposes.

Inference

In order to run our code, a pretrained model must be present, either produced by a training run or downloaded here. Once every requirement is installed, simply run python inference.py [options...]

Important options

--input_path : directory containing the input depth maps (in .pgm format; see the example input data)

--output_path : directory where the results will be saved

--scale : multiplication factor that converts depth map values to meters (e.g., 0.001 for depth stored in millimeters)

--saved_params_path : path to the downloaded model

In order to see all available options with a brief description, please execute python inference.py -h
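For example, a typical invocation might look like the following (all paths are placeholders for illustration, and --scale 0.001 assumes depth maps stored in millimeters):

    python inference.py --input_path ./recordings/depthmaps --output_path ./results --scale 0.001 --saved_params_path ./model/structurenet.pt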

Training

To train our model from scratch, one first has to download the backgrounds that are used at training time for augmentation. TBD: upload and add links. Once every requirement is installed and the backgrounds are downloaded, the model can be trained. Execute python main.py -h to see all available options.

Video

A video explaining our method, accompanying our submission to IEEEVR 2020, can be found at https://www.youtube.com/watch?v=0l5neSMt-2Y

Citation

If you use this code and/or models, please cite the following:

@inproceedings{sterzentsenko2020deepsoftprocrustes,
  title={Deep Soft Procrustes for Markerless Volumetric Sensor Alignment},
  author={Vladimiros Sterzentsenko and Alexandros Doumanoglou and Spyridon Thermos and Nikolaos Zioulis and Dimitrios Zarpalas and Petros Daras},
  booktitle={2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)},
  year={2020},
  organization={IEEE}
}