
pedropro / OMG_Depth_Fusion

Licence: MIT license
Probabilistic depth fusion based on Optimal Mixture of Gaussians for depth cameras

Programming Languages

C++
36643 projects - #6 most used programming language
CMake
9771 projects

Projects that are alternatives of or similar to OMG Depth Fusion

arkit-depth-renderer
Displays the depth values received by the front-facing camera.
Stars: ✭ 48 (-7.69%)
Mutual labels:  depth-image, depth-camera
DiverseDepth
The code and data of DiverseDepth
Stars: ✭ 150 (+188.46%)
Mutual labels:  depth-estimation
OpenDepthSensor
Open library to support Kinect V1 & V2 & Azure, RealSense and OpenNI-compatible sensors.
Stars: ✭ 61 (+17.31%)
Mutual labels:  depth-camera
Visualizing-CNNs-for-monocular-depth-estimation
official implementation of "Visualization of Convolutional Neural Networks for Monocular Depth Estimation"
Stars: ✭ 120 (+130.77%)
Mutual labels:  depth-estimation
AI Learning Hub
AI Learning Hub for Machine Learning, Deep Learning, Computer Vision and Statistics
Stars: ✭ 53 (+1.92%)
Mutual labels:  gaussian-mixture-models
Dual-CNN-Models-for-Unsupervised-Monocular-Depth-Estimation
Dual CNN Models for Unsupervised Monocular Depth Estimation
Stars: ✭ 36 (-30.77%)
Mutual labels:  depth-estimation
diode-devkit
DIODE Development Toolkit
Stars: ✭ 58 (+11.54%)
Mutual labels:  depth-estimation
costmap depth camera
This is a costmap plugin for costmap_2d pkg. This plugin supports multiple depth cameras and run in real time.
Stars: ✭ 26 (-50%)
Mutual labels:  depth-camera
pais-mvs
Multi-view stereo image-based 3D reconstruction
Stars: ✭ 55 (+5.77%)
Mutual labels:  depth-estimation
Normal-Assisted-Stereo
[CVPR 2020] Normal Assisted Stereo Depth Estimation
Stars: ✭ 95 (+82.69%)
Mutual labels:  depth-estimation
MachineLearning
Implementations of machine learning algorithm by Python 3
Stars: ✭ 16 (-69.23%)
Mutual labels:  gaussian-mixture-models
StructureNet
Markerless volumetric alignment for depth sensors. Contains the code of the work "Deep Soft Procrustes for Markerless Volumetric Sensor Alignment" (IEEE VR 2020).
Stars: ✭ 38 (-26.92%)
Mutual labels:  depth-camera
DSGN
DSGN: Deep Stereo Geometry Network for 3D Object Detection (CVPR 2020)
Stars: ✭ 276 (+430.77%)
Mutual labels:  depth-estimation
project-defude
Refocus an image just by clicking on it with no additional data
Stars: ✭ 69 (+32.69%)
Mutual labels:  depth-estimation
SGDepth
[ECCV 2020] Self-Supervised Monocular Depth Estimation: Solving the Dynamic Object Problem by Semantic Guidance
Stars: ✭ 162 (+211.54%)
Mutual labels:  depth-estimation
kdsb17
Gaussian Mixture Convolutional AutoEncoder applied to CT lung scans from the Kaggle Data Science Bowl 2017
Stars: ✭ 18 (-65.38%)
Mutual labels:  gaussian-mixture-models
PyBGMM
Bayesian inference for Gaussian mixture model with some novel algorithms
Stars: ✭ 51 (-1.92%)
Mutual labels:  gaussian-mixture-models
FisheyeDistanceNet
FisheyeDistanceNet
Stars: ✭ 33 (-36.54%)
Mutual labels:  depth-estimation
Semantic-Mono-Depth
Geometry meets semantics for semi-supervised monocular depth estimation - ACCV 2018
Stars: ✭ 98 (+88.46%)
Mutual labels:  depth-estimation
deepgmr
PyTorch implementation of DeepGMR: Learning Latent Gaussian Mixture Models for Registration (ECCV 2020 spotlight)
Stars: ✭ 97 (+86.54%)
Mutual labels:  gaussian-mixture-models

Optimal Mixture of Gaussians for Depth Fusion

Implementation of the filter proposed in:
P.F. Proença and Y. Gao, Probabilistic RGB-D Odometry based on Points, Lines and Planes Under Depth Uncertainty, Robotics and Autonomous Systems, 2018

The OMG filter is aimed at denoising and hole-filling the depth maps given by a depth sensor, but also (more importantly) at capturing the depth uncertainty through spatio-temporal observations, which proved to be useful as an input to the probabilistic visual odometry system proposed in the related paper.
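
As a rough illustration of the fusion idea, the sketch below fuses a new depth measurement with the current per-pixel estimate by inverse-variance weighting of two Gaussians, and adopts the measurement directly where the estimate has a hole. This is a simplified, hypothetical example (function names and layout are assumptions); the actual OMG formulation in the paper maintains a mixture over several observations and is more involved.

// Hypothetical sketch: fusing two Gaussian depth estimates per pixel by
// inverse-variance weighting (a simplification of the full OMG filter).
#include <opencv2/core.hpp>

// depth/var: current per-pixel estimate; z/var_z: new (warped) measurement.
// Pixels without a valid value are assumed to hold 0.
void fuseGaussian(cv::Mat& depth, cv::Mat& var,
                  const cv::Mat& z, const cv::Mat& var_z)
{
    for (int r = 0; r < depth.rows; ++r) {
        for (int c = 0; c < depth.cols; ++c) {
            float d  = depth.at<float>(r, c), v  = var.at<float>(r, c);
            float dz = z.at<float>(r, c),     vz = var_z.at<float>(r, c);
            if (dz <= 0.0f) continue;          // no new measurement: keep estimate
            if (d <= 0.0f) {                   // hole-filling: adopt the measurement
                depth.at<float>(r, c) = dz;
                var.at<float>(r, c)   = vz;
                continue;
            }
            // Product of two Gaussians: inverse-variance weighted mean.
            float w = vz / (v + vz);
            depth.at<float>(r, c) = w * d + (1.0f - w) * dz;
            var.at<float>(r, c)   = (v * vz) / (v + vz);
        }
    }
}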

The code runs the OMG on RGB-D dataset sequences (e.g. ICL_NUIM and TUM). Some examples are included in this repository. This is a stripped-down version: it does not include the VO system, so instead of using the VO frame-to-frame pose, we rely on the groundtruth poses to transform the old measurements (the registered point cloud) to the current frame. The pose uncertainty condition was also removed. The sensor model employed is specific to the Kinect v1, so to use other sensors (e.g. ToF cameras), it should be changed.
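
For illustration, a sensor model of this kind typically maps depth to a per-pixel variance. The sketch below uses a commonly cited quadratic axial noise model for the Kinect v1 (sigma_z ≈ 1.425e-3 * z^2, with z in metres, following Khoshelham and Elberink); the form and coefficients actually used in this repository may differ, so treat this as an assumption.

// Hypothetical sketch of a depth-dependent variance model for the Kinect v1.
// The repository's actual sensor model may use different coefficients.
#include <opencv2/core.hpp>

cv::Mat depthVariance(const cv::Mat& depth_m)        // CV_32F depth in metres
{
    cv::Mat sigma = 1.425e-3f * depth_m.mul(depth_m); // standard deviation [m]
    return sigma.mul(sigma);                          // variance [m^2]
}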

Dependencies

  • OpenCV
  • Eigen3

Ubuntu Instructions

Tested with Ubuntu 14.04

To compile, type the following inside the directory ./OMG:

mkdir build
cd build
cmake ../
make

To run the executable type:

./test_OMG 4 9 TUM_RGBD sitting_static_short

or

./test_OMG 4 9 ICL_NUIM living_room_traj0_frei_png_short

Windows Instructions

Tested configuration: Windows 8.1 with Visual Studio 10 & 12

This version already includes a VC11 project. Just make the necessary changes to link the project against OpenCV and Eigen.

Usage

General format: ./test_OMG <max_nr_frames> <consistency_threshold> <dataset_directory> <sequence_name>

  • max_nr_frames: the window size minus 1, i.e., the maximum number of previous frames used for fusion.
  • consistency_threshold: used to avoid fusing inconsistent measurements. If the squared distance between a new measurement and the current estimate is more than the current uncertainty times this threshold, the new measurement is ignored (see the sketch after this list). Setting this higher may produce better quality but will capture less of the temporal uncertainty.
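
A minimal sketch of this gating rule, assuming scalar per-pixel estimates and variances (a hypothetical helper, not code from the repository):

// Returns true if the new measurement z is consistent with the current
// estimate, i.e. its squared distance does not exceed the current
// uncertainty scaled by the consistency threshold.
bool isConsistent(float estimate, float variance, float z, float threshold)
{
    float diff = z - estimate;
    return diff * diff <= threshold * variance;   // otherwise z is ignored
}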

Three windows should pop up, showing the raw depth, the fused depth and the respective fused uncertainty.

Data

Two short RGB-D sequences are included as examples. To add more sequences, download them from TUM_RGBD or ICL-NUIM and place them inside the directory ./Data in the same way as the examples. Then add the respective camera parameters as calib_params.xml, in the same format as the examples.
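
For reference, a calibration file in OpenCV XML format can be read with cv::FileStorage. The sketch below is only illustrative: the path and the field names (fx, fy, cx, cy) are assumptions and should be matched to the example calib_params.xml files shipped with the repository.

// Hypothetical example of reading camera intrinsics with cv::FileStorage.
#include <opencv2/core.hpp>
#include <iostream>

int main()
{
    // Assumed path; adjust to the actual sequence directory layout.
    cv::FileStorage fs("Data/TUM_RGBD/sitting_static_short/calib_params.xml",
                       cv::FileStorage::READ);
    if (!fs.isOpened()) { std::cerr << "Cannot open calib_params.xml\n"; return 1; }
    double fx, fy, cx, cy;
    fs["fx"] >> fx; fs["fy"] >> fy; fs["cx"] >> cx; fs["cy"] >> cy;
    std::cout << "fx=" << fx << " fy=" << fy
              << " cx=" << cx << " cy=" << cy << std::endl;
    return 0;
}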
