VCL3D / Deepdepthdenoising

License: MIT
This repo includes the source code of the fully convolutional depth denoising model presented in https://arxiv.org/pdf/1909.01193.pdf (ICCV19)

Self-supervised Deep Depth Denoising

Paper (https://arxiv.org/pdf/1909.01193.pdf) | Conference | Project Page

Created by Vladimiros Sterzentsenko*, Leonidas Saroglou*, Anargyros Chatzitofis*, Spyridon Thermos*, Nikolaos Zioulis*, Alexandros Doumanoglou, Dimitrios Zarpalas, and Petros Daras from the Visual Computing Lab @ CERTH


About this repo

This repo includes the training and evaluation scripts for the fully convolutional autoencoder presented in our paper "Self-Supervised Deep Depth Denoising" (to appear in ICCV 2019). The autoencoder is trained in a self-supervised manner, exploiting RGB-D data captured by Intel RealSense D415 sensors. During inference, the model denoises depth maps without the need for RGB data.

Installation

The code has been tested with the following setup:

  • PyTorch 1.0.1
  • Python 3.7.2
  • CUDA 9.1
  • Visdom

Model Architecture

(network architecture diagram)

Encoder: 9 CONV layers; the input is downsampled 3 times before reaching the latent space, and the number of channels is doubled after each downsampling.

Bottleneck: 2 pre-activation residual blocks with an ELU-CONV-ELU-CONV structure.

Decoder: 9 CONV layers; the input is upsampled 3 times using interpolation, each followed by a CONV layer.
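The downsampling/upsampling arithmetic above can be sketched in plain Python. Note that the 640×480 input resolution and the base channel count of 32 are illustrative assumptions, not values taken from the paper:

```python
# Sketch of how feature-map shapes evolve through the hourglass:
# each of the 3 downsamplings halves H and W and doubles the channels;
# the decoder mirrors this with 3 interpolation + CONV upsamplings.

def encoder_shapes(h, w, base_channels, n_down=3):
    """Return (channels, height, width) at the input and after each downsampling."""
    shapes = [(base_channels, h, w)]
    c = base_channels
    for _ in range(n_down):
        h, w, c = h // 2, w // 2, c * 2
        shapes.append((c, h, w))
    return shapes

def decoder_shapes(bottleneck_shape, n_up=3):
    """Mirror the encoder: each upsampling doubles H and W and halves channels."""
    c, h, w = bottleneck_shape
    shapes = [(c, h, w)]
    for _ in range(n_up):
        h, w, c = h * 2, w * 2, c // 2
        shapes.append((c, h, w))
    return shapes

enc = encoder_shapes(480, 640, base_channels=32)  # hypothetical input size
dec = decoder_shapes(enc[-1])
print(enc)      # [(32, 480, 640), (64, 240, 320), (128, 120, 160), (256, 60, 80)]
print(dec[-1])  # (32, 480, 640) -- back at the input resolution
```

The decoder mirrors the encoder exactly, so the denoised depth map comes out at the same resolution as the noisy input.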

Train

To see the available training parameters:

python train.py -h

Training example:

python train.py --batchsize 2 --epochs 20 --lr 0.00002 --visdom --visdom_iters 500 --disp_iters 10 --train_path /path/to/train/set
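For reference, here is a minimal argparse sketch that accepts the flags used in the example above. The defaults and help strings are illustrative; the actual train.py may define additional options and different defaults, so run python train.py -h for the authoritative list:

```python
import argparse

# Minimal sketch of a parser for the flags shown in the training example;
# defaults here are placeholders, not the repository's actual defaults.
parser = argparse.ArgumentParser(
    description="Self-supervised depth denoising training (sketch)")
parser.add_argument("--batchsize", type=int, default=2,
                    help="samples per training batch")
parser.add_argument("--epochs", type=int, default=20,
                    help="number of training epochs")
parser.add_argument("--lr", type=float, default=2e-5,
                    help="learning rate")
parser.add_argument("--visdom", action="store_true",
                    help="enable Visdom visualization")
parser.add_argument("--visdom_iters", type=int, default=500,
                    help="iterations between Visdom updates")
parser.add_argument("--disp_iters", type=int, default=10,
                    help="iterations between console logs")
parser.add_argument("--train_path", type=str, required=True,
                    help="path to the training set")

# Parse the exact command line from the training example above
# (with a hypothetical /data/train path standing in for the real one).
args = parser.parse_args(
    "--batchsize 2 --epochs 20 --lr 0.00002 --visdom "
    "--visdom_iters 500 --disp_iters 10 --train_path /data/train".split()
)
print(args.lr, args.visdom)  # 2e-05 True
```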

Inference

The weights of the pretrained models can be downloaded from here:

  • ddd --> trained with multi-view supervision (as presented in the paper)
  • ddd_ae --> same model architecture, trained without multi-view supervision (for comparison purposes)

To denoise a RealSense D415 depth sample using a pretrained model:

python inference.py --model_path /path/to/pretrained/model --input_path /path/to/noisy/sample --output_path /path/to/save/denoised/sample

To save the input (noisy) and output (denoised) samples as point clouds, add the following flag to the inference script execution:

--pointclouds True

To denoise a sample using the pretrained autoencoder (the same model trained without splatting), add the following flag to the inference script (and make sure you load the "ddd_ae" model):

--autoencoder True

Benchmarking: the mean inference time on a GeForce GTX 1080 GPU is 11ms.

Citation

If you use this code and/or models, please cite the following:

@inproceedings{sterzentsenko2019denoising,
  author       = "Vladimiros Sterzentsenko and Leonidas Saroglou and Anargyros Chatzitofis and Spyridon Thermos and Nikolaos Zioulis and Alexandros Doumanoglou and Dimitrios Zarpalas and Petros Daras",
  title        = "Self-Supervised Deep Depth Denoising",
  booktitle    = "ICCV",
  year         = "2019"
}

License

Our code is released under the MIT License (see the LICENSE file for details).
