
silviutroscot / Codeslam

Implementation of CodeSLAM — Learning a Compact, Optimisable Representation for Dense Visual SLAM paper (https://arxiv.org/pdf/1804.00874.pdf)

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to Codeslam

shared-latent-space
Shared Latent Space VAE's
Stars: ✭ 15 (-76.56%)
Mutual labels:  autoencoder, variational-autoencoder
AutoEncoders
Variational autoencoder, denoising autoencoder and other variations of autoencoders implementation in keras
Stars: ✭ 14 (-78.12%)
Mutual labels:  autoencoder, variational-autoencoder
vae-pytorch
AE and VAE Playground in PyTorch
Stars: ✭ 53 (-17.19%)
Mutual labels:  autoencoder, variational-autoencoder
Link Prediction
Representation learning for link prediction within social networks
Stars: ✭ 245 (+282.81%)
Mutual labels:  autoencoder, representation-learning
Awesome Vaes
A curated list of awesome work on VAEs, disentanglement, representation learning, and generative models.
Stars: ✭ 418 (+553.13%)
Mutual labels:  representation-learning, variational-autoencoder
DESOM
🌐 Deep Embedded Self-Organizing Map: Joint Representation Learning and Self-Organization
Stars: ✭ 76 (+18.75%)
Mutual labels:  autoencoder, representation-learning
keras-adversarial-autoencoders
Experiments with Adversarial Autoencoders using Keras
Stars: ✭ 20 (-68.75%)
Mutual labels:  autoencoder, variational-autoencoder
Tybalt
Training and evaluating a variational autoencoder for pan-cancer gene expression data
Stars: ✭ 126 (+96.88%)
Mutual labels:  autoencoder, variational-autoencoder
Disentangling Vae
Experiments for understanding disentanglement in VAE latent representations
Stars: ✭ 398 (+521.88%)
Mutual labels:  representation-learning, variational-autoencoder
calc2.0
CALC2.0: Combining Appearance, Semantic and Geometric Information for Robust and Efficient Visual Loop Closure
Stars: ✭ 70 (+9.38%)
Mutual labels:  slam, variational-autoencoder
Focal Frequency Loss
Focal Frequency Loss for Generative Models
Stars: ✭ 141 (+120.31%)
Mutual labels:  autoencoder, variational-autoencoder
Neurec
Next RecSys Library
Stars: ✭ 731 (+1042.19%)
Mutual labels:  autoencoder, variational-autoencoder
Tensorflow Mnist Cvae
Tensorflow implementation of conditional variational auto-encoder for MNIST
Stars: ✭ 139 (+117.19%)
Mutual labels:  autoencoder, variational-autoencoder
autoencoders tensorflow
Automatic feature engineering using deep learning and Bayesian inference using TensorFlow.
Stars: ✭ 66 (+3.13%)
Mutual labels:  autoencoder, representation-learning
Kate
Code & data accompanying the KDD 2017 paper "KATE: K-Competitive Autoencoder for Text"
Stars: ✭ 135 (+110.94%)
Mutual labels:  autoencoder, representation-learning
haskell-vae
Learning about Haskell with Variational Autoencoders
Stars: ✭ 18 (-71.87%)
Mutual labels:  autoencoder, variational-autoencoder
Rectorch
rectorch is a pytorch-based framework for state-of-the-art top-N recommendation
Stars: ✭ 121 (+89.06%)
Mutual labels:  autoencoder, variational-autoencoder
Srl Zoo
State Representation Learning (SRL) zoo with PyTorch - Part of S-RL Toolbox
Stars: ✭ 125 (+95.31%)
Mutual labels:  autoencoder, representation-learning
srVAE
VAE with RealNVP prior and Super-Resolution VAE in PyTorch. Code release for https://arxiv.org/abs/2006.05218.
Stars: ✭ 56 (-12.5%)
Mutual labels:  representation-learning, variational-autoencoder
Tensorflow Mnist Vae
Tensorflow implementation of variational auto-encoder for MNIST
Stars: ✭ 422 (+559.38%)
Mutual labels:  autoencoder, variational-autoencoder

CodeSLAM

PyTorch implementation of CodeSLAM - Learning a Compact, Optimisable Representation for Dense Visual SLAM.

Summary

Problems it tries to tackle/solve

  • Representation of geometry in real 3D perception systems.
    • Dense representations, possibly augmented with semantic labels, are high-dimensional and unsuitable for probabilistic inference.
    • Sparse representations avoid these problems but capture only partial scene information.

The new approach/solution

  • New compact but dense representation of scene geometry, conditioned on the intensity data from a single image and generated from a code consisting of a small number of parameters.
  • Each keyframe can produce a depth map, and its code can be optimised jointly with pose variables and with the codes of overlapping keyframes for global consistency.
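The conditioning idea above can be sketched in PyTorch. This is a minimal illustration, not the paper's actual network: the layer shapes, the single-convolution "encoder", and the way the code is broadcast over the image grid are all simplifying assumptions, standing in for the U-Net-style architecture a real implementation would use.

```python
import torch
import torch.nn as nn

class ConditionedDepthDecoder(nn.Module):
    """Toy decoder: dense depth from a compact code + intensity image.

    Hypothetical layer sizes; a real system would use a deeper,
    U-Net-style feature extractor for the intensity image.
    """
    def __init__(self, code_size=32, feat_channels=16):
        super().__init__()
        # Stand-in for the intensity-image feature extractor
        self.image_encoder = nn.Conv2d(1, feat_channels, 3, padding=1)
        # Fuse image features with the code broadcast over the grid
        self.fuse = nn.Conv2d(feat_channels + code_size, 32, 3, padding=1)
        self.to_depth = nn.Conv2d(32, 1, 3, padding=1)

    def forward(self, intensity, code):
        b, _, h, w = intensity.shape
        feats = torch.relu(self.image_encoder(intensity))
        # Tile the compact code across every pixel location
        code_map = code.view(b, -1, 1, 1).expand(b, code.shape[1], h, w)
        fused = torch.relu(self.fuse(torch.cat([feats, code_map], dim=1)))
        return self.to_depth(fused)  # dense depth estimate

decoder = ConditionedDepthDecoder()
image = torch.randn(1, 1, 64, 64)   # grayscale keyframe
code = torch.zeros(1, 32)           # compact scene code (e.g. prior mean)
depth = decoder(image, code)
print(depth.shape)  # torch.Size([1, 1, 64, 64])
```

Because the depth map is differentiable with respect to the code, the code can later be refined by gradient-based optimisation while the image stays fixed.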

Introduction

  • As uncertainty propagation quickly becomes intractable for large numbers of degrees of freedom, approaches to SLAM split into two categories:
    • sparse SLAM, which represents geometry by a sparse set of features
    • dense SLAM, which attempts to retrieve a more complete description of the environment.
  • The geometry of natural scenes exhibits a high degree of order, so we may not need a large number of parameters to represent it.
  • Besides that, a scene could be decomposed into a set of semantic objects (e.g. a chair) together with some internal parameters (e.g. size of the chair, number of legs) and a pose. Other, more general scene elements that exhibit simple regularity can be recognised and parametrised within SLAM systems.
  • A straightforward autoencoder might oversimplify the reconstruction of natural scenes; the novelty here is to condition the training on intensity images.
  • A scene map consists of a set of selected and estimated historical camera poses together with the corresponding captured images and supplementary local information such as depth estimates. The intensity images are usually required for additional tasks.
  • The depth map estimate becomes a function of the corresponding intensity image and an unknown compact representation (referred to as the code).
  • We can think of the image as providing local details and the code as supplying more global shape parameters; this can be seen as a step towards enabling optimisation in a general semantic space.
  • The two key contributions of the paper are:
    • The derivation of a compact and optimisable representation of dense geometry by conditioning a depth autoencoder on intensity images.
    • The implementation of the first real-time targeted monocular system that achieves such a tight joint optimisation of motion and dense geometry.
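The "optimisable representation" idea can be illustrated with a toy experiment: with the decoder weights frozen, the code is treated as an optimisation variable and refined by gradient descent against a reconstruction error. This is a deliberately simplified sketch; a real system jointly optimises codes and camera poses over overlapping keyframes, and a linear decoder here stands in for the trained conditioned network.

```python
import torch

torch.manual_seed(0)

# Toy frozen "decoder": maps an 8-dim code to a 64-dim depth vector.
decoder = torch.nn.Linear(8, 64)
for p in decoder.parameters():
    p.requires_grad_(False)

# Synthetic "observed" depth, generated from some unknown true code.
target_depth = decoder(torch.randn(8))

# The code itself is the optimisation variable, initialised to zero
# (the prior mean in a VAE-style formulation).
code = torch.zeros(8, requires_grad=True)
opt = torch.optim.Adam([code], lr=0.1)

for _ in range(200):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(decoder(code), target_depth)
    loss.backward()
    opt.step()

print(loss.item())  # should be close to zero after optimisation
```

The same mechanism extends to the multi-keyframe case: the photometric/geometric residuals between overlapping keyframes become the loss, and poses are optimised alongside the codes.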

Usage

  • Generate the Python module for the protobuf: protoc --python_out=./ scenenet.proto

Results

Requirements

  • Python 3.4+
  • PyTorch 1.0+
  • Torchvision 0.4.0+