raluca-scona / staticfusion

Licence: other
StaticFusion

Programming Languages

C++
36643 projects - #6 most used programming language
GLSL
2045 projects
CMake
9771 projects

Projects that are alternatives of or similar to staticfusion

Dynaslam
DynaSLAM is a SLAM system robust in dynamic environments for monocular, stereo and RGB-D setups
Stars: ✭ 426 (+298.13%)
Mutual labels:  dynamic, slam
Co Fusion
Co-Fusion: Real-time Segmentation, Tracking and Fusion of Multiple Objects
Stars: ✭ 400 (+273.83%)
Mutual labels:  slam, rgbd
RGB-D-SLAM
Work in Progress. A SLAM implementation based on plane and superquadric tracking.
Stars: ✭ 23 (-78.5%)
Mutual labels:  slam, rgbd
Maskfusion
MaskFusion: Real-Time Recognition, Tracking and Reconstruction of Multiple Moving Objects
Stars: ✭ 404 (+277.57%)
Mutual labels:  slam, rgbd
rgbd ptam
Python implementation of RGBD-PTAM algorithm
Stars: ✭ 65 (-39.25%)
Mutual labels:  slam, rgbd
dvo python
Coding dense visual odometry in a little more than a night (yikes)!
Stars: ✭ 40 (-62.62%)
Mutual labels:  slam, rgbd
Dynamic ORB SLAM2
Visual SLAM system that can identify and exclude dynamic objects.
Stars: ✭ 89 (-16.82%)
Mutual labels:  dynamic, slam
PlaceInvaders
Multiplayer AR game sample
Stars: ✭ 24 (-77.57%)
Mutual labels:  slam
gmmloc
Implementation for IROS2020: "GMMLoc: Structure Consistent Visual Localization with Gaussian Mixture Model"
Stars: ✭ 91 (-14.95%)
Mutual labels:  slam
Rotation Coordinate Descent
(CVPR 2020 Oral) A fast global rotation averaging algorithm.
Stars: ✭ 44 (-58.88%)
Mutual labels:  slam
2019-UGRP-DPoom
2019 DGIST DPoom project under UGRP : SBC and RGB-D camera based full autonomous driving system for mobile robot with indoor SLAM
Stars: ✭ 35 (-67.29%)
Mutual labels:  slam
Dynamic-Splitscreen
A simple variation of the commonly used dynamic splitscreen, popularized by the LEGO games.
Stars: ✭ 105 (-1.87%)
Mutual labels:  dynamic
awesome-lidar
😎 Awesome LIDAR list. The list includes LIDAR manufacturers, datasets, point cloud-processing algorithms, point cloud frameworks and simulators.
Stars: ✭ 217 (+102.8%)
Mutual labels:  slam
flutter dynamic
The flutter_dynamic is a library that create flutter application dynamic.
Stars: ✭ 66 (-38.32%)
Mutual labels:  dynamic
instrumentation
Assorted pintools
Stars: ✭ 24 (-77.57%)
Mutual labels:  dynamic
bench ws
A catkin workspace to compare against different state-estimation algorithms namely VINS-Mono, VINS-Fusion, ORBSLAM3, Stereo-MSCKF, etc.
Stars: ✭ 15 (-85.98%)
Mutual labels:  slam
UrbanLoco
UrbanLoco: A Full Sensor Suite Dataset for Mapping and Localization in Urban Scenes
Stars: ✭ 147 (+37.38%)
Mutual labels:  slam
SALSA-Semantic-Assisted-SLAM
SALSA: Semantic Assisted Life-Long SLAM for Indoor Environments (16-833 SLAM Project at CMU)
Stars: ✭ 52 (-51.4%)
Mutual labels:  slam
cocoapods-user-defined-build-types
⚒ A cocoapods plugin that can selectively set build type per pod (static library, dynamic framework, etc.)
Stars: ✭ 91 (-14.95%)
Mutual labels:  dynamic
G2LTex
Code for CVPR 2018 paper --- Texture Mapping for 3D Reconstruction with RGB-D Sensor
Stars: ✭ 104 (-2.8%)
Mutual labels:  slam

StaticFusion

This repository contains StaticFusion, a method for dense RGB-D SLAM in dynamic environments based on a strategy of simultaneous odometry and dynamic object segmentation.

It has been tested on Ubuntu 16.04.

Publication

If you use StaticFusion in your own work, please cite our paper:

  • StaticFusion: Background Reconstruction for Dense RGB-D SLAM in Dynamic Environments. Raluca Scona, Mariano Jaimez, Yvan R. Petillot, Maurice Fallon, Daniel Cremers. IEEE International Conference on Robotics and Automation (ICRA) 2018
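A BibTeX entry may be convenient; the fields below are taken from the reference above (the entry key is our own invention, and page numbers are not given there, so verify against the published version):

```bibtex
@inproceedings{scona2018staticfusion,
  title     = {{StaticFusion}: Background Reconstruction for Dense {RGB-D} {SLAM} in Dynamic Environments},
  author    = {Scona, Raluca and Jaimez, Mariano and Petillot, Yvan R. and Fallon, Maurice and Cremers, Daniel},
  booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
  year      = {2018}
}
```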

Configuration and Dependencies

StaticFusion is organised as a CMake project. Its dependencies (MRPT, OpenCV, OpenNI2, Pangolin, and a few supporting libraries) are covered by the install steps below.

Install Steps

1. Ubuntu 16.04

Given a clean installation of Ubuntu, the necessary steps for installing StaticFusion are:

  1. Our approach requires a GPU to run, so make sure to install the necessary driver for your graphics card. We have only tested the code on Nvidia GPUs so far.

  2. Install the necessary libraries (and any required dependencies):

sudo apt-get install cmake libmrpt-dev freeglut3-dev libglew-dev libopencv-dev libopenni2-dev git

  3. Download and compile Pangolin.

  4. Compile the project, e.g.:

cd static-fusion 
mkdir build
cd build
cmake ..
make

2. Windows (using CMake and Visual Studio)

  1. For the required dependencies, download pre-compiled binaries or build them from source. Don't forget to add the directories containing the binaries to the Windows PATH.

  2. Please build Pangolin from source and disable the flag "MSVC_USE_STATIC_CRT" in CMake. This is required because Pangolin uses static linking by default on Windows, while we use dynamic linking.

  3. Generate the solution with CMake and compile / run the project you want to try.

Troubleshooting:

  • The pangolin window looks completely white -> Resize it once to force it to show the content.
  • Runtime error "OpenGL Error: XX (1282) In: ...\source\include\pangolin/gl/gl.hpp". We solved this by following stevenlovegrove/Pangolin#149. Apparently, some computers default to the integrated GPU even when a more capable discrete one is available, and the integrated GPU may not support some of the functionality used. If you only have an integrated GPU, it may likewise lack some of the operations implemented in our code.

Running Experiments

There are three executables you can run:

1) StaticFusion-Camera: runs off a live RGB-D camera feed, assuming an OpenNI-compatible camera.

2) StaticFusion-Datasets: runs on the TUM/Freiburg RGB-D datasets in Rawlog format.

  • run: ./StaticFusion-Datasets ~/Downloads/rawlog_rgbd_dataset_freiburg1_360/rgbd_dataset_freiburg1_360.rawlog
  • You can download these sequences here. If you would like to perform a quantitative evaluation, set bool save_results = true in the StaticFusion-datasets.cpp file to save the estimated trajectory to a file. This trajectory can then be evaluated here: http://vision.in.tum.de/data/datasets/rgbd-dataset/online_evaluation.

3) StaticFusion-ImageSequenceAssoc: runs on images stored on disk.

  • run: ./StaticFusion-ImageSeqAssoc ~/Downloads/ball
  • The expected image format in this case is the same as listed here, assuming the associate.py script was used to associate color and depth images.

The organisational structure of the dataset should be:

/dataset/rgb/   - folder containing all color images
/dataset/depth/ - folder containing all depth images
/dataset/rgbd_assoc.txt

Where rgbd_assoc.txt contains a list of items of the form:

timestamp1 /rgb/timestamp1.png timestamp2 /depth/timestamp2.png
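As a sketch, the expected layout and association file can be mocked up like this (the timestamps and filenames below are fabricated for illustration; real ones come from the sensor or from associate.py):

```shell
# Create a toy dataset tree matching the layout described above.
ds="$(mktemp -d)/dataset"
mkdir -p "$ds/rgb" "$ds/depth"
: > "$ds/rgbd_assoc.txt"
for i in 1 2 3; do
  t_rgb="100$i.000000"          # color timestamp (fabricated)
  t_depth="100$i.010000"        # nearest depth timestamp (fabricated)
  : > "$ds/rgb/$t_rgb.png"      # placeholder files, not real PNGs
  : > "$ds/depth/$t_depth.png"
  printf '%s /rgb/%s.png %s /depth/%s.png\n' \
    "$t_rgb" "$t_rgb" "$t_depth" "$t_depth" >> "$ds/rgbd_assoc.txt"
done
cat "$ds/rgbd_assoc.txt"        # one four-field association line per frame pair
```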

Should you wish to modify this, the code can be found in the file FrontEnd.cpp, in the methods loadAssoc and loadImageFromSequenceAssoc.

The expected format of the images:

  • color images - 8-bit PNG. Resolution: VGA
  • depth images - 16-bit monochrome PNG, scaled by 1000 (i.e. depth stored in millimetres). Resolution: VGA
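The scale factor means a stored depth pixel is simply the range multiplied by 1000; as a sketch of the convention (not project code):

```shell
# Depth convention: stored value = metres * 1000 (i.e. millimetres).
depth_m=1.234                                           # range in metres
stored=$(awk -v d="$depth_m" 'BEGIN{printf "%.0f", d*1000}')
echo "$stored"   # a point 1.234 m from the camera is stored as 1234
```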

Example sequences

ball

selfie

All parameters are optimised for QVGA resolution.

ElasticFusion

We credit ElasticFusion as a significant basis for our work.
