
ROS Distro: Melodic · License: BSD-3-Clause

SALSA: Semantic Assisted Life-Long SLAM for Indoor Environments

We propose a learning-augmented lifelong SLAM method for indoor environments. Most existing SLAM methods assume a static environment and disregard dynamic objects. Moreover, most feature- and semantic-based SLAM methods fail in repetitive environments. Unexpected changes in the surroundings corrupt the quality of tracking and lead to system failure. This project uses learning methods to classify landmarks and objects as dynamic and/or repeatable in nature, to better handle optimization, achieve robust performance in a changing environment, and re-localize in a lifelong setting. We use semantic information to assign scores to object feature points based on their probability of being dynamic and/or repeatable. We update the front-end, the optimization cost functions, and the BoW feature generation for loop closures from the original ORB-SLAM2 pipeline. Please see our paper for more details.
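The exact cost terms are given in our report; the core idea of score-based weighting can be sketched as scaling each feature's squared reprojection error by a semantic weight, so that features on likely-dynamic objects contribute less to the optimization. The linear score-to-weight mapping and the example values below are illustrative assumptions, not the project's actual parameters:

```python
import numpy as np

def semantic_weight(score, min_weight=0.0):
    """Map a semantic score in [0, 1] (0 = dynamic, 1 = static/repeatable)
    to an optimization weight. A linear mapping is assumed here."""
    return max(min_weight, score)

def weighted_reprojection_cost(residuals, scores):
    """Sum of squared 2D reprojection errors, down-weighted per feature
    by its semantic score."""
    residuals = np.asarray(residuals, dtype=float)
    weights = np.array([semantic_weight(s) for s in scores])
    return float(np.sum(weights * np.sum(residuals ** 2, axis=1)))

# A feature on a person (score 0.0) contributes nothing to the cost;
# a feature on a wall (score 1.0) contributes fully.
cost = weighted_reprojection_cost([[3.0, 4.0], [1.0, 0.0]], [0.0, 1.0])
```

Down-weighting (rather than hard removal) lets possibly-dynamic features still constrain the pose when little else is visible.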

All 80 classes from the COCO dataset are assigned scores. This list is only a subset of some interesting objects. Some objects are exclusively labelled as dynamic objects.
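As a sketch of the class-to-score assignment, a lookup table could look like the following. The specific values here are hypothetical placeholders, not the scores actually used by SALSA:

```python
# Illustrative (not the project's actual) score assignments for a few COCO
# classes: 0.0 = dynamic, 1.0 = static and repeatable.
COCO_CLASS_SCORES = {
    "person": 0.0,        # exclusively dynamic
    "chair": 0.4,         # movable, semi-repeatable
    "tv": 0.8,            # rarely moved
    "refrigerator": 1.0,  # effectively static furniture
}

def score_for(class_name, default=1.0):
    """Look up a class score; classes outside the table default to static
    (an assumed convention for this sketch)."""
    return COCO_CLASS_SCORES.get(class_name, default)
```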


Mask-RCNN was used to segment object instances and the scores were mapped according to the scale shown on the right. The left column shows score maps for the OpenLORIS-Scene [3] cafe1-2 sequence and the right column shows the score maps for TUM-RGBD walking_static sequence.
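The score maps described above can be thought of as painting each instance mask with its class score over a static background. The encoding used by the actual pipeline is not specified here; this is a minimal sketch with an assumed convention (lower score = more dynamic, overlaps resolved in favor of the more dynamic instance):

```python
import numpy as np

def build_score_map(shape, instances, background_score=1.0):
    """Paint each (boolean_mask, score) pair into a float score map.
    Background pixels are treated as static (score 1.0, assumed)."""
    score_map = np.full(shape, background_score, dtype=np.float32)
    # Paint more-static instances first so dynamic ones win overlaps.
    for mask, score in sorted(instances, key=lambda pair: -pair[1]):
        score_map[mask] = score
    return score_map

# Toy 4x4 example: a "person" mask (score 0.0) over a static background.
h, w = 4, 4
person = np.zeros((h, w), dtype=bool)
person[1:3, 1:3] = True
smap = build_score_map((h, w), [(person, 0.0)])
```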

Setup and Usage

The current version does not perform segmentation online; the score maps are pre-computed using Detectron2's Mask-RCNN network and appended to the ROS bag file (using scripts/update_rosbag.py). See USAGE.md for more details on setting up and running this pipeline.

catkin build
roslaunch orb_slam_2_ros salsa.launch

Please see our report for the changes we have made to the pipeline. We have tested on monocular camera data; RGB-D support is still experimental and may work. Stereo mode is supported in the back-end optimization, but the front-end (feature culling and ROS node integration) is not fully supported yet.

Results

RMSE Absolute Trajectory Error (ATE) on TUM-RGBD Dataset

Sequences         ORB-SLAM2 [1]   DS-SLAM [2]   SALSA (Ours)
walking_static    0.4030m         0.0081m       0.0059m
walking_xyz       0.1780m         0.0247m       0.0521m

RMSE Relative Pose Error (RPE) on TUM-RGBD Dataset

Sequences         ORB-SLAM2 [1]   DS-SLAM [2]   SALSA (Ours)
walking_static    0.2162m         0.0102m       0.00829m
walking_xyz       0.4124m         0.0333m       0.02951m

RMSE Absolute Trajectory Error (ATE) on OpenLORIS-Scene [3] Dataset

Sequences   ORB-SLAM2 [1]   SALSA (Ours)
cafe1_1     0.0777m         0.0464m
cafe1_2     0.0813m         0.0588m

Results of tracking on OpenLORIS-Scene [3] cafe1-1 sequence, TUM-RGBD walking_static and walking_xyz sequences. The columns depict the ground truth trajectory, estimated trajectory results, and tracking and mapping results visualized in Rviz.

Videos

TUM-RGBD Sequences: YouTube Link
OpenLORIS-Scene Sequences: YouTube Link

Dynamic feature points in the red mask region are removed. Possibly dynamic and repeatable feature points are shaded green/blue, and the remaining static object feature points are labelled in yellow.

Citation

Please cite our work if you use SALSA.

@unpublished{SALSA2020,
    author = {Ayush Jhalani and Heethesh Vhavle and Sachit Mahajan},
    title  = {{SALSA}: Semantic Assisted Life-Long {SLAM} for Indoor Environments},
    note   = {16-833 Robot Localization and Mapping (Spring 2020) Final Project at Carnegie Mellon University},
    month  = {May},
    year   = {2020}
}

References

[1] R. Mur-Artal and J. D. Tardós, "ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras," IEEE Transactions on Robotics, vol. 33, no. 5, pp. 1255–1262, 2017.

[2] C. Yu, Z. Liu, X. Liu, F. Xie, Y. Yang, Q. Wei, and Q. Fei, "DS-SLAM: A semantic visual SLAM towards dynamic environments," in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018, pp. 1168–1174.

[3] X. Shi, D. Li, P. Zhao, Q. Tian, Y. Tian, Q. Long, C. Zhu, J. Song, F. Qiao, L. Song, Y. Guo, Z. Wang, Y. Zhang, B. Qin, W. Yang, F. Wang, R. H. M. Chan, and Q. She, "Are we ready for service robots? The OpenLORIS-Scene datasets for lifelong SLAM," 2019. Dataset Link
