
PRBonn / semantic_suma

Licence: MIT
SuMa++: Efficient LiDAR-based Semantic SLAM (Chen et al., IROS 2019)

Projects that are alternatives to or similar to semantic_suma

SC-LeGO-LOAM
LiDAR SLAM: Scan Context + LeGO-LOAM
Stars: ✭ 332 (-22.97%)
Mutual labels:  slam, lidar
opensource_slam_noted
Notes on open-source SLAM systems
Stars: ✭ 119 (-72.39%)
Mutual labels:  lidar, slam
puma
Poisson Surface Reconstruction for LiDAR Odometry and Mapping
Stars: ✭ 302 (-29.93%)
Mutual labels:  lidar, slam
DSP-SLAM
[3DV 2021] DSP-SLAM: Object Oriented SLAM with Deep Shape Priors
Stars: ✭ 377 (-12.53%)
Mutual labels:  lidar, slam
awesome-lidar
😎 Awesome LIDAR list. The list includes LIDAR manufacturers, datasets, point-cloud processing algorithms, point-cloud frameworks, and simulators.
Stars: ✭ 217 (-49.65%)
Mutual labels:  lidar, slam
slam_docker_collection
A collection of Docker environments for 3D SLAM packages
Stars: ✭ 130 (-69.84%)
Mutual labels:  lidar, slam
ndtpso_slam
ROS package for NDT-PSO, a 2D laser scan-matching algorithm for SLAM
Stars: ✭ 32 (-92.58%)
Mutual labels:  lidar, slam
StaticMapping
Use LiDAR to map the static world
Stars: ✭ 191 (-55.68%)
Mutual labels:  slam, lidar
direct_lidar_odometry
Direct LiDAR Odometry: Fast Localization with Dense Point Clouds
Stars: ✭ 202 (-53.13%)
Mutual labels:  lidar, slam
FAST_LIO_SLAM
LiDAR SLAM = FAST-LIO + Scan Context
Stars: ✭ 183 (-57.54%)
Mutual labels:  lidar, slam
SLAM-application
LeGO-LOAM, LIO-SAM, LVI-SAM, FAST-LIO2, Faster-LIO, VoxelMap, R3LIVE application and comparison on Gazebo and real-world datasets. Installation and config files are provided.
Stars: ✭ 258 (-40.14%)
Mutual labels:  lidar, slam
interactive_slam
Interactive Map Correction for 3D Graph SLAM
Stars: ✭ 372 (-13.69%)
Mutual labels:  slam, lidar
li_slam_ros2
ROS 2 package for tightly coupled LiDAR-inertial NDT/GICP SLAM
Stars: ✭ 160 (-62.88%)
Mutual labels:  lidar, slam
lt-mapper
A Modular Framework for LiDAR-based Lifelong Mapping
Stars: ✭ 301 (-30.16%)
Mutual labels:  lidar, slam
Mola
A Modular Optimization framework for Localization and mApping (MOLA)
Stars: ✭ 206 (-52.2%)
Mutual labels:  slam, lidar
mola-fe-lidar
MOLA module: Front-end for point-cloud sensors based on generic ICP algorithms. LiDAR odometry and loop closure.
Stars: ✭ 16 (-96.29%)
Mutual labels:  lidar, slam
PyICP-SLAM
Full-python LiDAR SLAM using ICP and Scan Context
Stars: ✭ 155 (-64.04%)
Mutual labels:  slam, lidar
Recent_SLAM_Research
Track Advancement of SLAM: tracking the latest developments in SLAM research (2021 version)
Stars: ✭ 2,387 (+453.83%)
Mutual labels:  slam, semantic
SLAM_Qt
My small SLAM simulator to study "SLAM for dummies"
Stars: ✭ 47 (-89.1%)
Mutual labels:  lidar, slam
UrbanLoco
UrbanLoco: A Full Sensor Suite Dataset for Mapping and Localization in Urban Scenes
Stars: ✭ 147 (-65.89%)
Mutual labels:  lidar, slam

SuMa++: Efficient LiDAR-based Semantic SLAM

This repository contains the implementation of SuMa++, which generates semantic maps using only three-dimensional laser range scans.

Developed by Xieyuanli Chen and Jens Behley.

SuMa++ is built upon SuMa and RangeNet++. For more details, we refer to the original SuMa and RangeNet++ project pages.

An example of using SuMa++: [semantic point-cloud visualization]

Publication

If you use our implementation in your academic work, please cite the corresponding paper:

@inproceedings{chen2019iros, 
		author = {X. Chen and A. Milioto and E. Palazzolo and P. Giguère and J. Behley and C. Stachniss},
		title  = {{SuMa++: Efficient LiDAR-based Semantic SLAM}},
		booktitle = {Proceedings of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS)},
		year = {2019},
		codeurl = {https://github.com/PRBonn/semantic_suma/},
		videourl = {https://youtu.be/uo3ZuLuFAzk},
}

Dependencies

  • catkin
  • Qt5 >= 5.2.1
  • OpenGL >= 4.0
  • libEigen >= 3.2
  • gtsam >= 4.0 (tested with 4.0.0-alpha2)

On Ubuntu 16.04, installing all dependencies should be accomplished by:

sudo apt-get install build-essential cmake libgtest-dev libeigen3-dev libboost-all-dev qtbase5-dev libglew-dev libqt5libqgtk2 catkin

Additionally, make sure you have catkin-tools and the fetch verb installed:

sudo apt install python-pip
sudo pip install catkin_tools catkin_tools_fetch empy
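
Note that the apt command above does not cover gtsam. As a minimal sketch (assuming the upstream GitHub repository and a standard out-of-source CMake build; adjust paths and parallelism to your machine), the tested 4.0.0-alpha2 release can be built from source as follows:

git clone https://github.com/borglab/gtsam.git
cd gtsam
git checkout 4.0.0-alpha2   # the release the authors tested against
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release ..
make -j4
sudo make install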

Build

rangenet_lib

To use SuMa++, you first need to build rangenet_lib with the TensorRT and C++ interface. More details about building and using rangenet_lib can be found in the rangenet_lib repository.
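
As a rough sketch (assuming a catkin workspace at ~/catkin_ws; adjust the path to your setup), fetching rangenet_lib into that workspace could look like:

cd ~/catkin_ws/src
git clone https://github.com/PRBonn/rangenet_lib.git

See the rangenet_lib repository for its TensorRT prerequisites and the actual build steps.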

SuMa++

Clone the repository in the src directory of the same catkin workspace where you built the rangenet_lib:

git clone https://github.com/PRBonn/semantic_suma.git

Download the additional dependencies (or clone glow into your catkin workspace src yourself):

catkin deps fetch

For the first setup of a workspace containing this project, you need to run:

catkin build --save-config -i --cmake-args -DCMAKE_BUILD_TYPE=Release -DOPENGL_VERSION=430 -DENABLE_NVIDIA_EXT=YES

Here you have to set OPENGL_VERSION to the OpenGL core profile version supported by your system, which you can query as follows:

$ glxinfo | grep "version"
server glx version string: 1.4
client glx version string: 1.4
GLX version: 1.4
OpenGL core profile version string: 4.3.0 NVIDIA 367.44
OpenGL core profile shading language version string: 4.30 NVIDIA [...]
OpenGL version string: 4.5.0 NVIDIA 367.44
OpenGL shading language version string: 4.50 NVIDIA

Here the line OpenGL core profile version string: 4.3.0 NVIDIA 367.44 is the relevant one, so you should use -DOPENGL_VERSION=430. If you are unsure, you can also leave it at the default of 330, which should be supported by all OpenGL-capable devices.

If you have an NVIDIA device, such as a GeForce or Quadro graphics card, you should also activate the NVIDIA extensions with -DENABLE_NVIDIA_EXT=YES to get information about the program's current GPU memory usage.

After these setup steps, you can simply build with catkin build, since the configuration has been saved to your current Catkin profile (which is why --save-config was needed).
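
Because the configuration is stored in the profile, subsequent rebuilds (assuming the same workspace) reduce to:

cd ~/catkin_ws
catkin build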

Now the project root directory (e.g. ~/catkin_ws/src/semantic_suma) should contain a bin directory with the visualizer binary.

How to run and use it?

Important Notice

  • Before running SuMa++, you first need to build rangenet_lib and download the pretrained model.
  • You need to specify the model path in the configuration file in the config/ folder (a hypothetical sketch follows this list).
  • On first use, rangenet_lib will take several minutes to build a .trt model for SuMa++.
  • SuMa++ currently works only with the KITTI dataset.
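
As a purely hypothetical illustration of the second point (the actual file and key names under config/ may differ; check the configuration file shipped with the repository):

# hypothetical excerpt of a file in config/ -- the key name is illustrative only
model_path: /path/to/the/downloaded/pretrained/model/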

All binaries are copied to the bin directory of the source folder of the project. Thus,

  1. run the visualizer from the bin directory via ./visualizer,
  2. open a Velodyne directory from the KITTI Visual Odometry Benchmark and select a ".bin" file,
  3. start the processing of the scans via the "play button" in the GUI.
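
Putting these steps together, a minimal session might look like this (assuming the example workspace location from above):

cd ~/catkin_ws/src/semantic_suma/bin
./visualizer

Then open a KITTI Velodyne directory via the GUI and press the play button.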

License

Copyright 2019, Xieyuanli Chen, Jens Behley, Cyrill Stachniss, Photogrammetry and Robotics Lab, University of Bonn.

This project is free software made available under the MIT License. For details see the LICENSE file.
