TurtleZhong / LVIO-SAM

A Multi-sensor Fusion Odometry via Smoothing and Mapping.


Projects that are alternatives to or similar to LVIO-SAM

Door Slam
Distributed, Online, and Outlier Resilient SLAM for Robotic Teams
Stars: ✭ 107 (-25.17%)
Mutual labels:  mapping, slam
ros-vrep-slam
ROS and V-REP for Robot Mapping and Localization
Stars: ✭ 39 (-72.73%)
Mutual labels:  mapping, slam
Awesome Robotic Tooling
Tooling for professional robotic development in C++ and Python with a touch of ROS, autonomous driving and aerospace.
Stars: ✭ 1,876 (+1211.89%)
Mutual labels:  mapping, slam
python-graphslam
Graph SLAM solver in Python
Stars: ✭ 118 (-17.48%)
Mutual labels:  mapping, slam
GA SLAM
🚀 SLAM for autonomous planetary rovers with global localization
Stars: ✭ 40 (-72.03%)
Mutual labels:  mapping, slam
Evo
Python package for the evaluation of odometry and SLAM
Stars: ✭ 1,373 (+860.14%)
Mutual labels:  mapping, slam
Pythonrobotics
Python sample codes for robotics algorithms.
Stars: ✭ 13,934 (+9644.06%)
Mutual labels:  mapping, slam
Loam velodyne
Laser Odometry and Mapping (Loam) is a realtime method for state estimation and mapping using a 3D lidar.
Stars: ✭ 1,135 (+693.71%)
Mutual labels:  mapping, slam
JuliaAutonomy
Julia sample codes for Autonomy, Robotics and Self-Driving Algorithms.
Stars: ✭ 21 (-85.31%)
Mutual labels:  mapping, slam
omnimapper
A Modular Multimodal Mapping Framework
Stars: ✭ 86 (-39.86%)
Mutual labels:  mapping, slam
Eao Slam
[IROS 2020] EAO-SLAM: Monocular Semi-Dense Object SLAM Based on Ensemble Data Association
Stars: ✭ 95 (-33.57%)
Mutual labels:  mapping, slam
slam gmapping
Slam Gmapping for ROS2
Stars: ✭ 56 (-60.84%)
Mutual labels:  mapping, slam
Mrpt slam
ROS wrappers for SLAM algorithms in MRPT
Stars: ✭ 84 (-41.26%)
Mutual labels:  mapping, slam
Rtabmap
RTAB-Map library and standalone application
Stars: ✭ 1,376 (+862.24%)
Mutual labels:  mapping, slam
Lego Loam
LeGO-LOAM: Lightweight and Ground-Optimized Lidar Odometry and Mapping on Variable Terrain
Stars: ✭ 1,138 (+695.8%)
Mutual labels:  mapping, slam
Maplab
An open visual-inertial mapping framework.
Stars: ✭ 1,722 (+1104.2%)
Mutual labels:  mapping, slam
Kimera Vio
Visual Inertial Odometry with SLAM capabilities and 3D Mesh generation.
Stars: ✭ 741 (+418.18%)
Mutual labels:  mapping, slam
Visma
Visual-Inertial-Semantic-MApping Dataset and tools
Stars: ✭ 54 (-62.24%)
Mutual labels:  mapping, slam
Iris lama
LaMa - A Localization and Mapping library
Stars: ✭ 217 (+51.75%)
Mutual labels:  mapping, slam
lt-mapper
A Modular Framework for LiDAR-based Lifelong Mapping
Stars: ✭ 301 (+110.49%)
Mutual labels:  mapping, slam


LVIO-SAM

A multi-sensor fusion odometry, LVIO-SAM, which fuses LiDAR, stereo camera and inertial measurement unit (IMU) via smoothing and mapping.
Demo Youtube · Demo Bilibili · Report Bug · Request Feature

Table of Contents
  1. About The Project
  2. Simulation environment
  3. How to run
  4. Roadmap
  5. Contributing
  6. License
  7. Contact
  8. Acknowledgements

About The Project

   This project provides a multi-sensor fusion odometry, LVIO-SAM, which fuses LiDAR, stereo camera, and inertial measurement unit (IMU) data via smoothing and mapping.

!!!Important Notes!!!

  • The code is still being integrated; we will release it in the future.

Simulation environment

   We modified the Gazebo world proposed here and added our own sensors to test the proposed method. We use Husky as the base robot and modified its URDF. The robot is equipped with a Velodyne VLP-16 LiDAR, a stereo camera (640x480), and an IMU (50 Hz).

Download the CMU campus model to sim_env/husky_gazebo/mesh/

cd YOUR_PATH/LVIO-SAM/sim_env/husky_gazebo/mesh/
unzip autonomus_exploration_environments.zip

Copy the campus model to ~/.gazebo/models/.

cd autonomus_exploration_environments/
cp -r campus ~/.gazebo/models/

You can launch Gazebo and look for the campus model to check that it loads correctly.
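Alternatively, a quick check from the command line: list the copied model directory (the expected file names below follow the standard Gazebo model layout, which we assume the campus model uses).

ls ~/.gazebo/models/campus
# A valid Gazebo model directory normally contains model.config and one or more .sdf files.

Next, clone and build the workspace: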

git clone https://github.com/TurtleZhong/LVIO-SAM.git

cd YOUR_PATH/LVIO-SAM
catkin build -DCMAKE_BUILD_TYPE=Release
source devel/setup.bash
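If the build fails because of missing ROS packages, resolving dependencies with rosdep is a common fix (rosdep is a standard ROS tool; this assumes it has already been initialized on your machine):

cd YOUR_PATH/LVIO-SAM
rosdep install --from-paths . --ignore-src -r -y   # install system dependencies for all packages found here

Then launch the simulation world: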

roslaunch husky_gazebo husky_campus.launch
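Gazebo searches ~/.gazebo/models by default, but if the world still cannot find the campus model you can point Gazebo at it explicitly (GAZEBO_MODEL_PATH is a standard Gazebo environment variable, not something specific to this repo):

export GAZEBO_MODEL_PATH=$HOME/.gazebo/models:$GAZEBO_MODEL_PATH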

It will take a few minutes to load the world. Please start a new terminal and launch the Husky and sensor model:

roslaunch husky_gazebo spawn_husky.launch

If everything is OK, you will get this:

[screenshot]
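To confirm the simulated sensors are actually publishing, you can check topic rates. The topic names below are guesses based on typical Velodyne and IMU Gazebo plugins; use rostopic list to find the actual names in this setup:

rostopic list                # list everything the spawned robot publishes
rostopic hz /points_raw      # LiDAR rate (assumed topic name)
rostopic hz /imu/data        # IMU rate, ~50 Hz if this is the right topic (assumed name)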

If you want to control the robot, you can use the keyboard (i, j, k, l, etc.):

rosrun teleop_twist_keyboard teleop_twist_keyboard.py

How to run in Docker

   Since our code is still being integrated, we will release it in the future. In the meantime, we provide a Docker environment for users, so Docker should be correctly installed.

  Step 1. Prepare Datasets

  1. KITTI datasets
wget https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_30_drive_0027/2011_09_30_drive_0027_sync.zip
wget https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_30_drive_0027/2011_09_30_drive_0027_extract.zip
wget https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_30_calib.zip
unzip 2011_09_30_drive_0027_sync.zip
unzip 2011_09_30_drive_0027_extract.zip
unzip 2011_09_30_calib.zip
python kitti2bag.py -t 2011_09_30 -r 0027 raw_synced .

That's it. You have a bag that contains your data.
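Note that kitti2bag.py is a third-party conversion script rather than part of this repository. If you do not already have it locally, one option (an assumption about your environment, not an official step of this project) is the pip package of the same name:

pip install kitti2bag
kitti2bag -t 2011_09_30 -r 0027 raw_synced .   # same arguments as the script invocation above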

╰─$ rosbag info kitti_2011_09_30_drive_0027_synced.bag 
path:        kitti_2011_09_30_drive_0027_synced.bag
version:     2.0
duration:    1:55s (115s)
start:       Sep 30 2011 12:40:25.07 (1317357625.07)
end:         Sep 30 2011 12:42:20.41 (1317357740.41)
size:        6.0 GB
messages:    35278
compression: none [4435/4435 chunks]
types:       geometry_msgs/TwistStamped [98d34b0043a2093cf9d9345ab6eef12e]
             sensor_msgs/CameraInfo     [c9a58c1b0b154e0e6da7578cb991d214]
             sensor_msgs/Image          [060021388200f6f0f447d0fcd9c64743]
             sensor_msgs/Imu            [6a62c6daae103f4ff57a132d6f95cec2]
             sensor_msgs/NavSatFix      [2d3a8cd499b9b4a0249fb98fd05cfa48]
             sensor_msgs/PointCloud2    [1158d486dd51d683ce2f1be655c3c181]
topics:      /gps/fix                                 1106 msgs    : sensor_msgs/NavSatFix     
             /gps/vel                                 1106 msgs    : geometry_msgs/TwistStamped
             /imu_correct                            11556 msgs    : sensor_msgs/Imu           
             /imu_raw                                11556 msgs    : sensor_msgs/Imu           
             /kitti/camera_color_left/camera_info     1106 msgs    : sensor_msgs/CameraInfo    
             /kitti/camera_color_left/image_raw       1106 msgs    : sensor_msgs/Image         
             /kitti/camera_color_right/camera_info    1106 msgs    : sensor_msgs/CameraInfo    
             /kitti/camera_color_right/image_raw      1106 msgs    : sensor_msgs/Image         
             /kitti/camera_gray_left/camera_info      1106 msgs    : sensor_msgs/CameraInfo    
             /kitti/camera_gray_left/image_raw        1106 msgs    : sensor_msgs/Image         
             /kitti/camera_gray_right/camera_info     1106 msgs    : sensor_msgs/CameraInfo    
             /kitti/camera_gray_right/image_raw       1106 msgs    : sensor_msgs/Image         
             /points_raw                              1106 msgs    : sensor_msgs/PointCloud2

Other source files can be found on the KITTI raw data page.

  2. sim_env datasets

You can record datasets from our simulation environment or download the sample dataset from the BaiduYun link (extraction code: f8to).
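If you record your own bag, rosbag record is the standard tool. The topic names below are placeholders; replace them with the sensor topics that rostopic list actually reports in the simulation:

# -O names the output file; the topic list is illustrative only.
rosbag record -O my_sim_dataset.bag /points_raw /imu_raw /left/image_raw /right/image_raw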

  Step 2. Get the Docker image and create your own datasets.

docker pull xinliangzhong/ubuntu-18.04-novnc-lvio-sam:v1

Use docker images to check that the image was pulled correctly.

docker run -it --rm -p 8080:80 xinliangzhong/ubuntu-18.04-novnc-lvio-sam:v1

Then open the Chrome browser and go to http://127.0.0.1:8080/.
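If you want datasets recorded on the host to be visible inside the container, you can add a bind mount to the docker run command above. The -v flag is standard Docker; the host and container paths here are just examples:

docker run -it --rm -p 8080:80 -v ~/datasets:/root/datasets xinliangzhong/ubuntu-18.04-novnc-lvio-sam:v1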

Open three terminals and run:

cd /root
source .bashrc
cd work/ws_lvio/
source devel/setup.bash

roslaunch husky_gazebo husky_campus.launch

It will take a few minutes to load the world. Please start a new terminal and launch the Husky and sensor model:

roslaunch husky_gazebo spawn_husky.launch

In the third terminal, launch the visualization:

roslaunch husky_viz view_robot.launch

If everything is OK, you will get this in your Chrome browser:

[screenshot]

Run LVIO-SAM in docker

Follow the steps above to get the Docker image, and open it in the browser:

[screenshot]

cd /root
source .bashrc
cd work/ws_lvio/
source devel/setup.bash

roslaunch lvio_sam run_kitti_debug_test_vo_between_factor.launch #for kitti dataset.
roslaunch lvio_sam run_kitti_debug_test_vo_between_factor.launch #for sim dataset.

We provide two sample bags in the Docker image; you can use them directly:

rosbag play kitti_2011_09_30_drive_0027_synced.bag --pause --clock #for kitti dataset.
rosbag play 2021-08-04-09-49-56.bag --pause --clock #for sim dataset.
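Both bags are played with --clock, so nodes should consume simulated time. If the launch files do not already set it, enabling use_sim_time before launching is the usual approach (a standard ROS parameter, not specific to LVIO-SAM); playback also starts paused, so press the space bar in the rosbag terminal to begin.

rosparam set use_sim_time true   # make nodes follow the bag's /clock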

If everything is OK, you will get this in your Chrome browser:

[screenshot]

Roadmap

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

License

Distributed under the MIT License.

Contact

Xinliang Zhong - @zxl - [email protected]

Project Link: https://github.com/TurtleZhong/LVIO-SAM

Citation

@inproceedings{zhong2021lvio,
  title={LVIO-SAM: A Multi-sensor Fusion Odometry via Smoothing and Mapping},
  author={Zhong, Xinliang and Li, Yuehua and Zhu, Shiqiang and Chen, Wenxuan and Li, Xiaoqian and Gu, Jason},
  booktitle={2021 IEEE International Conference on Robotics and Biomimetics (ROBIO)},
  pages={440--445},
  year={2021},
  organization={IEEE}
}

Acknowledgements
