Cube SLAM

This code contains two modes:

  1. Object SLAM integrated with ORB SLAM. See orb_object_slam. Online SLAM with ros bag input; it reads the offline detected 3D objects.
  2. Basic implementation for cube-only SLAM. See object_slam, which is the main package. Given RGB images and 2D object detections, the algorithm detects 3D cuboids in each frame, then formulates an object SLAM problem to optimize both camera poses and cuboid poses (a toy sketch of this camera-object measurement follows this list). detect_3d_cuboid is the C++ version of the single-image cuboid detection, corresponding to a Matlab version.
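For intuition on the second mode, here is a minimal, hypothetical sketch (not this repo's API) of the kind of camera-object measurement error that such an object SLAM minimizes; poses are simplified to planar SE(2) (x, y, yaw) purely for brevity, whereas the actual system optimizes full SE(3) poses.

# Hypothetical sketch, not code from this repo: a toy camera-object
# measurement residual of the kind an object SLAM minimizes.
import numpy as np

def se2(x, y, yaw):
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0., 0., 1.]])

def cam_object_error(T_world_cam, T_world_obj, T_cam_obj_measured):
    # Predicted camera->object transform from the current state estimate.
    T_cam_obj_pred = np.linalg.inv(T_world_cam) @ T_world_obj
    # Relative error w.r.t. the single-frame cuboid detection.
    E = np.linalg.inv(T_cam_obj_measured) @ T_cam_obj_pred
    return np.array([E[0, 2], E[1, 2], np.arctan2(E[1, 0], E[0, 0])])

# Zero residual: camera pose, object pose and detection are consistent.
print(cam_object_error(se2(0, 0, 0), se2(2, 1, 0.3), se2(2, 1, 0.3)))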

Author: Shichao Yang

Related Paper:

  • CubeSLAM: Monocular 3D Object SLAM, S. Yang, S. Scherer, IEEE Transactions on Robotics, 2019 (PDF)

If you use the code in your research work, please cite the above paper. Feel free to contact the authors if you have any further questions.
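For convenience, a BibTeX entry assembled from the citation above (only fields stated there are included):

@article{yang2019cubeslam,
  title   = {CubeSLAM: Monocular 3D Object SLAM},
  author  = {Yang, Shichao and Scherer, Sebastian},
  journal = {IEEE Transactions on Robotics},
  year    = {2019}
}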

Installation

Prerequisites

This code contains several ROS packages. We have tested it on ROS Indigo/Kinetic, Ubuntu 14.04/16.04, and OpenCV 2/3. Create a new ROS workspace or use an existing one.

mkdir -p ~/cubeslam_ws/src
cd ~/cubeslam_ws/src
catkin_init_workspace
git clone git@github.com:shichaoy/cube_slam.git
cd cube_slam

Compile the dependency g2o

sh install_dependenices.sh

Compile

cd ~/cubeslam_ws
catkin_make -j4

Running

source devel/setup.bash
roslaunch object_slam object_slam_example.launch

You will see results in Rviz. The default rviz config file is for ROS Indigo; a Kinetic version is also provided.

To run the orb-object SLAM in the folder orb_object_slam, download the data and set the correct path in mono.launch, then run the following in two terminals:

roslaunch orb_object_slam mono.launch
rosbag play mono.bag --clock -r 0.5

To run the dynamic orb-object SLAM mentioned in the paper, download the data. Similar to above, set the correct path in mono_dynamic.launch, then run the launch file followed by the bag file.

If you meet compilation problems, please refer to ORB_SLAM.

Notes

  1. For the online orb object SLAM, we simply read the offline-detected 3D object txt for each image. Many other deep learning based 3D detectors could be used similarly, especially on KITTI data.

  2. In the launch file (object_slam_example.launch), if online_detect_mode=false, the code requires the Matlab-saved cuboid images, cuboid pose txts, and camera pose txts. If true, it reads the 2D object bounding box txt and then detects the 3D cuboid poses online using C++.

  3. object_slam/data/ contains all the preprocessed data:

     • depth_imgs/ is only used for visualization.
     • pred_3d_obj_overview/ contains the offline Matlab cuboid detection images.
     • detect_cuboids_saved.txt contains the offline cuboid poses in the local ground frame, in the format "3D position, 1D yaw, 3D scale, score".
     • pop_cam_poses_saved.txt contains the camera poses used to generate the offline cuboids (camera x/y/yaw = 0; ground-truth camera roll/pitch/height).
     • truth_cam_poses.txt is mainly used for visualization and comparison.
     • filter_2d_obj_txts/ contains the 2D object bounding box txts. We use YOLO to detect the 2D objects; other similar detectors can also be used. preprocessing/2D_object_detect is our prediction code to save the images and txts. Sometimes there are overlapping boxes of the same object instance, so some detections need to be filtered and cleaned; see filter_match_2d_boxes.m in our Matlab detection package. Hedged sketches of decoding the saved cuboid format and of such box filtering follow these notes.
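To make the detect_cuboids_saved.txt format concrete, here is a minimal sketch. It assumes each line is laid out as "x y z yaw sx sy sz score" and that the 3D scale gives half-dimensions; both assumptions should be verified against the actual file.

# Minimal sketch; assumes each line is "x y z yaw sx sy sz score"
# and treats the 3D scale as half-dimensions -- both are assumptions
# to verify against the actual detect_cuboids_saved.txt.
import numpy as np

def cuboid_corners(x, y, z, yaw, sx, sy, sz):
    # Yaw rotation about the vertical axis of the local ground frame.
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0.], [s, c, 0.], [0., 0., 1.]])
    # Eight corners of a centered box with half-dimensions (sx, sy, sz).
    signs = np.array([[i, j, k] for i in (-1, 1) for j in (-1, 1) for k in (-1, 1)])
    return (R @ (signs * [sx, sy, sz]).T).T + [x, y, z]

with open("object_slam/data/detect_cuboids_saved.txt") as f:
    for line in f:
        vals = [float(v) for v in line.split()]
        if len(vals) >= 8:
            *pose_and_scale, score = vals[:8]
            print(score, cuboid_corners(*pose_and_scale))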
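And a generic sketch of the kind of overlap filtering filter_match_2d_boxes.m performs. This is not a port of that script, just IoU-based duplicate suppression; the [x1, y1, x2, y2] box format is an assumption.

# Generic IoU-based duplicate suppression, as an illustration only;
# filter_match_2d_boxes.m in the Matlab package is the authoritative version.
# Boxes are assumed to be [x1, y1, x2, y2] with a detection score each.

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def filter_overlapping_boxes(boxes, scores, thresh=0.7):
    # Keep only the highest-scoring box among mutually overlapping detections.
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    kept = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < thresh for j in kept):
            kept.append(i)
    return [boxes[i] for i in kept]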
