PRBonn / Depth_clustering

License: MIT
🚕 Fast and robust clustering of point clouds generated with a Velodyne sensor.

Projects that are alternatives to or similar to Depth Clustering

Lidar camera calibration
Light-weight camera LiDAR calibration package for ROS using OpenCV and PCL (PnP + LM optimization)
Stars: ✭ 133 (-79.76%)
Mutual labels:  ros, point-cloud, lidar, pcl
Superpoint graph
Large-scale Point Cloud Semantic Segmentation with Superpoint Graphs
Stars: ✭ 533 (-18.87%)
Mutual labels:  point-cloud, lidar, segmentation, clustering
Point Cloud Filter
Scripts showcasing filtering techniques applied to point cloud data.
Stars: ✭ 34 (-94.82%)
Mutual labels:  ros, point-cloud, pcl
Loam velodyne
Laser Odometry and Mapping (Loam) is a realtime method for state estimation and mapping using a 3D lidar.
Stars: ✭ 1,135 (+72.75%)
Mutual labels:  ros, lidar, pcl
Interactive slam
Interactive Map Correction for 3D Graph SLAM
Stars: ✭ 372 (-43.38%)
Mutual labels:  ros, point-cloud, lidar
Pclpy
Python bindings for the Point Cloud Library (PCL)
Stars: ✭ 212 (-67.73%)
Mutual labels:  point-cloud, lidar, pcl
Lidar camera calibration
ROS package to find a rigid-body transformation between a LiDAR and a camera for "LiDAR-Camera Calibration using 3D-3D Point correspondences"
Stars: ✭ 734 (+11.72%)
Mutual labels:  ros, point-cloud, lidar
Hdl localization
Real-time 3D localization using a (velodyne) 3D LIDAR
Stars: ✭ 332 (-49.47%)
Mutual labels:  ros, lidar, real-time
Awesome Robotic Tooling
Tooling for professional robotic development in C++ and Python with a touch of ROS, autonomous driving and aerospace.
Stars: ✭ 1,876 (+185.54%)
Mutual labels:  ros, point-cloud, lidar
Adaptive clustering
Lightweight and Accurate Point Cloud Clustering
Stars: ✭ 125 (-80.97%)
Mutual labels:  ros, point-cloud, clustering
cloud to map
Algorithm that converts point cloud data into an occupancy grid
Stars: ✭ 26 (-96.04%)
Mutual labels:  point-cloud, ros, pcl
Ndt omp
Multi-threaded and SSE friendly NDT algorithm
Stars: ✭ 291 (-55.71%)
Mutual labels:  ros, point-cloud, pcl
Vision3d
Research platform for 3D object detection in PyTorch.
Stars: ✭ 177 (-73.06%)
Mutual labels:  point-cloud, lidar, real-time
Hdl graph slam
3D LIDAR-based Graph SLAM
Stars: ✭ 945 (+43.84%)
Mutual labels:  ros, point-cloud, lidar
Pcl Ros Cluster Segmentation
Cluster based segmentation of Point Cloud with PCL lib in ROS
Stars: ✭ 123 (-81.28%)
Mutual labels:  ros, point-cloud, pcl
urban road filter
Real-time LIDAR-based Urban Road and Sidewalk detection for Autonomous Vehicles 🚗
Stars: ✭ 134 (-79.6%)
Mutual labels:  point-cloud, ros, lidar
Cilantro
A lean C++ library for working with point cloud data
Stars: ✭ 577 (-12.18%)
Mutual labels:  point-cloud, segmentation, clustering
Apc Vision Toolbox
MIT-Princeton Vision Toolbox for the Amazon Picking Challenge 2016 - RGB-D ConvNet-based object segmentation and 6D object pose estimation.
Stars: ✭ 277 (-57.84%)
Mutual labels:  ros, segmentation
Pointnet
PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation
Stars: ✭ 3,517 (+435.31%)
Mutual labels:  point-cloud, segmentation
Fast gicp
A collection of GICP-based fast point cloud registration algorithms
Stars: ✭ 307 (-53.27%)
Mutual labels:  point-cloud, pcl

Depth Clustering

This is a fast and robust algorithm for segmenting point clouds taken with a Velodyne sensor into objects. It works with all available Velodyne sensors, i.e. the 16-, 32-, and 64-beam ones.
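
The core idea behind the method (detailed in the publications listed at the end of this readme) is to project each scan onto a cylindrical range image and to grow object labels between neighboring pixels whose measurements are likely to lie on the same surface. Below is a minimal, self-contained C++ sketch of that angle-based criterion; the names Beta and LabelComponent are illustrative only and do not correspond to this library's actual API:

#include <algorithm>
#include <cmath>
#include <queue>
#include <utility>
#include <vector>

// Angle between the line connecting two neighboring measurements and the
// longer of the two laser beams; large values suggest the same object.
// alpha is the angular step between the two beams.
double Beta(double range_a, double range_b, double alpha) {
  const double d1 = std::max(range_a, range_b);
  const double d2 = std::min(range_a, range_b);
  return std::atan2(d2 * std::sin(alpha), d1 - d2 * std::cos(alpha));
}

// Breadth-first label propagation over the range image: a neighbor joins
// the current component if beta exceeds the threshold theta (e.g. 10 deg).
// For simplicity a single angular step alpha is used here; in practice the
// horizontal and vertical resolutions of the sensor differ.
void LabelComponent(const std::vector<std::vector<double>>& range,
                    std::vector<std::vector<int>>* labels,
                    int start_row, int start_col, int label,
                    double alpha, double theta) {
  const int rows = range.size();
  const int cols = range[0].size();
  std::queue<std::pair<int, int>> queue;
  queue.push({start_row, start_col});
  while (!queue.empty()) {
    const auto [r, c] = queue.front();
    queue.pop();
    if ((*labels)[r][c] != 0) continue;  // already visited
    (*labels)[r][c] = label;
    const int dr[] = {-1, 1, 0, 0};
    const int dc[] = {0, 0, -1, 1};
    for (int i = 0; i < 4; ++i) {
      const int nr = r + dr[i];
      const int nc = (c + dc[i] + cols) % cols;  // image wraps around horizontally
      if (nr < 0 || nr >= rows) continue;
      if ((*labels)[nr][nc] != 0 || range[nr][nc] <= 0.0) continue;  // labeled or no return
      if (Beta(range[r][c], range[nr][nc], alpha) > theta) queue.push({nr, nc});
    }
  }
}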

Check out a video that shows all objects outlined in orange: Segmentation illustration

Prerequisites

I recommend using a virtual environment in your catkin workspace (<catkin_ws> throughout this readme) and will assume that you have one set up; please adapt the commands if you don't. I will be using pipenv, which you can install with pip.

Set up workspace and catkin

Regardless of your system you will need to do the following steps:

cd <catkin_ws>            # navigate to the workspace
pipenv shell --fancy      # start a virtual environment
pip install catkin-tools  # install catkin-tools for building
mkdir src                 # create src dir if you don't have it already
# Now you just need to clone the repo:
git clone https://github.com/PRBonn/depth_clustering src/depth_clustering

System requirements

You will need OpenCV, QGLViewer, FreeGLUT, Qt4 or Qt5, and optionally PCL and/or ROS. The following sections contain installation commands for various Ubuntu versions (click the folds to expand):

Ubuntu 14.04

Install these packages:

sudo apt install libopencv-dev libqglviewer-dev freeglut3-dev libqt4-dev
Ubuntu 16.04

Install these packages:

sudo apt install libopencv-dev libqglviewer-dev freeglut3-dev qtbase5-dev
Ubuntu 18.04

Install these packages:

sudo apt install libopencv-dev libqglviewer-dev-qt5 freeglut3-dev qtbase5-dev 

Optional requirements

If you want to work with PCL clouds and/or use ROS for data acquisition, you can additionally install the following (a sketch of typical usage follows the list):

  • (optional) PCL - needed for saving clouds to disk
  • (optional) ROS - needed for subscribing to topics
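
As a rough illustration of what the optional dependencies are used for, here is a hypothetical node (not this package's code) that receives clouds over ROS and writes them to disk with PCL; the topic name /velodyne_points is the Velodyne driver's conventional default:

#include <pcl/io/pcd_io.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl_conversions/pcl_conversions.h>
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>

void CloudCallback(const sensor_msgs::PointCloud2ConstPtr& msg) {
  pcl::PointCloud<pcl::PointXYZ> cloud;
  pcl::fromROSMsg(*msg, cloud);  // convert the ROS message into a PCL cloud
  // ... segmentation would happen here; then save the result to disk:
  pcl::io::savePCDFileBinary("cloud.pcd", cloud);
}

int main(int argc, char** argv) {
  ros::init(argc, argv, "cloud_saver");
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("/velodyne_points", 1, CloudCallback);
  ros::spin();
  return 0;
}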

How to build?

This is a catkin package, so we assume that the code lives in a catkin workspace and that CMake knows about catkin's existence. This should already be taken care of if you followed the instructions above. Then you can build from the project folder:

mkdir build
cd build
cmake ..
make -j4
ctest -VV  # run unit tests, optional

It can also be built with catkin_tools if the code is inside catkin workspace:

catkin build depth_clustering

P.S. In case you don't use catkin build, you should reconsider your decision.

How to run?

See the examples. There are ROS nodes as well as standalone binaries. The examples include showing axis-oriented bounding boxes around the detected objects (these binaries start with the show_objects_ prefix) as well as a node that saves all segments to disk. The examples should be easy to tweak for your needs.
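
For reference, extracting an axis-oriented bounding box from a segment boils down to taking per-axis minima and maxima over its points; a tiny illustrative sketch (not the project's code):

#include <algorithm>
#include <limits>
#include <vector>

struct Point3 { float x, y, z; };
struct Aabb { Point3 min, max; };

// Per-axis min/max over the segment's points; returns an inverted
// "infinite" box for an empty segment.
Aabb BoundingBox(const std::vector<Point3>& segment) {
  const float inf = std::numeric_limits<float>::infinity();
  Aabb box{{inf, inf, inf}, {-inf, -inf, -inf}};
  for (const auto& p : segment) {
    box.min.x = std::min(box.min.x, p.x);
    box.min.y = std::min(box.min.y, p.y);
    box.min.z = std::min(box.min.z, p.z);
    box.max.x = std::max(box.max.x, p.x);
    box.max.y = std::max(box.max.y, p.y);
    box.max.z = std::max(box.max.z, p.z);
  }
  return box;
}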

Run on real world data

Go to the folder with the binaries:

cd <path_to_project>/build/devel/lib/depth_clustering

Frank Moosmann's "Velodyne SLAM" Dataset

Get the data:

mkdir data/
wget http://www.mrt.kit.edu/z/publ/download/velodyneslam/data/scenario1.zip -O data/moosmann.zip
unzip data/moosmann.zip -d data/
rm data/moosmann.zip

Run a binary to show detected objects:

./show_objects_moosmann --path data/scenario1/

Alternatively, you can run on the data from the Qt GUI (as in the video):

./qt_gui_app

Once the GUI is shown, click the OpenFolder button and choose the folder into which you have unpacked the png files, e.g. data/scenario1/. Navigate the viewer with the arrow keys and the controls shown on screen.

Other data

There are also examples of how to run the processing on KITTI data and on ROS input. See the --help output of each example for more details.

You can also load the data from the GUI. Make sure you are loading files with the correct extension (*.txt and *.bin for KITTI, *.png for Moosmann's data).
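
For context, KITTI's velodyne *.bin files store each point as four consecutive float32 values (x, y, z, reflectance); a minimal illustrative reader (not this project's loader) looks like this:

#include <cstdio>
#include <vector>

struct KittiPoint { float x, y, z, reflectance; };

// Read the whole file as a flat sequence of float32 quadruples.
std::vector<KittiPoint> ReadKittiBin(const char* path) {
  std::vector<KittiPoint> points;
  if (FILE* file = std::fopen(path, "rb")) {
    KittiPoint p;
    while (std::fread(&p, sizeof(KittiPoint), 1, file) == 1) points.push_back(p);
    std::fclose(file);
  }
  return points;
}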

Documentation

You should be able to get Doxygen documentation by running:

cd doc/
doxygen Doxyfile.conf

Related publications

Please cite the related papers if you use this code:

@InProceedings{bogoslavskyi16iros,
  title     = {Fast Range Image-Based Segmentation of Sparse 3D Laser Scans for Online Operation},
  author    = {I. Bogoslavskyi and C. Stachniss},
  booktitle = {Proc. of The International Conference on Intelligent Robots and Systems (IROS)},
  year      = {2016},
  url       = {http://www.ipb.uni-bonn.de/pdfs/bogoslavskyi16iros.pdf}
}

@Article{bogoslavskyi17pfg,
  title   = {Efficient Online Segmentation for Sparse 3D Laser Scans},
  author  = {I. Bogoslavskyi and C. Stachniss},
  journal = {PFG -- Journal of Photogrammetry, Remote Sensing and Geoinformation Science},
  year    = {2017},
  pages   = {1--12},
  url     = {https://link.springer.com/article/10.1007%2Fs41064-016-0003-y},
}