costmap_depth_camera

This is a costmap plugin for the costmap_2d package. It supports multiple depth cameras and runs in real time.

License: BSD-2-Clause

Because the ray-casting method cannot handle the clearing problem in a sparse 3D space, this plugin uses a kd-tree search to clear the markings instead.

This plugin comprises two parts (see the sketch below):

  1. Marking the point cloud and storing it in a std::map.
  2. Clearing the marked point cloud using a kd-tree search from the Point Cloud Library (PCL).
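A minimal sketch of the marking side, assuming the std::map is keyed by a voxel index derived from voxel_resolution (the key layout and names here are hypothetical, not the plugin's actual data structure):

#include <cmath>
#include <map>
#include <tuple>

// Hypothetical voxel key: integer grid coordinates at voxel_resolution.
using VoxelKey = std::tuple<int, int, int>;

struct Point { float x, y, z; };

VoxelKey toKey(const Point& p, float voxel_resolution) {
  return VoxelKey{static_cast<int>(std::floor(p.x / voxel_resolution)),
                  static_cast<int>(std::floor(p.y / voxel_resolution)),
                  static_cast<int>(std::floor(p.z / voxel_resolution))};
}

int main() {
  std::map<VoxelKey, Point> markings;  // marked obstacle points
  Point p{0.52f, -0.13f, 0.20f};
  markings[toKey(p, 0.01f)] = p;       // mark: at most one point per voxel
  markings.erase(toKey(p, 0.01f));     // clear: remove by voxel key
}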

Required package

costmap_2d

Demo: using a RealSense with its point cloud published

Launch Example

  • Note that costmap_depth_camera.launch subscribes to IMU data from a RealSense D435i and creates a TF from map-->base_link. Change whatever is needed to meet your system requirements. If you already have a point cloud published, a RealSense is not needed.
  • Note that if you are using ros-noetic, the first line of imu_tf.py is python3, which is fine. If you are using ros-melodic, change it to python2, e.g. #!/usr/bin/env python3 --> #!/usr/bin/env python2
roslaunch realsense2_camera rs_camera.launch
roslaunch costmap_depth_camera costmap_depth_camera.launch 

A .yaml example of a working configuration:

costmap:
  global_frame: map
  robot_base_frame: base_link
  update_frequency: 5.0
  publish_frequency: 5
  static_map: false
  rolling_window: true
  width: 20
  height: 20
  resolution: 0.05  
  transform_tolerance: 1.0
  footprint: [[0.46,0.25], [0.36,0.37], [0.33,0.39], [-0.39,0.4], [-0.48,0], [-0.39,-0.4], [0.33,-0.39], [0.36,-0.37], [0.46,-0.25], [0.49,0]]

  plugins:
    - {name: static, type: 'costmap_2d::StaticLayer'}
    - {name: 3DPerception, type: 'costmap_depth_camera::DepthCameraObstacleLayer'}
    - {name: inflation, type: 'costmap_2d::InflationLayer'}

  static:
    map_topic: /map

  inflation:
    inflation_radius: 1.0
    cost_scaling_factor: 0.5

  3DPerception:
    use_global_frame_to_mark: true
    ec_seg_distance: 0.2
    ec_cluster_min_size: 5
    size_of_cluster_rejection: 5
    voxel_resolution: 0.01
    check_radius: 0.1
    enable_near_blocked_protection: false
    number_points_considered_as_blocked: 5
    forced_clearing_distance: 0.1
    observation_sources: depth_cam #depth_cam_left
    depth_cam: {sensor_frame: camera_link, topic: /camera/depth/color/points, expected_update_rate: 0.3, FOV_W: 1.0, FOV_V: 0.9, min_detect_distance: 0.15, max_detect_distance: 2.0, min_obstacle_height: 0.08}
    #depth_cam_left: {sensor_frame: camera_link_left, topic: /camera_left/depth/color/points, expected_update_rate: 0.3, FOV_W: 1.0, FOV_V: 0.9, min_detect_distance: 0.15, max_detect_distance: 2.0, min_obstacle_height: 0.08}

Parameter descriptions (many of them are supported in dynamic reconfigure)

  • use_global_frame_to_mark: true

This is a new feature in version 2.0.

In the left video below (use_global_frame_to_mark: false), you can see that as the robot moves, the markings move with the local frame: when the RealSense turns from right to left, the markings on the right move along with the robot.

In the right video (use_global_frame_to_mark: true), when the RealSense turns from right to left, the markings on the right stay at their original marked locations.

I suggest setting use_global_frame_to_mark to true if you have a good localization system.
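As a hedged illustration of what marking in the global frame implies (the plugin's internals may differ), the sensor cloud can be transformed into the global frame via TF before it is stored, so markings stay fixed in the world while the robot moves:

#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl_ros/transforms.h>
#include <tf/transform_listener.h>

// Transform the sensor cloud into the global frame before marking, so the
// markings stay fixed in the world while the robot moves (sketch only).
bool markInGlobalFrame(const pcl::PointCloud<pcl::PointXYZ>& sensor_cloud,
                       const tf::TransformListener& tf_listener,
                       pcl::PointCloud<pcl::PointXYZ>& global_cloud) {
  // Looks up sensor_frame -> map in TF; sensor_cloud.header must be set.
  return pcl_ros::transformPointCloud("map", sensor_cloud, global_cloud,
                                      tf_listener);
}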

  • ec_seg_distance: 0.2
  • ec_cluster_min_size: 5
  • size_of_cluster_rejection: 5

These three parameters implement Euclidean Cluster Extraction. ec_seg_distance is useful when an object is black and small: if it is connected to a bigger object within this distance, size_of_cluster_rejection will not reject that small black piece.

size_of_cluster_rejection is used to ignore small objects in the space. Even though ec_cluster_min_size is set to 5, Euclidean Cluster Extraction can still output clusters smaller than 5 points, so this parameter is used to ignore them (they will not be added to the markings). A sketch follows.
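A minimal sketch of how these three parameters could map onto PCL's Euclidean Cluster Extraction (the parameter-to-setter mapping is my assumption, not taken from the plugin source):

#include <vector>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/extract_clusters.h>

// Cluster the incoming cloud and drop clusters smaller than
// size_of_cluster_rejection before marking (sketch, not the plugin code).
void clusterAndReject(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud,
                      double ec_seg_distance,        // e.g. 0.2
                      int ec_cluster_min_size,       // e.g. 5
                      int size_of_cluster_rejection, // e.g. 5
                      std::vector<pcl::PointIndices>& kept_clusters) {
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(
      new pcl::search::KdTree<pcl::PointXYZ>);
  tree->setInputCloud(cloud);

  std::vector<pcl::PointIndices> clusters;
  pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;
  ec.setClusterTolerance(ec_seg_distance);
  ec.setMinClusterSize(ec_cluster_min_size);
  ec.setSearchMethod(tree);
  ec.setInputCloud(cloud);
  ec.extract(clusters);

  // Undersized clusters can still come through, so reject them here.
  for (const auto& c : clusters)
    if (static_cast<int>(c.indices.size()) >= size_of_cluster_rejection)
      kept_clusters.push_back(c);
}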

  • voxel_resolution: 0.01

Usually, a 1 cm voxel height is enough.

  • number_clearing_threshold: 2
  • check_radius: 0.1

A kd-tree radius search around each marked point is used to find its neighbors in the current observation; if the number of neighbors is lower than number_clearing_threshold, the marked point is cleared. For example, a marked point that has only one neighbor within 0.1 m will be cleared.
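A sketch of this clearing test using PCL's kd-tree radius search, assuming the kd-tree has been built over the current observation cloud (the helper name and exact flow are illustrative):

#include <vector>
#include <pcl/kdtree/kdtree_flann.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// For each marked point, count current-observation neighbors within
// check_radius; too few supporters means the obstacle is gone (sketch).
bool shouldClear(const pcl::KdTreeFLANN<pcl::PointXYZ>& kdtree,
                 const pcl::PointXYZ& marked_point,
                 double check_radius,              // e.g. 0.1
                 int number_clearing_threshold) {  // e.g. 2
  std::vector<int> indices;
  std::vector<float> sqr_distances;
  int found = kdtree.radiusSearch(marked_point, check_radius,
                                  indices, sqr_distances);
  return found < number_clearing_threshold;  // fewer neighbors -> clear
}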

  • forced_clearing_distance: 0.1

If a point appears within this distance, it will be cleared. The distance is measured from the point to robot_base_frame. For example, a noise point that appears inside the robot footprint will be cleared.
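A tiny sketch of this check, assuming the distance is the straight-line distance from a point already expressed in robot_base_frame to that frame's origin:

#include <cmath>

// A marked point (in robot_base_frame coordinates) is cleared
// unconditionally if it lies closer than forced_clearing_distance (sketch).
bool forcedClear(float x, float y, float z, float forced_clearing_distance) {
  return std::sqrt(x * x + y * y + z * z) < forced_clearing_distance;
}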

  • number_points_considered_as_blocked: 5
  • enable_near_blocked_protection: true

If the number of points in the point cloud is less than number_points_considered_as_blocked, the clearing mechanism is skipped. For example, my robot can always see the floor, so the point cloud always contains points. If a new message arrives with only a few points, we simply assume the depth camera is blocked by something (mostly a person's hand; people like to wave their hands in front of the robot). When the depth camera detects nothing it would otherwise clear all markings in the frustum, so enabling this protection prevents that false clearing.
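The guard itself is simple; a sketch of the idea (names are illustrative):

#include <cstddef>

// Skip the whole clearing step when the incoming cloud is suspiciously
// small, e.g. when a hand blocks the camera (sketch).
bool isLikelyBlocked(std::size_t cloud_size,
                     std::size_t number_points_considered_as_blocked) {
  return cloud_size < number_points_considered_as_blocked;
}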

TF requirement

Make sure the sensor_frame in the yaml file follows this convention (for a RealSense it is generally camera_link): the z axis points to the sky and the x axis points in the direction of motion. The orange points show the frustum of my camera.

Acknowledgements

Many thanks to Uniring MAX - the leader in robotic floor scrubbers
