apennisi / rgbd_person_tracking

Licence: GPL-3.0 license


R-GBD Person Tracking

R-GBD Person Tracking (RGPT) is a ROS framework for detecting and tracking people from a mobile robot.

Requirements

RGPT requires the following packages to build:

  • OpenCV
  • Boost
  • PCL
  • ROS Indigo
  • OpenMP

How to build

RGPT works under Ubuntu 14.04 and ROS Indigo. To build the source, clone the repository into your catkin workspace and then run the following commands:

  • rospack profile
  • catkin_make
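The full sequence might look like this (the workspace path `~/catkin_ws` is an assumption; adjust it to your setup):

```shell
# Clone into the src folder of your catkin workspace (path is an assumption)
cd ~/catkin_ws/src
git clone https://github.com/apennisi/rgbd_person_tracking.git
cd ~/catkin_ws

# Re-index ROS packages, then build the workspace
rospack profile
catkin_make
```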

How to setup

INPUT: Check the file people_detection_complete.launch inside the folder people_detection/launch

Example:
<launch>
	<arg name="prefix" value="/top_camera" />
	<node name="ground_detector" pkg="ground_detector" type="ground_detector_node" output="screen">
		<param name="theta" value="12"/> <!-- xtion tilt angle -->
		<param name="ty" value="1.5"/> <!-- xtion y translation -->
		<param name="debug" value="false"/> <!-- show the segmentation output -->
		<param name="groundThreshold" value="0.05" /> <!-- points below this threshold are considered ground --> 
		<param name="voxel_size" value="0.06" /> <!-- voxel size -->
		<param name="min_height" value="1.0" /> <!-- min blob height -->
		<param name="max_height" value="2.0" /> <!-- max blob height -->
		<param name="min_head_distance" value="0.3" /> <!-- min distance between two heads -->
		<param name="sampling_factor" value="3" /> <!-- sampling cloud factor -->
		<param name="apply_denoising" value="false" /> 
		<param name="mean_k_denoising" value="5" /> <!-- meanK for denoising (the higher it is, the stronger is the filtering) -->
		<param name="std_dev_denoising" value="0.3" /> <!-- standard deviation for denoising (the lower it is, the stronger is the filtering) -->
		<param name="max_distance" value="6" /> <!-- maximum detection range in meters -->
		<param name="depth_topic" value="$(arg prefix)/depth/image_raw" />
		<param name="camera_info_topic" value="$(arg prefix)/depth/camera_info" />
		<param name="rgb_topic" value="$(arg prefix)/rgb/image_raw" />			
	</node>
	
	<node name="dispatcher_node" pkg="dispatcher_node" type="dispatcher_node" output="screen">
		<param name="min" value="1.5"/> <!-- min value for tracking only the face -->
	</node>
	
	<node name="people_detector" pkg="people_detection" type="people_detection_node" output="screen">
		<param name="dataset" value="$(find people_detection)/config/inria_detector.xml"/> <!-- dataset filename -->
		<param name="confidence" value="65."/> <!--min confidence for considering the blob as a person -->
		<param name="image_scaling_factor" value="1.5"/> <!-- scaling factor for image detection (increasing it speeds up detection but reduces precision) -->
	</node>
	
	<node name="visual_tracker" pkg="visual_tracker" type="visual_tracker" output="screen">
		<param name="image_scaling_factor" value="1.5"/> <!-- scaling factor for image detection (increasing it speeds up detection but reduces precision) -->
	</node>
	
</launch>

OUTPUT: topic /tracks, message type Traks, a vector containing:

  • int32 id
  • Point2i point2D
  • Point3D point3D
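While the system is running, you can inspect the published tracks with the standard ROS command-line tools:

```shell
# Print incoming messages on the /tracks topic to the console
rostopic echo /tracks

# Check the publishing rate and the connected publisher/subscriber nodes
rostopic hz /tracks
rostopic info /tracks
```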

How to use

Once the build has completed successfully, you can run RGPT with the following command:

  • roslaunch people_detection people_detection_complete.launch

How it works:

RGPT is divided into three steps:

  • RGBD Segmentation
  • People Detection
  • People Tracking

You can find more information in the paper: COACHES: An Assistance Multi-Robot System in Public Areas [link]
