douglasrizzo / Dodo_detector_ros

License: BSD-3-Clause
Object detection from images/point cloud using ROS


This ROS package creates an interface with dodo detector, a Python package that detects objects in images.

The package publishes information about detected objects on a topic, using a custom message type.

When an OpenNI-compatible sensor (such as a Kinect) is used, the package also uses point cloud information to locate the detected objects in the world, with respect to the sensor.

Click here for a YouTube video showcasing the package at work.

Installation

This repo is a ROS package, so it should be put alongside your other ROS packages inside the src directory of your catkin workspace.

The package depends mainly on a Python package, also created by me, called dodo detector. Check that project's README for a list of dependencies that are unrelated to ROS but required for object detection in Python.

Other ROS-related dependencies are listed in package.xml. If you want to use the provided launch files, you will need uvc_camera to start a webcam, freenect to access a Kinect for Xbox 360, or libfreenect2 and iai_kinect2 to start a Kinect for Xbox One.

If you use another kind of sensor, make sure it provides an image topic and, optionally, a point cloud topic; both will be needed later.

Usage

To use the package, first open the configuration file provided in config/main_config.yaml. These two global parameters must be configured for all types of detectors:

  • global_frame: the frame or tf in relation to which all object tfs will be published, e.g. map. Leave blank to publish tfs with respect to camera_link.
  • tf_prefix: a prefix for the object tfs which will be published by the package.
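As a sketch, the corresponding section of config/main_config.yaml might look like this (the values shown are illustrative, not defaults):

```yaml
# Frame in relation to which object tfs are published; leave empty to use camera_link
global_frame: map
# Prefix prepended to every object tf published by the package (illustrative value)
tf_prefix: dodo_detector_ros
```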

Then, select which type of detector the package will use by setting the detector_type parameter. Acceptable values are sift, rootsift or tf.

tf uses the TensorFlow Object Detection API. It expects a label map and a directory containing the saved model. You can find trained and saved models here, or provide your own by training and then exporting a model. Once you have these files, configure the following parameters in config/main_config.yaml:

  • saved_model: path to the saved_model directory that is generated when an inference model is exported.
  • label_map: path to the label map (the .pbtxt file).
  • tf_confidence: confidence level to report objects as detected by the neural network, between 0 and 1.
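Putting the parameters above together, a tf-detector configuration might look like the following sketch (the paths and threshold are illustrative):

```yaml
detector_type: tf
# Path to the saved_model directory generated when the inference model was exported (illustrative path)
saved_model: /home/user/models/my_model/saved_model
# Label map (.pbtxt) that matches the model (illustrative path)
label_map: /home/user/models/my_model/label_map.pbtxt
# Minimum confidence, between 0 and 1, for a detection to be reported
tf_confidence: 0.5
```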

Take a look here to understand how these parameters are used by the backend.

If sift or rootsift are chosen, a keypoint object detector will be used. The following parameters must be set in config/main_config.yaml:

  • sift_min_pts: minimum number of points to consider an object as present in the scene.
  • sift_database_path: path to the database used by the keypoint object detector. Take a look here to understand how to set up the database directory.
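For the keypoint detectors, the configuration might look like this sketch (values are illustrative; see the dodo detector documentation for how the database directory must be laid out):

```yaml
detector_type: sift   # or rootsift
# Minimum number of matched keypoints to consider an object present (illustrative value)
sift_min_pts: 10
# Directory holding the keypoint database (illustrative path)
sift_database_path: /home/user/sift_database
```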

Start the package

After all this configuration, you are ready to start the package. Either create your own .launch file or use one of the files provided in the launch directory of the repo.

In your launch file, load the config/main_config.yaml file you just configured in the previous step and provide an image_topic parameter to the detector.py node of the dodo_detector_ros package. This is the image topic that the package will use as input to detect objects.

You can also provide a point_cloud_topic parameter, which the package will use to position the objects detected in the image_topic in 3D space by publishing a TF for each detected object.

launch file examples

The example below initializes a webcam feed using the uvc_camera package and detects objects from the image_raw topic:

<?xml version="1.0"?>
<launch>
    <node name="camera" output="screen" pkg="uvc_camera" type="uvc_camera_node"/>
    
    <node name="dodo_detector_ros" pkg="dodo_detector_ros" type="detector.py" output="screen">
        <rosparam command="load" file="$(find dodo_detector_ros)/config/main_config.yaml"/>
        <param name="image_topic" value="/image_raw" />
    </node>
</launch>

The example below initializes a Kinect using the freenect package and subscribes to /camera/rgb/image_color for images and /camera/depth/points for the point cloud:

<?xml version="1.0"?>
<launch>
    <include file="$(find freenect_launch)/launch/freenect.launch"/>
    
    <node name="dodo_detector_ros" pkg="dodo_detector_ros" type="detector.py" output="screen">
        <rosparam command="load" file="$(find dodo_detector_ros)/config/main_config.yaml"/>
        <param name="image_topic" value="/camera/rgb/image_color" />
        <param name="point_cloud_topic" value="/camera/depth/points" />
    </node>
</launch>

This example initializes a Kinect for Xbox One, using libfreenect2 and iai_kinect2 to connect to the device and subscribes to /kinect2/hd/image_color for images and /kinect2/hd/points for the point cloud. You can copy the launch file and use the sd and qhd topics instead of hd if you need more performance.

<?xml version="1.0"?>
<launch>    
    <include file="$(find kinect2_bridge)/launch/kinect2_bridge.launch">
        <arg name="depth_method" value="cpu"/>
    </include>
    
    <node name="dodo_detector_ros" pkg="dodo_detector_ros" type="detector.py" output="screen">
        <rosparam command="load" file="$(find dodo_detector_ros)/config/main_config.yaml"/>
        <param name="image_topic" value="/kinect2/hd/image_color" />
        <param name="point_cloud_topic" value="/kinect2/hd/points" />
    </node>
</launch>

These three launch files are provided inside the launch directory.
