sea-bass / turtlebot3_behavior_demos

License: MIT
Example repository for autonomous behaviors using TurtleBot3, as well as Docker + Make workflows in ROS-based projects.

Programming Languages

Python, C++, CMake, Makefile, Shell

Projects that are alternatives to or similar to turtlebot3_behavior_demos

2019-UGRP-DPoom
2019 DGIST DPoom project under UGRP: an SBC- and RGB-D-camera-based fully autonomous driving system for mobile robots with indoor SLAM
Stars: ✭ 35 (-36.36%)
Mutual labels:  ros, turtlebot3
indires navigation
ROS packages for ground robot navigation and exploration
Stars: ✭ 90 (+63.64%)
Mutual labels:  ros
segment global planner
A ROS global planner plugin for segment tracking
Stars: ✭ 31 (-43.64%)
Mutual labels:  ros
ROS-Object-Detection-2Dto3D-RealsenseD435
Uses the Intel RealSense D435 depth camera for object detection with the YOLOv3-5 framework under OpenCV DNN (old version) / TensorRT (now) on ROS Melodic, with real-time display of the point cloud in the camera coordinate system.
Stars: ✭ 45 (-18.18%)
Mutual labels:  ros
hybrid planning experiments
Sampler + Optimizing Motion Planning Demonstrations
Stars: ✭ 23 (-58.18%)
Mutual labels:  ros
aztarna
aztarna, a footprinting tool for robots.
Stars: ✭ 85 (+54.55%)
Mutual labels:  ros
Turtlebot Navigation
This project was completed on May 15, 2015. The goal was to implement a software system for frontier-based exploration and navigation for TurtleBot-like robots.
Stars: ✭ 28 (-49.09%)
Mutual labels:  ros
dvrk env
Accurate URDF and SDF models of Intuitive Surgical's da Vinci Research Kit (dVRK)
Stars: ✭ 13 (-76.36%)
Mutual labels:  ros
phidgets drivers
ROS drivers for various Phidgets devices
Stars: ✭ 30 (-45.45%)
Mutual labels:  ros
ros hadoop
Hadoop splittable InputFormat for ROS. Process rosbags with Hadoop, Spark, and other HDFS-compatible systems.
Stars: ✭ 92 (+67.27%)
Mutual labels:  ros
mader
Trajectory Planner in Multi-Agent and Dynamic Environments
Stars: ✭ 252 (+358.18%)
Mutual labels:  ros
handsfree
HandsFree Open Source Robot Project
Stars: ✭ 139 (+152.73%)
Mutual labels:  ros
costmap depth camera
A costmap plugin for the costmap_2d package. It supports multiple depth cameras and runs in real time.
Stars: ✭ 26 (-52.73%)
Mutual labels:  ros
aruco
ArUco marker detection and pose estimation for AR and robotics, with ROS support
Stars: ✭ 93 (+69.09%)
Mutual labels:  ros
rospberrypi
Everything you need to set up ROS Melodic on the Raspberry Pi Zero / W
Stars: ✭ 33 (-40%)
Mutual labels:  ros
ur5controller
OpenRAVE Controller Plugin for UR5 (Universal Robots UR5) Robot
Stars: ✭ 21 (-61.82%)
Mutual labels:  ros
ws moveit
A ROS Melodic workspace created on Ubuntu 18.04, containing MoveIt and MoveIt Task Constructor (MTC) projects such as pick, place, and pouring tasks for a multi-manipulator system.
Stars: ✭ 25 (-54.55%)
Mutual labels:  ros
pocketsphinx
Updated ROS bindings to pocketsphinx
Stars: ✭ 36 (-34.55%)
Mutual labels:  ros
node example
ROS node examples with parameter server, dynamic reconfigure, timers, and custom messages for C++ and Python.
Stars: ✭ 90 (+63.64%)
Mutual labels:  ros
ros openvino
A ROS package that wraps the OpenVINO inference engine and gets it working with Myriad and GPU devices
Stars: ✭ 57 (+3.64%)
Mutual labels:  ros

TurtleBot3 Behavior Demos

In this repository, we demonstrate autonomous behavior with a simulated ROBOTIS TurtleBot3 using Ubuntu 20.04 and ROS Noetic.

The autonomy in these examples is designed using behavior trees; a minimal sketch follows below. For more information, refer to this blog post or the Behavior Trees in Robotics and AI textbook.
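For a flavor of how such trees are put together, here is a minimal, self-contained py_trees sketch. This is not code from this repository: the behaviour names are made up, and the Sequence constructor assumes the ROS 1 era py_trees API (newer 2.x releases also require a memory argument).

import py_trees

# Hypothetical leaf behaviour for illustration; the real demo uses
# navigation and vision behaviours defined in this repository.
class FoundBlock(py_trees.behaviour.Behaviour):
    def update(self):
        # Replace with actual detection logic; this sketch always fails.
        return py_trees.common.Status.FAILURE

# A sequence ticks its children in order and stops at the first failure.
root = py_trees.composites.Sequence("search_sequence")
root.add_children([
    py_trees.behaviours.Success(name="go_to_location"),
    FoundBlock(name="found_block"),
])

root.tick_once()
print(py_trees.display.ascii_tree(root))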

This repository also serves as an example of Docker + Make workflows in ROS-based projects. For more information, refer to this blog post.

By Sebastian Castro, 2021


Setup

First, install Docker using the official install guide.

To run Docker containers with graphics and GPU support, you will also need the NVIDIA Container Toolkit.

To use GUI-based tools (e.g., RViz, Gazebo) inside Docker, additional setup is required. The simplest way is to run the command below each time you log into your machine; a more detailed walkthrough of the options is available in the ROS Wiki.

xhost + local:docker

Technically, you should be able to bypass Docker, clone this package directly into a Catkin workspace, and build it, provided you have the necessary dependencies. As long as you can run the examples in the TurtleBot3 manual, you should be in good shape.

First, clone this repository and go into the top-level folder:

git clone https://github.com/sea-bass/turtlebot3_behavior_demos.git
cd turtlebot3_behavior_demos

Build the base and overlay Docker images. This will take a while and requires approximately 4 GB of disk space.

make build

Basic Usage

We use make to automate building, as shown above, but also to provide various useful entry points into the Docker container once it has been built. All make commands below should be run from your host machine, not from inside the container.

To enter a Terminal in the overlay container:

make term

If you have an NVIDIA GPU and want to give your container access to its devices, add the following argument (this applies to all targets):

make term USE_GPU=true

You can verify that the display works in Docker by starting a basic Gazebo simulation included in the standard TurtleBot3 packages:

make sim

Behavior Trees Demo

In this example, the robot navigates around known locations with the goal of finding a block of a specified color (red, green, or blue). Object detection is done using simple thresholding in the HSV color space with calibrated values.
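For reference, HSV thresholding with OpenCV generally looks like the sketch below. This is illustrative only: the function name and the bounds are placeholders, not the calibrated values used in the demo.

import cv2
import numpy as np

def detect_block(bgr_image):
    # Convert from BGR (OpenCV's default ordering) to HSV, which separates
    # hue from saturation and brightness, making color thresholds more robust.
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)

    # Placeholder bounds for a green block; the demo uses calibrated values.
    lower = np.array([40, 100, 100])
    upper = np.array([80, 255, 255])

    # Binary mask: 255 where a pixel falls inside the HSV range, else 0.
    mask = cv2.inRange(hsv, lower, upper)

    # Declare the block "found" if enough pixels pass the threshold.
    return cv2.countNonZero(mask) > 500, mask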

To start the demo world, run the following command:

make demo-world

Behavior Trees in Python

To start the Python-based demo, which uses py_trees:

make demo-behavior

You can also include arguments:

make demo-behavior TARGET_COLOR=green BT_TYPE=queue USE_GPU=true

Note that the behavior tree viewer (rqt_py_trees) does not select topics automatically. To view the tree, you should use the drop-down list to select the /autonomy_node/log/tree topic.
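The topic name comes from the way py_trees_ros publishes tree snapshots under the node's private ~log/tree namespace. A rough sketch, assuming the ROS 1 py_trees_ros API and a node named autonomy_node (as in this demo), with a placeholder root:

import rospy
import py_trees
import py_trees_ros

rospy.init_node("autonomy_node")

# A trivial root just for illustration; the demo builds a full search tree.
root = py_trees.behaviours.Success(name="placeholder_root")

# The ROS wrapper publishes snapshots on ~log/tree, which is what
# rqt_py_trees subscribes to -- hence /autonomy_node/log/tree here.
tree = py_trees_ros.trees.BehaviourTree(root)
tree.setup(timeout=15)
tree.tick_tock(sleep_ms=500)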

After starting the commands above (plus doing some waiting and window rearranging), you should see the following. The labeled images will appear once the robot reaches a target location.

Example demo screenshot

Behavior Trees in C++

To start the C++ demo, which uses BehaviorTree.CPP:

make demo-behavior-cpp

You can also include arguments:

make demo-behavior-cpp TARGET_COLOR=green BT_TYPE=queue USE_GPU=true

Note that the behavior tree viewer (Groot) requires you to click the "Connect" button to display the active tree.

After starting the commands above (plus doing some waiting and window rearranging), you should see the following. The labeled images will appear once the robot reaches a target location.

Example demo screenshot
