
marooncn / Navbot

License: MIT
Using RGB Image as Visual Input for Mapless Robot Navigation

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to Navbot

Spatio temporal voxel layer
A new voxel layer leveraging modern 3D graphics tools to modernize navigation environmental representations
Stars: ✭ 246 (+121.62%)
Mutual labels:  ros, robot, navigation
Awesome Robotics
A curated list of awesome links and software libraries that are useful for robots.
Stars: ✭ 478 (+330.63%)
Mutual labels:  ros, robot, reinforcement-learning
Turtlebot Navigation
This project was completed on May 15, 2015. The goal was to implement a software system for frontier-based exploration and navigation for turtlebot-like robots.
Stars: ✭ 28 (-74.77%)
Mutual labels:  robot, navigation, ros
Turtlebot3
ROS packages for Turtlebot3
Stars: ✭ 673 (+506.31%)
Mutual labels:  ros, robot, navigation
Turtlebot3 simulations
Simulations for TurtleBot3
Stars: ✭ 104 (-6.31%)
Mutual labels:  ros, robot, navigation
Fourth robot pkg
Repository for robot No. 4 (KIT-C4)
Stars: ✭ 7 (-93.69%)
Mutual labels:  ros, robot, navigation
Quadruped 9g
A ROS node that describes a quadruped robot using URDF
Stars: ✭ 61 (-45.05%)
Mutual labels:  ros, robot
Drivebot
tensorflow deep RL for driving a rover around
Stars: ✭ 62 (-44.14%)
Mutual labels:  ros, reinforcement-learning
Panther
🐆 Panther is an open robotic AGV platform, based on ROS, for outdoor and indoor environments.
Stars: ✭ 67 (-39.64%)
Mutual labels:  ros, robot
Simulator
A ROS/ROS2 Multi-robot Simulator for Autonomous Vehicles
Stars: ✭ 1,260 (+1035.14%)
Mutual labels:  ros, reinforcement-learning
Tianbot racecar
DISCONTINUED - MIGRATED TO TIANRACER - A Low cost Autonomous Driving Car Educational and Competition Kit
Stars: ✭ 26 (-76.58%)
Mutual labels:  ros, navigation
Ev3dev Lang Java
A project to learn Java and create software for Mindstorms Robots using hardware supported by EV3Dev & the LeJOS way.
Stars: ✭ 79 (-28.83%)
Mutual labels:  ros, robot
Rvd
Robot Vulnerability Database. An archive of robot vulnerabilities and bugs.
Stars: ✭ 87 (-21.62%)
Mutual labels:  ros, robot
Show trajectory
This repository collects 3 ways to show a robot's trajectory in ROS
Stars: ✭ 48 (-56.76%)
Mutual labels:  ros, robot
True artificial intelligence
True AI (artificial intelligence)
Stars: ✭ 38 (-65.77%)
Mutual labels:  ros, robot
Grid map
Universal grid map library for mobile robotic mapping
Stars: ✭ 1,135 (+922.52%)
Mutual labels:  ros, navigation
Ros Academy For Beginners
Code examples for the Chinese university MOOC "Introduction to Robot Operating System" (ROS tutorial)
Stars: ✭ 861 (+675.68%)
Mutual labels:  ros, robot
Navigation
ROS Navigation stack. Code for finding where the robot is and how it can get somewhere else.
Stars: ✭ 1,248 (+1024.32%)
Mutual labels:  ros, navigation
Mrpt navigation
ROS nodes wrapping core MRPT functionality: localization, autonomous navigation, rawlogs, etc.
Stars: ✭ 90 (-18.92%)
Mutual labels:  ros, navigation
Webots
Webots Robot Simulator
Stars: ✭ 1,324 (+1092.79%)
Mutual labels:  ros, robot

navbot

Navbot is a collection for mapless robot navigation using RGB images as visual input. It contains the test
environment and motion planners, aiming to realize all three levels of mapless navigation:
1. memorizing efficiently;
2. from memorizing to reasoning;
3. more powerful reasoning.
The experiment data is in the ./materials/record folder.


Environment

I built the environment as a benchmark for testing the algorithms.

It has the following properties:
  • Diverse complexity.
  • Gym-style interface.
  • ROS support.

Quickstart example code for this benchmark:

import numpy as np

import env

# create maze 0 with a continuous action space
maze0 = env.GazeboMaze(maze_id=0, continuous=True)
observation = maze0.reset()
done = False
while not done:
    # stochastic strategy: sample random velocities
    action = dict()
    action['linear_vel'] = np.random.uniform(0, 1)
    action['angular_vel'] = np.random.uniform(-1, 1)
    observation, done, reward = maze0.execute(action)
    print(action, reward)
maze0.close()
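
Note that execute() follows the tensorforce environment convention of returning (observation, done, reward), not Gym's (observation, reward, done, info) tuple.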

1. Memorizing

VAE-based planner

VAE Structure and Training

The designed VAE structure is shown in the lower left figure. It is trained in maze1 and maze2. The kl_tolerance is set to 0.5 (we stop optimizing the KL loss term once it falls below a threshold, rather than letting it go to near zero) and the latent dimension is 32, so the KL term is driven as close as possible to 0.5 × 32 = 16.
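
A minimal sketch of this KL-tolerance trick, in the style of the World Models code (TF 1.x; variable names here are illustrative, not taken from this repository):

import tensorflow as tf

latent_dim = 32
kl_tolerance = 0.5

# encoder outputs: mean and log-variance of the latent Gaussian
mu = tf.placeholder(tf.float32, [None, latent_dim])
logvar = tf.placeholder(tf.float32, [None, latent_dim])

# KL divergence of N(mu, sigma^2) from N(0, 1), summed over latent dimensions
kl = -0.5 * tf.reduce_sum(1.0 + logvar - tf.square(mu) - tf.exp(logvar), axis=1)
# stop optimizing once KL drops below kl_tolerance * latent_dim (= 16 here)
kl = tf.maximum(kl, kl_tolerance * latent_dim)
kl_loss = tf.reduce_mean(kl)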


The following results are tested in maze3 to verify generalization ability.

Planner Structure

VAE-based planner & Baseline network structure

Performance

  1. The proposed trajectory is blue and the baseline is green.

  2. The success rate comparison in maze1.

  3. Performance comparison (SPL):

     SPL     Benchmark   Proposed
     maze1   0.702       0.703
     maze2   0.611       0.626

That is, the proposed motion planner not only has much better sample efficiency but also better performance. In fact, the shortest paths in both mazes were found by the proposed motion planner (26 timesteps in maze1 and 29 timesteps in maze2, with acceleration in simulation).
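
For reference, the SPL metric in the table presumably follows the standard definition of Success weighted by Path Length (Anderson et al., 2018):

\[ \mathrm{SPL} = \frac{1}{N} \sum_{i=1}^{N} S_i \, \frac{\ell_i}{\max(p_i, \ell_i)} \]

where N is the number of episodes, S_i indicates success in episode i, \ell_i is the shortest-path length from start to goal, and p_i is the length of the path the agent actually took.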

2. From Memorizing to Reasoning

Stacked LSTM and network structure

Stacked LSTM

network structure
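
A hedged sketch of what a stacked LSTM looks like in the TF 1.x API (layer count and sizes here are illustrative; the actual values are those shown in the figure):

import tensorflow as tf

inputs = tf.placeholder(tf.float32, [None, None, 32])  # [batch, time, features]

# stack two LSTM layers into a single recurrent cell
cells = [tf.nn.rnn_cell.LSTMCell(num_units=256) for _ in range(2)]
stacked = tf.nn.rnn_cell.MultiRNNCell(cells)
outputs, state = tf.nn.dynamic_rnn(stacked, inputs, dtype=tf.float32)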

Result

Success rate in maze1

Install

Dependencies

tensorflow: 1.5.0
OS: Ubuntu 16.04
Python: 2.7
OpenCV: 3
ROS: Kinetic
Gazebo: 7
tensorforce: https://github.com/tensorforce/tensorforce

# install tensorflow-gpu after cuDNN and CUDA are installed
pip install tensorflow-gpu==1.5.0
# or use the CPU build if there is no Nvidia GPU; it also works
pip install tensorflow==1.5.0
# install OpenCV: https://docs.opencv.org/master/d7/d9f/tutorial_linux_install.html
# install ROS: http://wiki.ros.org/kinetic/Installation/Ubuntu
# install Gazebo
sudo apt-get install gazebo7 libgazebo7-dev
# install an old tensorforce version that supports Python 2, from source (see below)
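
One possible way to do the from-source tensorforce install (the exact release tag that still supports Python 2 is an assumption; check the project's release history):

git clone https://github.com/tensorforce/tensorforce
cd tensorforce
# check out a release that predates the Python-3-only requirement (tag is an assumption)
git checkout 0.4.3
pip install -e .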

Run

sudo apt-get install ros-kinetic-gazebo-ros-pkgs ros-kinetic-gazebo-ros-control
sudo apt-get install ros-kinetic-turtlebot-*
sudo apt-get remove ros-kinetic-turtlebot-description
sudo apt-get install ros-kinetic-kobuki-description
# change to catkin_ws/src
git clone https://github.com/marooncn/navbot
cd ..
catkin_make
source ./devel/setup.bash
# you can change the configuration in config.py
cd src/navbot/rl_nav/scripts
# run the proposed model for memorizing
python PPO.py
# run the proposed model for reasoning
python E2E_PPO_rnn.py

Details

  1. The default environment is maze1; if you want to change the environment, change maze_id in nav_gazebo.launch and config.py.

  2. To run 01_generate_data.py to generate data, you need to comment out the goal-related code in nav_gazebo.launch and env.py.

  3. maze1 and maze2 are sped up 10× for training; if you want to speed up other environments, just change

    <max_step_size>0.001</max_step_size>
    <real_time_factor>1</real_time_factor>
    

    to

    <max_step_size>0.01</max_step_size>
    <!-- <real_time_factor>1</real_time_factor> -->
    

    in the environment file in worlds.

  4. To reproduce the result, please change the related parameters in config.py according to config.txt.

  5. PPO is not a deterministic policy-gradient algorithm: the action at every timestep is sampled from the policy distribution. This can be seen as "noise" and is useful for exploration and generalization. If you want to use the best strategy after the model is trained, set deterministic = True in config.py and performance will improve; a rough sketch of the difference follows this list.
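
As a rough illustration of stochastic versus deterministic action selection for a Gaussian policy (a generic sketch, not the repository's code):

import numpy as np

mean, std = 0.4, 0.1   # policy head output for one action dimension
deterministic = False  # the config.py switch described above

# stochastic sampling acts as exploration noise during training;
# the deterministic mean is the policy's best guess at evaluation time
action = mean if deterministic else np.random.normal(mean, std)
print(action)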

Cite

If you find this work helpful in your research, please cite the following papers:

Blog

Introduction to tensorforce (in Chinese)
Introduction to this work (in Chinese)

Reference

tensorforce(blog)
gym_gazebo
gazebo
roslaunch python API
turtlebot_description
kobuki_description
WorldModelsExperiments(official)
WorldModels(by Applied Data Science)
