Acmece / Rl Collision Avoidance

Implementation of the paper "Towards Optimally Decentralized Multi-Robot Collision Avoidance via Deep Reinforcement Learning"

Projects that are alternatives of or similar to Rl Collision Avoidance

Ros2learn
ROS 2 enabled Machine Learning algorithms
Stars: ✭ 119 (-4.8%)
Mutual labels:  ros, reinforcement-learning, ppo
Torch Ac
Recurrent and multi-process PyTorch implementation of deep reinforcement Actor-Critic algorithms A2C and PPO
Stars: ✭ 70 (-44%)
Mutual labels:  reinforcement-learning, ppo
Drivebot
tensorflow deep RL for driving a rover around
Stars: ✭ 62 (-50.4%)
Mutual labels:  ros, reinforcement-learning
Reinforcement learning
Reinforcement learning tutorials
Stars: ✭ 82 (-34.4%)
Mutual labels:  reinforcement-learning, ppo
Learning2run
Our NIPS 2017: Learning to Run source code
Stars: ✭ 57 (-54.4%)
Mutual labels:  reinforcement-learning, ppo
Mario rl
Stars: ✭ 60 (-52%)
Mutual labels:  reinforcement-learning, ppo
Sc2aibot
Implementing reinforcement-learning algorithms for pysc2 -environment
Stars: ✭ 83 (-33.6%)
Mutual labels:  reinforcement-learning, ppo
Gibsonenv
Gibson Environments: Real-World Perception for Embodied Agents
Stars: ✭ 666 (+432.8%)
Mutual labels:  ros, reinforcement-learning
Torchrl
Pytorch Implementation of Reinforcement Learning Algorithms ( Soft Actor Critic(SAC)/ DDPG / TD3 /DQN / A2C/ PPO / TRPO)
Stars: ✭ 90 (-28%)
Mutual labels:  reinforcement-learning, ppo
Aws Robomaker Sample Application Deepracer
Use AWS RoboMaker and demonstrate running a simulation which trains a reinforcement learning (RL) model to drive a car around a track
Stars: ✭ 105 (-16%)
Mutual labels:  ros, reinforcement-learning
Easy Rl
A reinforcement learning tutorial in Chinese; read it online at: https://datawhalechina.github.io/easy-rl/
Stars: ✭ 3,004 (+2303.2%)
Mutual labels:  reinforcement-learning, ppo
Slm Lab
Modular Deep Reinforcement Learning framework in PyTorch. Companion library of the book "Foundations of Deep Reinforcement Learning".
Stars: ✭ 904 (+623.2%)
Mutual labels:  reinforcement-learning, ppo
Reinforcement Learning With Tensorflow
Simple reinforcement learning tutorials; Chinese AI lessons by MorvanZhou (莫烦Python)
Stars: ✭ 6,948 (+5458.4%)
Mutual labels:  reinforcement-learning, ppo
Doom Net Pytorch
Reinforcement learning models in ViZDoom environment
Stars: ✭ 113 (-9.6%)
Mutual labels:  reinforcement-learning, ppo
Deeprl Tutorials
Contains high quality implementations of Deep Reinforcement Learning algorithms written in PyTorch
Stars: ✭ 748 (+498.4%)
Mutual labels:  reinforcement-learning, ppo
Run Skeleton Run
Reason8.ai PyTorch solution for NIPS RL 2017 challenge
Stars: ✭ 83 (-33.6%)
Mutual labels:  reinforcement-learning, ppo
Super Mario Bros Ppo Pytorch
Proximal Policy Optimization (PPO) algorithm for Super Mario Bros
Stars: ✭ 649 (+419.2%)
Mutual labels:  reinforcement-learning, ppo
Pytorch Rl
PyTorch implementation of Deep Reinforcement Learning: Policy Gradient methods (TRPO, PPO, A2C) and Generative Adversarial Imitation Learning (GAIL). Fast Fisher vector product TRPO.
Stars: ✭ 658 (+426.4%)
Mutual labels:  reinforcement-learning, ppo
Simulator
A ROS/ROS2 Multi-robot Simulator for Autonomous Vehicles
Stars: ✭ 1,260 (+908%)
Mutual labels:  ros, reinforcement-learning
Navbot
Using RGB Image as Visual Input for Mapless Robot Navigation
Stars: ✭ 111 (-11.2%)
Mutual labels:  ros, reinforcement-learning

rl-collision-avoidance

This is a PyTorch implementation of the paper Towards Optimally Decentralized Multi-Robot Collision Avoidance via Deep Reinforcement Learning

Requirements

How to train

You may start by training in Stage1; once the policy is well trained, you can transfer it to Stage2 by continuing from the Stage1 policy model. This is exactly what curriculum learning means: training Stage2 from scratch may converge to a lower performance, or not converge at all. Please note that the motivation for training in Stage2 is to generalize the model so that it hopefully works well in real environments.
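The curriculum transfer described above amounts to warm-starting Stage2 from the weights saved at the end of Stage1 rather than from a random initialization. A minimal PyTorch sketch of the idea, where the tiny network and the checkpoint name stage1_policy.pth are placeholders and not the repo's actual classes or paths:

```python
import os
import tempfile

import torch
import torch.nn as nn

# Placeholder network standing in for the repo's policy model; the real
# policy in ppo_stage1.py / ppo_stage2.py is different.
def make_policy():
    return nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 2))

ckpt = os.path.join(tempfile.mkdtemp(), "stage1_policy.pth")

# End of Stage1 training: save the learned parameters.
stage1_policy = make_policy()
torch.save(stage1_policy.state_dict(), ckpt)

# Start of Stage2 training: initialize from the Stage1 checkpoint
# instead of random weights, then continue training in stage2.world.
stage2_policy = make_policy()
stage2_policy.load_state_dict(torch.load(ckpt))
```

Training Stage2 then proceeds exactly as before, only with the loaded parameters as the starting point.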

Please use the stage_ros-add_pose_and_crash package instead of the default package provided by ROS.

mkdir -p catkin_ws/src
cp -r stage_ros-add_pose_and_crash catkin_ws/src
cd catkin_ws
catkin_make
source devel/setup.bash

To train Stage1, modify the hyper-parameters in ppo_stage1.py as you like, and run the following commands:

(leave out the -g if you want to see the GUI while training)
rosrun stage_ros_add_pose_and_crash stageros -g worlds/stage1.world
mpiexec -np 24 python ppo_stage1.py
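For reference, the core of the update that ppo_stage1.py performs is PPO's clipped surrogate objective. A minimal NumPy sketch of that objective (the clip range 0.2 is a common default, not necessarily the value this repo uses):

```python
import numpy as np

# PPO clipped surrogate loss: the probability ratio between the new and
# old policy is clipped to [1 - eps, 1 + eps] so a single update cannot
# move the policy too far from the one that collected the data.
def ppo_clip_loss(log_probs_new, log_probs_old, advantages, clip_eps=0.2):
    ratio = np.exp(log_probs_new - log_probs_old)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    # PPO maximizes the elementwise minimum; as a loss we negate it.
    return -np.mean(np.minimum(unclipped, clipped))
```

With identical old and new log-probabilities the ratio is 1 and the loss is just the negated mean advantage; when the ratio drifts outside the clip range, the gradient through that sample is cut off.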

To train Stage2, modify the hyper-parameters in ppo_stage2.py as you like, and run the following commands:

rosrun stage_ros_add_pose_and_crash stageros -g worlds/stage2.world
mpiexec -np 44 python ppo_stage2.py

How to test

rosrun stage_ros_add_pose_and_crash stageros worlds/circle.world
mpiexec -np 50 python circle_test.py
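circle_test.py runs the trained policy on robots arranged in a circle. A hypothetical sketch of turning per-episode outcomes into the usual evaluation metrics, success rate and average travel time of successful runs (the record fields and numbers below are made up for illustration, not the script's actual output format):

```python
# Aggregate per-episode test results. An episode counts as a success
# only if the robot reached its goal without crashing.
def summarize(episodes):
    successes = [e for e in episodes if e["reached_goal"] and not e["crashed"]]
    success_rate = len(successes) / len(episodes)
    avg_time = (sum(e["time"] for e in successes) / len(successes)
                if successes else float("nan"))
    return success_rate, avg_time

# Made-up example episodes, not real circle_test.py output.
episodes = [
    {"reached_goal": True,  "crashed": False, "time": 12.4},
    {"reached_goal": True,  "crashed": False, "time": 10.8},
    {"reached_goal": False, "crashed": True,  "time": 3.1},
    {"reached_goal": True,  "crashed": False, "time": 11.6},
]
rate, avg_time = summarize(episodes)  # success_rate = 0.75
```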

Notice

I am not the author of the paper, nor am I in their group. You may contact Jia Pan ([email protected]) for paper-related issues. If you find this implementation useful and use it in your project, please consider citing it:

@misc{Tianyu2018,
	author = {Tianyu Liu},
	title = {Robot Collision Avoidance via Deep Reinforcement Learning},
	year = {2018},
	publisher = {GitHub},
	journal = {GitHub repository},
	howpublished = {\url{https://github.com/Acmece/rl-collision-avoidance.git}},
	commit = {7bc682403cb9a327377481be1f110debc16babbd}
}