
amazon-archives / AWS RoboMaker Sample Application DeepRacer

License: MIT
Use AWS RoboMaker and demonstrate running a simulation which trains a reinforcement learning (RL) model to drive a car around a track

Programming Languages

Python
139,335 projects - #7 most used programming language

Projects that are alternatives to or similar to AWS RoboMaker Sample Application DeepRacer

Gym Gazebo2
gym-gazebo2 is a toolkit for developing and comparing reinforcement learning algorithms using ROS 2 and Gazebo
Stars: ✭ 257 (+144.76%)
Mutual labels:  ros, reinforcement-learning, deep-reinforcement-learning, rl
Learning To Communicate Pytorch
Learning to Communicate with Deep Multi-Agent Reinforcement Learning in PyTorch
Stars: ✭ 236 (+124.76%)
Mutual labels:  reinforcement-learning, deep-reinforcement-learning, rl
Pytorch Drl
PyTorch implementations of various Deep Reinforcement Learning (DRL) algorithms for both single agent and multi-agent.
Stars: ✭ 233 (+121.9%)
Mutual labels:  reinforcement-learning, deep-reinforcement-learning, rl
Muzero General
MuZero
Stars: ✭ 1,187 (+1030.48%)
Mutual labels:  reinforcement-learning, deep-reinforcement-learning, rl
Rlenv.directory
Explore and find reinforcement learning environments in a list of 150+ open source environments.
Stars: ✭ 79 (-24.76%)
Mutual labels:  reinforcement-learning, deep-reinforcement-learning, rl
Rl trading
An environment for training high-frequency trading agents with reinforcement learning
Stars: ✭ 205 (+95.24%)
Mutual labels:  reinforcement-learning, rl, simulation
Rl Quadcopter
Teach a Quadcopter How to Fly!
Stars: ✭ 124 (+18.1%)
Mutual labels:  ros, reinforcement-learning, deep-reinforcement-learning
Ros2learn
ROS 2 enabled Machine Learning algorithms
Stars: ✭ 119 (+13.33%)
Mutual labels:  ros, reinforcement-learning, rl
Rad
RAD: Reinforcement Learning with Augmented Data
Stars: ✭ 268 (+155.24%)
Mutual labels:  reinforcement-learning, deep-reinforcement-learning, rl
Drq
DrQ: Data regularized Q
Stars: ✭ 268 (+155.24%)
Mutual labels:  reinforcement-learning, deep-reinforcement-learning, rl
Mushroom Rl
Python library for Reinforcement Learning.
Stars: ✭ 442 (+320.95%)
Mutual labels:  reinforcement-learning, deep-reinforcement-learning, rl
Deepdrive
Deepdrive is a simulator that allows anyone with a PC to push the state-of-the-art in self-driving
Stars: ✭ 628 (+498.1%)
Mutual labels:  reinforcement-learning, deep-reinforcement-learning, simulation
Awesome Robotics
A curated list of awesome links and software libraries that are useful for robots.
Stars: ✭ 478 (+355.24%)
Mutual labels:  ros, reinforcement-learning, simulation
Gibsonenv
Gibson Environments: Real-World Perception for Embodied Agents
Stars: ✭ 666 (+534.29%)
Mutual labels:  ros, reinforcement-learning, deep-reinforcement-learning
Snake
Artificial intelligence for the Snake game.
Stars: ✭ 1,241 (+1081.9%)
Mutual labels:  reinforcement-learning, deep-reinforcement-learning
Maze
Maze Applied Reinforcement Learning Framework
Stars: ✭ 85 (-19.05%)
Mutual labels:  reinforcement-learning, simulation
Autonomous Drone
This repository intends to enable autonomous drone delivery with the Intel Aero RTF drone and PX4 autopilot. The code can be executed either on the real drone or simulated on a PC using Gazebo. Its core is a robot operating system (ROS) node, which communicates with the PX4 autopilot through mavros. It uses SVO 2.0 for visual odometry, WhyCon for visual marker localization and Ewok for trajectory planning with collision avoidance.
Stars: ✭ 87 (-17.14%)
Mutual labels:  ros, simulation
Treeqn
Stars: ✭ 77 (-26.67%)
Mutual labels:  reinforcement-learning, deep-reinforcement-learning
Simulator
A ROS/ROS2 Multi-robot Simulator for Autonomous Vehicles
Stars: ✭ 1,260 (+1100%)
Mutual labels:  ros, reinforcement-learning
Hand dapg
Repository to accompany RSS 2018 paper on dexterous hand manipulation
Stars: ✭ 88 (-16.19%)
Mutual labels:  reinforcement-learning, simulation

DeepRacer

This Sample Application runs a simulation which trains a reinforcement learning (RL) model to drive a car around a track.

AWS RoboMaker sample applications include third-party software licensed under open-source licenses and are provided for demonstration purposes only. Incorporating or using RoboMaker sample applications in your production workloads or commercial products or devices may affect your legal rights or obligations under the applicable open-source licenses. Source code information can be found here.

Keywords: Reinforcement learning, AWS, RoboMaker

[Image: deepracer-hard-track-world.jpg]

Requirements

  • ROS Kinetic / Melodic (optional) - To run the simulation locally. Other distributions of ROS may work; however, they have not been tested
  • Gazebo (optional) - To run the simulation locally
  • An AWS S3 bucket - To store the trained reinforcement learning model
  • AWS RoboMaker - To run the simulation and to deploy the trained model to the robot

AWS Account Setup

AWS Credentials

You will need to create an AWS Account and configure the credentials to be able to communicate with AWS services. You may find AWS Configuration and Credential Files helpful.
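One common setup, assuming the AWS CLI is installed, is to run `aws configure`, which writes a shared credentials file at `~/.aws/credentials` along these lines (both values below are placeholders):

```
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
```

The default region is stored separately in `~/.aws/config`.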

AWS Permissions

To train the reinforcement learning model in simulation, you need an IAM role with the following policy. You can find instructions for creating a new IAM Policy here. In the JSON tab paste the following policy document:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "cloudwatch:PutMetricData",
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents",
                "logs:DescribeLogStreams",
                "s3:Get*",
                "s3:List*",
                "s3:Put*",
                "s3:DeleteObject"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ]
}
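If you prefer the command line, the policy can also be created with the AWS CLI. This is a sketch; the file name and policy name are examples, not something this guide prescribes:

```shell
# Save the policy document shown above to a local file.
cat > deepracer-policy.json <<'EOF'
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "cloudwatch:PutMetricData",
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents",
                "logs:DescribeLogStreams",
                "s3:Get*",
                "s3:List*",
                "s3:Put*",
                "s3:DeleteObject"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ]
}
EOF

# Sanity-check that the file is valid JSON before sending it to IAM.
python3 -m json.tool deepracer-policy.json > /dev/null && echo "policy JSON OK"

# Create the policy (example name; requires permission to manage IAM):
# aws iam create-policy --policy-name DeepRacerSamplePolicy \
#     --policy-document file://deepracer-policy.json
```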

Usage

Training the model

Building the simulation bundle

cd simulation_ws
rosws update
rosdep install --from-paths src --ignore-src -r -y
colcon build
colcon bundle

Running the simulation

The following environment variables must be set when you run your simulation:

  • MARKOV_PRESET_FILE - Defines the hyperparameters of the reinforcement learning algorithm. This should be set to deepracer.py.
  • MODEL_S3_BUCKET - The name of the S3 bucket in which you want to store the trained model.
  • MODEL_S3_PREFIX - The path where you want to store the model.
  • WORLD_NAME - The track to train the model on. Can be one of easy_track, medium_track, or hard_track.
  • ROS_AWS_REGION - The region of the S3 bucket in which you want to store the model.
  • AWS_ACCESS_KEY_ID - The access key for the role you created in the "AWS Permissions" section.
  • AWS_SECRET_ACCESS_KEY - The secret access key for the role you created in the "AWS Permissions" section.
  • AWS_SESSION_TOKEN - The session token for the role you created in the "AWS Permissions" section.
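For a local run, the variables can be exported in the shell before launching training. All values below are placeholders; substitute your own bucket, prefix, region, and credentials:

```shell
export MARKOV_PRESET_FILE=deepracer.py
export MODEL_S3_BUCKET=my-deepracer-models      # placeholder bucket name
export MODEL_S3_PREFIX=deepracer/run-1          # placeholder prefix
export WORLD_NAME=easy_track                    # or medium_track / hard_track
export ROS_AWS_REGION=us-east-1                 # region of the S3 bucket
export AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY=YOUR_SECRET_ACCESS_KEY
export AWS_SESSION_TOKEN=YOUR_SESSION_TOKEN     # only for temporary credentials
```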

Once the environment variables are set, you can run local training using the roslaunch command:

source simulation_ws/install/setup.sh
roslaunch deepracer_simulation local_training.launch

Seeing your robot learn

As the reinforcement learning model improves, the reward it earns per episode will increase. You can see a graph of this reward in CloudWatch at

All -> AWSRoboMakerSimulation -> Metrics with no dimensions -> Metric Name -> DeepRacerRewardPerEpisode

You can think of this metric as an indicator of how well your model has been trained. Once the graph plateaus, your robot has finished learning.
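The same metric can also be queried from the AWS CLI. This is a sketch, assuming the metric is published to the AWSRoboMakerSimulation namespace with no dimensions as shown above; the time window is a placeholder:

```shell
aws cloudwatch get-metric-statistics \
    --namespace AWSRoboMakerSimulation \
    --metric-name DeepRacerRewardPerEpisode \
    --start-time 2019-01-01T00:00:00Z \
    --end-time 2019-01-02T00:00:00Z \
    --period 300 \
    --statistics Average
```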

[Image: deepracer-metrics.png]

Evaluating the model

Building the simulation bundle

You can reuse the bundle from the training phase in the evaluation phase.

Running the simulation

The evaluation phase requires the same environment variables as the training phase. Once they are set, you can run evaluation using the roslaunch command:

source simulation_ws/install/setup.sh
roslaunch deepracer_simulation evaluation.launch

Troubleshooting

The robot does not look like it is training

The training algorithm has two phases. In the first, the reinforcement learning model is used to drive the car around the track; in the second, the algorithm uses the information gathered in the first phase to improve the model. During the second phase, no new commands are sent to the car, so it may appear to be stopped, spinning in circles, or drifting aimlessly.

Using this sample with AWS RoboMaker

You first need to install colcon. Python 3.5 or above is required.

apt-get update
apt-get install -y python3-pip python3-apt
pip3 install colcon-ros-bundle

After colcon is installed, build your robot or simulation application; you can then bundle it with:

# Bundling Simulation Application
cd simulation_ws
colcon bundle

This produces simulation_ws/bundle/output.tar. You'll need to upload this artifact to an S3 bucket. You can then use the bundle to create a simulation application, and create a simulation job in AWS RoboMaker.
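A sketch of those steps with the AWS CLI; the bucket name and application name are placeholders, and the software-suite versions assume a ROS Kinetic / Gazebo 7 bundle:

```shell
# Upload the bundle (bucket name is a placeholder):
aws s3 cp simulation_ws/bundle/output.tar s3://my-deepracer-bundles/deepracer/output.tar

# Register the bundle as a simulation application:
aws robomaker create-simulation-application \
    --name DeepRacerSample \
    --sources s3Bucket=my-deepracer-bundles,s3Key=deepracer/output.tar,architecture=X86_64 \
    --robot-software-suite name=ROS,version=Kinetic \
    --simulation-software-suite name=Gazebo,version=7 \
    --rendering-engine name=OGRE,version=1.x
```

A simulation job can then be created from this application in the RoboMaker console or with `aws robomaker create-simulation-job`.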

License

Most of this code is licensed under the MIT-0 no-attribution license. However, the sagemaker_rl_agent package is licensed under the Apache License 2.0. See LICENSE.txt for further information.

How to Contribute

Create issues and pull requests against this repository on GitHub.
