
GokulNC / Setting Up Carla Reinforcement Learning

Reinforcement Learning Environment for CARLA Autonomous Driving Simulator

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to Setting Up Carla Reinforcement Learning

Demos
Some JavaScript works published as demos, mostly ML or DS
Stars: ✭ 55 (-17.91%)
Mutual labels:  reinforcement-learning
Pgdrive
PGDrive: an open-ended driving simulator with infinite scenes from procedural generation
Stars: ✭ 60 (-10.45%)
Mutual labels:  reinforcement-learning
Drivebot
tensorflow deep RL for driving a rover around
Stars: ✭ 62 (-7.46%)
Mutual labels:  reinforcement-learning
Ml Surveys
📋 Survey papers summarizing advances in deep learning, NLP, CV, graphs, reinforcement learning, recommendations, etc.
Stars: ✭ 1,063 (+1486.57%)
Mutual labels:  reinforcement-learning
Malib
A Multi-agent Learning Framework
Stars: ✭ 58 (-13.43%)
Mutual labels:  reinforcement-learning
Data Science Best Resources
Carefully curated resource links for data science in one place
Stars: ✭ 1,104 (+1547.76%)
Mutual labels:  reinforcement-learning
Dqn
Implementation of q-learning using TensorFlow
Stars: ✭ 53 (-20.9%)
Mutual labels:  reinforcement-learning
Personae
📈 Personae is a repo of implements and environment of Deep Reinforcement Learning & Supervised Learning for Quantitative Trading.
Stars: ✭ 1,140 (+1601.49%)
Mutual labels:  reinforcement-learning
Nlg Rl
Accelerated Reinforcement Learning for Sentence Generation by Vocabulary Prediction
Stars: ✭ 59 (-11.94%)
Mutual labels:  reinforcement-learning
Max
Code for reproducing experiments in Model-Based Active Exploration, ICML 2019
Stars: ✭ 61 (-8.96%)
Mutual labels:  reinforcement-learning
Tictactoe
Tic Tac Toe Machine Learning
Stars: ✭ 56 (-16.42%)
Mutual labels:  reinforcement-learning
Learning2run
Our NIPS 2017: Learning to Run source code
Stars: ✭ 57 (-14.93%)
Mutual labels:  reinforcement-learning
Mario rl
Stars: ✭ 60 (-10.45%)
Mutual labels:  reinforcement-learning
Reinforcement Learning
Implementation of Reinforcement Learning algorithms in Python, based on Sutton's & Barto's Book (Ed. 2)
Stars: ✭ 55 (-17.91%)
Mutual labels:  reinforcement-learning
Drl papernotes
Notes and comments about Deep Reinforcement Learning papers
Stars: ✭ 65 (-2.99%)
Mutual labels:  reinforcement-learning
Reinforcepy
Collection of reinforcement learners implemented in python. Mainly including DQN and its variants
Stars: ✭ 54 (-19.4%)
Mutual labels:  reinforcement-learning
Nlp overview
Overview of Modern Deep Learning Techniques Applied to Natural Language Processing
Stars: ✭ 1,104 (+1547.76%)
Mutual labels:  reinforcement-learning
Mabalgs
👤 Multi-Armed Bandit Algorithms Library (MAB) 👮
Stars: ✭ 67 (+0%)
Mutual labels:  reinforcement-learning
Outlace.github.io
Machine learning and data science blog.
Stars: ✭ 65 (-2.99%)
Mutual labels:  reinforcement-learning
Galvanise zero
Learning from zero (mostly based off of AlphaZero) in General Game Playing.
Stars: ✭ 60 (-10.45%)
Mutual labels:  reinforcement-learning

Setting up CARLA simulator environment for Reinforcement Learning


Introduction

If you didn't know, CARLA is an open-source simulator for autonomous driving research.

It can be used as an environment for training ADAS (Advanced Driver-Assistance Systems), and also for Reinforcement Learning.

This guide will help you set up the CARLA environment for RL. Most of my code here is inspired by Intel Coach's setup of CARLA. I thought it'd be helpful to have a separate guide for this, so we can implement our own RL algorithms on top of it instead of relying on Nervana Coach.

Requirements

Setting up the CARLA Path

After downloading the release version, place it in any accessible directory, for example /home/username/CARLA.

Now open your terminal, run nano ~/.bashrc, and add the path of the CARLA environment like:

export CARLA_ROOT=/home/username/CARLA
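After saving the file, reload it with source ~/.bashrc (or open a new terminal) so the variable takes effect. As a quick sanity check, here's a minimal sketch (not part of this repo) to verify that Python can see the variable:

import os

# Verify that CARLA_ROOT is visible before using the environment wrapper
carla_root = os.environ.get('CARLA_ROOT')
if carla_root is None:
    raise RuntimeError("CARLA_ROOT is not set; run 'source ~/.bashrc' or restart your terminal")
print('Using CARLA from:', carla_root)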

Getting the required files for RL

Just clone (or fork) this repo by

git clone https://github.com/GokulNC/Setting-Up-CARLA-RL

All the required files for the environment's RL interface are present in the Environment directory (which you need not worry about). Note: Most of the files are obtained from Intel Coach's interface for RL, with modifications from my side.

Playing with the Environment

The environment interface provided here is more or less similar to that of OpenAI Gym, for standardization purposes ;)

To create a CARLA environment

from Environment.carla_environment_wrapper import CarlaEnvironmentWrapper as CarlaEnv

env = CarlaEnv()  # To create an env

Resetting the environment

# returns the initial output values (as described in sections below)
initial_observation = env.reset()

Taking an action

observation, reward, done, info = env.step(action_idx)

where action_idx is the discrete index corresponding to a specific action.

As of now, there are 9 discrete actions, each mapping to a [throttle/brake, steering] pair as defined in self.actions of carla_environment_wrapper.py like

actions = {0: [0., 0.],
           1: [0., -self.steering_strength],
           2: [0., self.steering_strength],
           3: [self.gas_strength, 0.],
           4: [-self.brake_strength, 0.],
           5: [self.gas_strength, -self.steering_strength],
           6: [self.gas_strength, self.steering_strength],
           7: [-self.brake_strength, -self.steering_strength],
           8: [-self.brake_strength, self.steering_strength]}

actions_description = ['NO-OP', 'TURN_LEFT', 'TURN_RIGHT', 'GAS', 'BRAKE',
                       'GAS_AND_TURN_LEFT', 'GAS_AND_TURN_RIGHT',
                       'BRAKE_AND_TURN_LEFT', 'BRAKE_AND_TURN_RIGHT']

(Feel free to modify it as you see fit)

Values returned from env.step() (after taking an action)

# observation   :   observation after taking the action

# To get RGB image from the observation:
state = observation['rgb_image']
# TODO: In future, will add support for LiDAR sensors, etc. as required

# reward       :   immediate reward after taking the action

# done          :   boolean True/False indicating if episode is finished
#                       (collision has occurred or time limit exceeded)

# info          :   information about the action taken & consequences
# To get the id of the last action taken
last_action_idx = info['action']
# more info will be added later
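Putting the pieces above together, here's a minimal random-agent episode loop. This is only a sketch based on the interface described above; swap the random choice for your own agent:

import random

from Environment.carla_environment_wrapper import CarlaEnvironmentWrapper as CarlaEnv

env = CarlaEnv()
observation = env.reset()
done = False
total_reward = 0.0
while not done:
    action_idx = random.randrange(9)  # pick one of the 9 discrete actions at random
    observation, reward, done, info = env.step(action_idx)
    state = observation['rgb_image']  # RGB frame; this is what you'd feed to your agent
    total_reward += reward
print('Episode finished with total reward:', total_reward)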

Rendering the game after each action

CARLA automatically renders everything as you play (take actions/pass controls), so there is no need for explicit rendering.
If you still want to render the camera view,

env = CarlaEnv(is_render_enabled=True)  # To create an env

# To render after each action:
env.render()

Saving screenshots

env = CarlaEnv(save_screens=True)  # To create an env

# To save after each action:
env.save_screenshots()
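For example, a short sketch that enables both options and uses them in a stepping loop (assumes the 9 discrete actions above, where 3 = GAS):

env = CarlaEnv(is_render_enabled=True, save_screens=True)
observation = env.reset()
for _ in range(100):
    observation, reward, done, info = env.step(3)  # 3 = GAS, i.e. keep accelerating
    env.render()            # display the camera view
    env.save_screenshots()  # write the current frame to disk
    if done:
        observation = env.reset()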

Testing CARLA game as a human

I have included a file human_play.py which you can run by

python human_play.py

and play the game manually to get an understanding of it. (Make sure the focus is on the terminal window)
Use the arrow keys to play (Up to accelerate, Down to brake, Left/Right to steer)

Extras:

  • You can change resolution of server window, render window and other configs in Environment/carla_config.py
  • You can get the following outputs, instead of just RGB image:
    • For semantic segmentation output: env = CarlaEnv(cameras=['SemanticSegmentation']) and segmented_output = observation['segmented_image']
    • For depth output: env = CarlaEnv(cameras=['Depth']) and depth_map = observation['depth_map']
    • (Note: You can also use a combination of everything, as sketched after this list. For RGB output, cameras=['SceneFinal'])
      (To play with your own cameras, feel free to modify things as described in the CARLA docs)
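For instance, a small sketch requesting all three cameras at once and reading each output (key names as listed above):

env = CarlaEnv(cameras=['SceneFinal', 'SemanticSegmentation', 'Depth'])
observation = env.reset()
rgb_image = observation['rgb_image']              # from the SceneFinal camera
segmented_image = observation['segmented_image']  # from the SemanticSegmentation camera
depth_map = observation['depth_map']              # from the Depth camera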

TODO in future:

  • As of now, the CarlaEnvironmentWrapper supports both continuous & hardcoded discretized action values. I think the discretized action values can be removed
  
  • Make it Gym compliant (for benchmarks); a rough sketch of what this could look like is included below
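If you'd like to pick up the Gym-compliance TODO, here's a rough, untested sketch of what such a wrapper might look like (the class name CarlaGymEnv is hypothetical, and observation_space is omitted since the image resolution depends on Environment/carla_config.py):

import gym

from Environment.carla_environment_wrapper import CarlaEnvironmentWrapper as CarlaEnv

class CarlaGymEnv(gym.Env):
    # Hypothetical wrapper, not part of this repo
    def __init__(self):
        self.env = CarlaEnv()
        self.action_space = gym.spaces.Discrete(9)  # the 9 discrete actions above

    def reset(self):
        return self.env.reset()['rgb_image']

    def step(self, action_idx):
        observation, reward, done, info = self.env.step(action_idx)
        return observation['rgb_image'], reward, done, info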

Feel free to contribute!
