
learn-to-race / l2r

License: GPL-2.0
Open-source reinforcement learning environment for autonomous racing.

Programming Languages

  • Python: 139,335 projects (#7 most used programming language)
  • Shell: 77,523 projects
  • Dockerfile: 14,818 projects

Projects that are alternatives to or similar to l2r

Carla
Open-source simulator for autonomous driving research.
Stars: ✭ 7,012 (+18352.63%)
Mutual labels:  simulator, research, deep-reinforcement-learning, autonomous-driving
Gibsonenv
Gibson Environments: Real-World Perception for Embodied Agents
Stars: ✭ 666 (+1652.63%)
Mutual labels:  simulator, research, robotics, deep-reinforcement-learning
Habitat Lab
A modular high-level library to train embodied AI agents across a variety of tasks, environments, and simulators.
Stars: ✭ 587 (+1444.74%)
Mutual labels:  simulator, research, robotics, deep-reinforcement-learning
Airsim
Open source simulator for autonomous vehicles built on Unreal Engine / Unity, from Microsoft AI & Research
Stars: ✭ 12,528 (+32868.42%)
Mutual labels:  simulator, research, deep-reinforcement-learning
Holodeck Engine
High Fidelity Simulator for Reinforcement Learning and Robotics Research.
Stars: ✭ 48 (+26.32%)
Mutual labels:  simulator, research, robotics
Holodeck
High Fidelity Simulator for Reinforcement Learning and Robotics Research.
Stars: ✭ 513 (+1250%)
Mutual labels:  simulator, research, robotics
Bullet3
Bullet Physics SDK: real-time collision detection and multi-physics simulation for VR, games, visual effects, robotics, machine learning etc.
Stars: ✭ 8,714 (+22831.58%)
Mutual labels:  simulator, robotics
Plankton
Open source simulator for maritime robotics researchers
Stars: ✭ 51 (+34.21%)
Mutual labels:  simulator, robotics
Pgdrive
PGDrive: an open-ended driving simulator with infinite scenes from procedural generation
Stars: ✭ 60 (+57.89%)
Mutual labels:  simulator, autonomous-driving
Awesome Emulators Simulators
A curated list of software emulators and simulators of PCs, home computers, mainframes, consoles, robots and much more...
Stars: ✭ 94 (+147.37%)
Mutual labels:  simulator, robotics
Webots
Webots Robot Simulator
Stars: ✭ 1,324 (+3384.21%)
Mutual labels:  simulator, robotics
Master-Thesis
Deep Reinforcement Learning in Autonomous Driving: the A3C algorithm used to make a car learn to drive in TORCS; Python 3.5, Tensorflow, tensorboard, numpy, gym-torcs, ubuntu, latex
Stars: ✭ 33 (-13.16%)
Mutual labels:  deep-reinforcement-learning, autonomous-driving
Osim Rl
Reinforcement learning environments with musculoskeletal models
Stars: ✭ 763 (+1907.89%)
Mutual labels:  simulator, deep-reinforcement-learning
Habitat Sim
A flexible, high-performance 3D simulator for Embodied AI research.
Stars: ✭ 1,098 (+2789.47%)
Mutual labels:  simulator, robotics
ad-xolib
C++ library for Parsing OpenScenario (1.1.1) & OpenDrive files (1.7) ASAM Specifications
Stars: ✭ 56 (+47.37%)
Mutual labels:  simulator, autonomous-driving
racing dreamer
Latent Imagination Facilitates Zero-Shot Transfer in Autonomous Racing
Stars: ✭ 31 (-18.42%)
Mutual labels:  deep-reinforcement-learning, autonomous-driving
Object-Goal-Navigation
Pytorch code for NeurIPS-20 Paper "Object Goal Navigation using Goal-Oriented Semantic Exploration"
Stars: ✭ 107 (+181.58%)
Mutual labels:  robotics, deep-reinforcement-learning
Hexapod Robot Simulator
A hexapod robot simulator built from first principles
Stars: ✭ 577 (+1418.42%)
Mutual labels:  simulator, robotics
jiminy
Jiminy: a fast and portable Python/C++ simulator of poly-articulated systems with OpenAI Gym interface for reinforcement learning
Stars: ✭ 90 (+136.84%)
Mutual labels:  simulator, robotics
Carla-ppo
This repository hosts a customized PPO based agent for Carla. The goal of this project is to make it easier to interact with and experiment in Carla with reinforcement learning based agents -- this, by wrapping Carla in a gym like environment that can handle custom reward functions, custom debug output, etc.
Stars: ✭ 122 (+221.05%)
Mutual labels:  deep-reinforcement-learning, autonomous-driving

Learn-to-Race

Learn-to-Race is an OpenAI Gym-compliant, multimodal control environment where agents learn how to race. Unlike many simplistic learning environments, ours is built around Arrival’s high-fidelity racing simulator, featuring full software-in-the-loop (SIL) and even hardware-in-the-loop (HIL) simulation capabilities. This simulator has played a key role in bringing autonomous racing technology to real life in the Roborace series, the world’s first extreme competition of teams developing self-driving AI.
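
Because the environment follows the standard OpenAI Gym interface, interacting with it looks like any other Gym rollout. The sketch below is illustrative only: env stands in for a fully configured Learn-to-Race environment, whose construction is described in the official docs.

# Minimal Gym-style rollout sketch; env is a configured Learn-to-Race
# environment (see the official docs for how to build one).
def rollout(env, max_steps=1000):
    obs = env.reset()                       # multimodal observation
    total_reward = 0.0
    for _ in range(max_steps):
        action = env.action_space.sample()  # random action, e.g. steering & acceleration
        obs, reward, done, info = env.step(action)
        total_reward += reward
        if done:                            # episode ended (crash, lap complete, etc.)
            break
    return total_reward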


[Figure: An overview of the Learn-to-Race framework]


Documentation

Please visit our official docs for a comprehensive guide on getting started with the environment. Happy racing!

Learn-to-Race Task

While learning-based agents continue to demonstrate superhuman performance in many areas, we believe that they still lack generalization ability and often require too many environment interactions. The Learn-to-Race task targets exactly this: agents learn on training racetracks but are evaluated on their performance on an unseen evaluation track. The evaluation track is not entirely unseen, however; much like a Formula 1 driver, each agent may interact with the new track for 60 minutes during a pre-evaluation stage before true evaluation begins.

Baseline Agents

We provide multiple baseline agents, both classical and learning-based controllers, to demonstrate how to use Learn-to-Race. The first is a RandomActionAgent that shows basic functionality. We also include a Soft Actor-Critic (SAC) agent, trained tabula rasa for 1000 episodes. On the Las Vegas track, it consistently completes laps in under 2 minutes each, using only visual features from the virtual camera as input.
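
For illustration, the idea behind a random-action baseline fits in a few lines. This is a sketch only; the actual RandomActionAgent shipped with L2R may expose a different interface.

# Sketch of a random-action baseline (illustrative; not the exact class
# shipped with L2R).
class RandomActionAgent:
    def __init__(self, action_space):
        self.action_space = action_space

    def select_action(self, observation):
        # Ignore the observation and sample uniformly from the action space.
        return self.action_space.sample()

An agent like this plugs into the rollout loop shown earlier by replacing env.action_space.sample() with agent.select_action(obs).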


[Racing footage of the SAC agent at Episode 1, Episode 100, and Episode 1000]


Customizable Sensor Configurations

One of the key features of this environment is the ability to create arbitrary configurations of vehicle sensors, providing users with a rich sandbox for multimodal, learning-based approaches. The following sensors are supported and, where applicable, can be placed at any location relative to the vehicle:

  • RGB cameras
  • Depth cameras
  • Ground truth segmentation cameras
  • Fisheye cameras
  • Ray trace LiDARs
  • Depth 2D LiDARs
  • Radars

Additionally, these sensors are parameterized and can be customized further; for example, cameras have modifiable image size, field-of-view, and exposure. We provide a sample configuration below which has front-, side-, and birdseye-facing cameras, each in RGB mode and with ground-truth segmentation.

[Image grid: left-facing, front-facing, right-facing, and birdseye camera views, each shown in RGB and with ground-truth segmentation]

Please visit our documentation for more information about sensor customization.
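
To make the shape of such a configuration concrete, here is a hypothetical two-camera setup written as a Python dictionary. Every parameter name below (type, position, rotation, width, height, fov) is an assumption for illustration; the real schema is defined in the L2R documentation.

# Hypothetical multi-camera sensor configuration. All parameter names are
# illustrative; consult the L2R docs for the actual schema.
sensor_config = {
    "CameraFrontRGB": {
        "type": "rgb",
        "position": [0.0, 2.0, 1.0],    # x, y, z relative to the vehicle
        "rotation": [0.0, 0.0, 0.0],    # roll, pitch, yaw
        "width": 512,                   # image width in pixels
        "height": 384,                  # image height in pixels
        "fov": 90,                      # field-of-view in degrees
    },
    "CameraBirdseyeSegm": {
        "type": "segmentation",         # ground-truth segmentation camera
        "position": [0.0, 0.0, 15.0],   # mounted high above the vehicle
        "rotation": [0.0, -90.0, 0.0],  # pointed straight down
        "width": 512,
        "height": 512,
        "fov": 90,
    },
}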

Requirements

Python: We use Learn-to-Race with Python 3.6 or 3.7.

Graphics Hardware: An Nvidia graphics card and its associated drivers are required. An Nvidia GTX 970 is minimally sufficient to simply run the simulator, but a more powerful card is recommended.

Docker: The racing simulator is commonly run in a Docker container.

Container GPU Access: If running the simulator in a container, the container needs access to the GPU, so nvidia-container-runtime is also required.

Installation

Due to the container GPU access requirement, this installation assumes a Linux operating system. If you do not have a Linux OS, we recommend running Learn-to-Race on a public cloud instance that has a sufficient GPU.

  1. Request access to the Racing simulator. We recommend running the simulator as a Python subprocess, which simply requires that you specify the path of the simulator in the env_kwargs.controller_kwargs.sim_path field of your configuration file (see the configuration sketch after these steps). Alternatively, you can run the simulator as a Docker container by setting env_kwargs.controller_kwargs.start_container to True. If you prefer the latter, you can load the Docker image as follows:
$ docker load < arrival-sim-image.tar.gz
  2. Download the source code from this repository and install the package requirements. We recommend using a virtual environment:
$ pip install virtualenv
$ virtualenv venv                           # create new virtual environment
$ source venv/bin/activate                  # activate the environment
(venv) $ pip install -r requirements.txt    # install package requirements
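
As mentioned in step 1, the choice between subprocess and container mode is made in the configuration file via env_kwargs.controller_kwargs.sim_path and env_kwargs.controller_kwargs.start_container. The sketch below shows those two keys as a nested structure; the surrounding file format and remaining keys are defined in the official docs, and the simulator path is a placeholder.

# The two configuration keys named in step 1, shown as a nested structure.
# Everything else about the config file is defined in the official docs.
env_kwargs = {
    "controller_kwargs": {
        "sim_path": "/path/to/ArrivalSim",  # placeholder path to the simulator
        "start_container": False,           # set True to launch the Docker image instead
    },
}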

Research

Please cite this work if you use L2R as a part of your research.

@misc{herman2021learntorace,
      title={Learn-to-Race: A Multimodal Control Environment for Autonomous Racing}, 
      author={James Herman and Jonathan Francis and Siddha Ganju and Bingqing Chen and Anirudh Koul and Abhinav Gupta and Alexey Skabelkin and Ivan Zhukov and Andrey Gostev and Max Kumskoy and Eric Nyberg},
      year={2021},
      eprint={2103.11575},
      archivePrefix={arXiv},
      primaryClass={cs.RO}
}
Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].