
abhisheknaik96 / Multiagenttorcs

The multi-agent version of TORCS for developing control algorithms for fully autonomous driving in the cluttered, multi-agent settings of everyday life.

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to Multiagenttorcs

Awesome Carla
👉 CARLA resources such as tutorials, blog posts, and code. https://github.com/carla-simulator/carla
Stars: ✭ 246 (+101.64%)
Mutual labels:  self-driving-car, autonomous-vehicles, reinforcement-learning
Metacar
A reinforcement learning environment for self-driving cars in the browser.
Stars: ✭ 337 (+176.23%)
Mutual labels:  self-driving-car, autonomous-vehicles, reinforcement-learning
Carla
Open-source simulator for autonomous driving research.
Stars: ✭ 7,012 (+5647.54%)
Mutual labels:  self-driving-car, autonomous-vehicles
Dig Into Apollo
Apollo learning notes (Apollo学习笔记) for beginners.
Stars: ✭ 903 (+640.16%)
Mutual labels:  self-driving-car, autonomous-vehicles
Rc car ros
ROS package to control an autonomous RC vehicle based on Raspberry Pi3
Stars: ✭ 30 (-75.41%)
Mutual labels:  self-driving-car, autonomous-vehicles
Awesome Autonomous Vehicle
A curated list of autonomous driving resources (Chinese version).
Stars: ✭ 389 (+218.85%)
Mutual labels:  self-driving-car, autonomous-vehicles
Neurojs
A JavaScript deep learning and reinforcement learning library.
Stars: ✭ 4,344 (+3460.66%)
Mutual labels:  self-driving-car, reinforcement-learning
Duckietown.jl
Differentiable Duckietown
Stars: ✭ 29 (-76.23%)
Mutual labels:  self-driving-car, autonomous-vehicles
Self-Driving-Car
Lane Detection for Self Driving Car
Stars: ✭ 14 (-88.52%)
Mutual labels:  self-driving-car, autonomous-vehicles
Simulator
A ROS/ROS2 Multi-robot Simulator for Autonomous Vehicles
Stars: ✭ 1,260 (+932.79%)
Mutual labels:  self-driving-car, reinforcement-learning
Awesome Decision Making Reinforcement Learning
A selection of state-of-the-art research materials on decision making and motion planning.
Stars: ✭ 68 (-44.26%)
Mutual labels:  autonomous-vehicles, reinforcement-learning
Reinforcement Learning For Self Driving Cars
Project on designing and implementing a neural network that maximises the driving speed of a self-driving car through reinforcement learning.
Stars: ✭ 85 (-30.33%)
Mutual labels:  self-driving-car, reinforcement-learning
Apollo
An open autonomous driving platform
Stars: ✭ 19,814 (+16140.98%)
Mutual labels:  self-driving-car, autonomous-vehicles
Deepdrive
Deepdrive is a simulator that allows anyone with a PC to push the state-of-the-art in self-driving
Stars: ✭ 628 (+414.75%)
Mutual labels:  self-driving-car, reinforcement-learning
Self Driving Truck
Self-Driving Truck in Euro Truck Simulator 2, trained via Reinforcement Learning
Stars: ✭ 307 (+151.64%)
Mutual labels:  self-driving-car, reinforcement-learning
Deepgtav
A plugin for GTAV that transforms it into a vision-based self-driving car research environment.
Stars: ✭ 926 (+659.02%)
Mutual labels:  self-driving-car, reinforcement-learning
Ngsim env
Learning human driver models from NGSIM data with imitation learning.
Stars: ✭ 96 (-21.31%)
Mutual labels:  autonomous-vehicles, reinforcement-learning
Error-State-Extended-Kalman-Filter
Vehicle State Estimation using Error-State Extended Kalman Filter
Stars: ✭ 100 (-18.03%)
Mutual labels:  self-driving-car, autonomous-vehicles
erdos
Dataflow system for building self-driving car and robotics applications.
Stars: ✭ 135 (+10.66%)
Mutual labels:  self-driving-car, autonomous-vehicles
Uselfdrivingsimulator
Self-Driving Car Simulator
Stars: ✭ 48 (-60.66%)
Mutual labels:  self-driving-car, autonomous-vehicles

MADRaS - Multi-Agent DRiving Simulator

This is a multi-agent version of TORCS for multi-agent reinforcement learning. In other words, multiple cars running simultaneously on a track can each be controlled by a different control algorithm - heuristic, reinforcement-learning-based, etc.

Please check out the updated version of MADRaS here!

Dependencies

  • TORCS (the simulator)
  • Simulated Car Racing modules (the patch which creates a server-client model to expose the higher-level game features to the learning agent)
  • Python3 (all future development will be in Python3; an old Python2 branch also exists here)

Installation

It is assumed that you have TORCS (tested with version 1.3.6) installed from source on a machine running Ubuntu 14.04/16.04 LTS.

scr-client

Install the scr-client as follows:

  1. Download the scr-patch from here.
  2. Unpack the package scr-linux-patch.tgz in your base TORCS directory.
  3. This will create a new directory called scr-patch.
    cd scr-patch
  4. sh do_patch.sh (do_unpatch.sh to revert the modifications)
  5. Move to the parent TORCS directory
    cd ../
  6. Run the following commands:
    ./configure    
    make -j4    
    sudo make install -j4    
    sudo make datainstall -j4    
    

Ten scr_server cars should now be available in the race configurations.

  1. Download the C++ client from here.
  2. Unpack the package scr-client-cpp.tgz in your base TORCS directory.
  3. This will create a new directory called scr-client-cpp.
    cd scr-client-cpp
  4. make -j4
  5. At this point, multiple clients can join an instance of the TORCS game by:
    ./client    
    ./client port:3002
    
    Typical values are between 3001 and 3010 (3001 is the default); a short script for spawning several clients at once is sketched below.
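
To quickly check that the patched server accepts several simultaneous connections, the clients can also be spawned from a short script. The sketch below simply runs the documented ./client command on consecutive ports; nothing beyond the commands above is assumed.

    # Minimal sketch: spawn several scr-client-cpp clients on consecutive ports.
    # Assumes it is run from inside the scr-client-cpp directory after `make`,
    # and that a TORCS race with scr_server cars has already been started.
    import subprocess

    NUM_CLIENTS = 3      # one client per scr_server car in the race
    BASE_PORT = 3001     # the scr patch exposes ports 3001-3010

    procs = [subprocess.Popen(["./client", "port:{}".format(BASE_PORT + i)])
             for i in range(NUM_CLIENTS)]
    for p in procs:
        p.wait()         # block until each client exits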

Usage

  1. Start a 'Quick Race' in TORCS in one terminal console (selecting the n agents as scr_* cars)
    torcs
    Close the TORCS window.
  2. From inside the multi-agent-torcs directory in one console:
    python3 playGame.py 3001
  3. From another console:
    python3 playGame.py 3002
    And so on...

In the game loop in playGame.py, the action a_t at every timestep can be supplied by any algorithm.
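
For reference, below is a minimal sketch of such a loop, assuming a gym-torcs-style environment interface (TorcsEnv, reset(), and step() follow ugo-nama-kun's gym-torcs, which this project extends); the constructor arguments, action layout, and port handling in the actual playGame.py may differ.

    # Minimal sketch, not the actual playGame.py: a gym-torcs-style loop in which
    # a_t can come from any control algorithm (here, a dummy agent driving straight).
    import numpy as np
    from gym_torcs import TorcsEnv   # assumption: interface from ugo-nama-kun's gym-torcs

    env = TorcsEnv(vision=False, throttle=True)   # port selection may differ in MADRaS
    ob = env.reset()
    for t in range(10000):
        # a_t = [steering, acceleration, brake] in this sketch; the exact action
        # layout depends on the environment wrapper being used.
        a_t = np.array([0.0, 0.3, 0.0])
        ob, reward, done, _ = env.step(a_t)
        if done:
            ob = env.reset()
    env.end()   # shut down the connection to the TORCS server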

Note:

  1. playGame_DDPG.py has the code for a sample RL agent learning with the DDPG algorithm, while playGame.py has a dummy agent which just moves straight at every timestep (an illustrative sketch of such a learned policy follows this list).
  2. Headless rendering for multi-agent learning is under development. Contributions and ideas would be greatly appreciated!
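
As an illustration of how a learned policy slots into the game loop in place of the dummy straight-driving agent, here is a hedged stand-in for an actor network producing a_t from an observation vector. It is not the actual DDPG code in playGame_DDPG.py, and the observation size and layer widths are made up.

    # Illustrative stand-in (not the repo's DDPG code): a tiny actor mapping an
    # observation vector to a_t = [steering, acceleration, brake].
    import numpy as np

    OBS_DIM, HIDDEN = 29, 64          # assumed sizes, for illustration only
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(0, 0.1, (HIDDEN, OBS_DIM)), np.zeros(HIDDEN)
    W2, b2 = rng.normal(0, 0.1, (3, HIDDEN)), np.zeros(3)

    def act(obs):
        """Return one action; in DDPG these weights would be learned, not random."""
        h = np.tanh(W1 @ obs + b1)
        raw = W2 @ h + b2
        steer = np.tanh(raw[0])                        # steering in [-1, 1]
        accel, brake = 1.0 / (1.0 + np.exp(-raw[1:]))  # throttle/brake in [0, 1]
        return np.array([steer, accel, brake])

    # In the game loop: a_t = act(ob) instead of the hard-coded straight-line action.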

For single-agent learning:

  1. Start a 'Quick Race' in TORCS in one terminal console. Choose only one scr car and as many traffic cars as you want (preferably chenyi* cars¹, since they're programmed to follow individual lanes at speeds low enough for the agent to learn to overtake)
  2. From inside the multi-agent-torcs directory in one console:
    python3 playGame_DDPG.py 3001
    or any other port.

Sample results for a DDPG agent that learned to drive in traffic are available here.


Do check out the wiki for this project for in-depth information about TORCS and getting Deep (Reinforcement) Learning to work on it.


¹ The chenyi* cars can be installed from Princeton's DeepDrive project, which also adds a few maps for training and testing the agents. The default cars in TORCS are all programmed heuristic racing agents, which do not serve as good stand-ins for 'traffic'. Hence, using chenyi's code is highly recommended.

Credits

The multi-agent learning simulator was developed by Abhishek Naik, extending ugo-nama-kun's gym-torcs and yanpanlau's project, under the guidance of Anirban Santara, Balaraman Ravindran, and Bharat Kaul at Intel Labs.

Contributors

We believe MADRaS will enable new and veteran researchers in academia and industry to make the dream of fully autonomous driving a reality. Unlike the closed-source, secretive technologies of the big players, this project lets the community work towards that goal together, pooling thoughts and resources to get there faster. Hence, we greatly appreciate all contributions, big or small, from fellow researchers and users.
