
IntelLabs / Coach

License: Apache-2.0
Reinforcement Learning Coach by Intel AI Lab enables easy experimentation with state-of-the-art Reinforcement Learning algorithms

Programming Languages

Python, Jupyter Notebook, CSS, Makefile, Dockerfile, Batchfile

Projects that are alternatives to or similar to Coach

Mushroom Rl
Python library for Reinforcement Learning.
Stars: ✭ 442 (-78.8%)
Mutual labels:  reinforcement-learning, openai-gym, rl, mujoco
Pytorch A2c Ppo Acktr Gail
PyTorch implementation of Advantage Actor Critic (A2C), Proximal Policy Optimization (PPO), Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation (ACKTR) and Generative Adversarial Imitation Learning (GAIL).
Stars: ✭ 2,632 (+26.24%)
Mutual labels:  reinforcement-learning, mujoco, roboschool
Stable Baselines
Mirror of Stable-Baselines: a fork of OpenAI Baselines, implementations of reinforcement learning algorithms
Stars: ✭ 115 (-94.48%)
Mutual labels:  reinforcement-learning, openai-gym, rl
Irl Imitation
Implementation of Inverse Reinforcement Learning (IRL) algorithms in python/Tensorflow. Deep MaxEnt, MaxEnt, LPIRL
Stars: ✭ 333 (-84.03%)
Mutual labels:  reinforcement-learning, rl, imitation-learning
Tianshou
An elegant PyTorch deep reinforcement learning library.
Stars: ✭ 4,109 (+97.07%)
Mutual labels:  imitation-learning, mujoco, rl
Gymfc
A universal flight control tuning framework
Stars: ✭ 210 (-89.93%)
Mutual labels:  reinforcement-learning, openai-gym, rl
Drq
DrQ: Data regularized Q
Stars: ✭ 268 (-87.15%)
Mutual labels:  reinforcement-learning, rl, mujoco
Ravens
Train robotic agents to learn pick and place with deep learning for vision-based manipulation in PyBullet. Transporter Nets, CoRL 2020.
Stars: ✭ 133 (-93.62%)
Mutual labels:  reinforcement-learning, openai-gym, imitation-learning
Reinforcement learning
Implementation of selected reinforcement learning algorithms in Tensorflow. A3C, DDPG, REINFORCE, DQN, etc.
Stars: ✭ 132 (-93.67%)
Mutual labels:  reinforcement-learning, openai-gym, rl
Pytorch Rl
This repository contains model-free deep reinforcement learning algorithms implemented in Pytorch
Stars: ✭ 394 (-81.1%)
Mutual labels:  reinforcement-learning, openai-gym, mujoco
Gym Duckietown
Self-driving car simulator for the Duckietown universe
Stars: ✭ 379 (-81.82%)
Mutual labels:  reinforcement-learning, openai-gym, imitation-learning
Rl Baselines Zoo
A collection of 100+ pre-trained RL agents using Stable Baselines, training and hyperparameter optimization included.
Stars: ✭ 839 (-59.76%)
Mutual labels:  reinforcement-learning, openai-gym, rl
Deterministic Gail Pytorch
PyTorch implementation of Deterministic Generative Adversarial Imitation Learning (GAIL) for Off Policy learning
Stars: ✭ 44 (-97.89%)
Mutual labels:  reinforcement-learning, openai-gym, imitation-learning
Cartpole
OpenAI's cartpole env solver.
Stars: ✭ 107 (-94.87%)
Mutual labels:  reinforcement-learning, openai-gym
Easy Rl
A Chinese-language reinforcement learning tutorial, readable online at: https://datawhalechina.github.io/easy-rl/
Stars: ✭ 3,004 (+44.08%)
Mutual labels:  reinforcement-learning, imitation-learning
Ctc Executioner
Master Thesis: Limit order placement with Reinforcement Learning
Stars: ✭ 112 (-94.63%)
Mutual labels:  reinforcement-learning, openai-gym
Aws Robomaker Sample Application Deepracer
Use AWS RoboMaker and demonstrate running a simulation which trains a reinforcement learning (RL) model to drive a car around a track
Stars: ✭ 105 (-94.96%)
Mutual labels:  reinforcement-learning, rl
Gym Fx
Forex trading simulator environment for OpenAI Gym, observations contain the order status, performance and timeseries loaded from a CSV file containing rates and indicators. Work In Progress
Stars: ✭ 151 (-92.76%)
Mutual labels:  reinforcement-learning, openai-gym
Mjrl
Reinforcement learning algorithms for MuJoCo tasks
Stars: ✭ 162 (-92.23%)
Mutual labels:  reinforcement-learning, mujoco
Onnx
Open standard for machine learning interoperability
Stars: ✭ 11,829 (+467.34%)
Mutual labels:  mxnet, onnx

Coach

CI License Docs DOI

Coach Logo

Coach is a Python reinforcement learning framework containing implementations of many state-of-the-art algorithms.

It exposes a set of easy-to-use APIs for experimenting with new RL algorithms, and allows simple integration of new environments to solve. Basic RL components (algorithms, environments, neural network architectures, exploration policies, ...) are well decoupled, so that extending and reusing existing components is fairly painless.

Training an agent to solve an environment is as easy as running:

coach -p CartPole_DQN -r
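
Coach can also be used as a library directly from Python. The snippet below is a minimal sketch based on the tutorial-style API (ClippedPPO on Gym's CartPole); class names and defaults should be verified against the installed rl_coach version.

# Minimal sketch: running Coach as a library (based on the tutorial-style API).
# The agent/environment choice is illustrative; defaults are used for all unspecified parameters.
from rl_coach.agents.clipped_ppo_agent import ClippedPPOAgentParameters
from rl_coach.environments.gym_environment import GymVectorEnvironment
from rl_coach.graph_managers.basic_rl_graph_manager import BasicRLGraphManager
from rl_coach.graph_managers.graph_manager import SimpleSchedule

# Couple an agent, an environment, and a training schedule into a graph manager
graph_manager = BasicRLGraphManager(
    agent_params=ClippedPPOAgentParameters(),
    env_params=GymVectorEnvironment(level='CartPole-v0'),
    schedule_params=SimpleSchedule()
)

# Run the training loop defined by the schedule
graph_manager.improve()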

Demo environments: Fetch Slide, Pendulum, Starcraft, Doom Deathmatch, CARLA, Montezuma's Revenge, Doom Health Gathering, PyBullet Minitaur, Gym Extensions Ant


Benchmarks

One of the main challenges when building a research project, or a solution based on a published algorithm, is getting a concrete and reliable baseline that reproduces the algorithm's results as reported by its authors. To address this problem, we are releasing a set of benchmarks showing that Coach reliably reproduces the results of many state-of-the-art algorithms.

Installation

Note: Coach has only been tested on Ubuntu 16.04 LTS, and with Python 3.5.

For some information on installing on Ubuntu 17.10 with Python 3.6.3, please refer to the following issue: https://github.com/IntelLabs/coach/issues/54

Installing Coach requires a few prerequisites. The following commands set up everything needed to run Coach on top of OpenAI Gym environments:

# General
sudo -E apt-get install python3-pip cmake zlib1g-dev python3-tk python-opencv -y

# Boost libraries
sudo -E apt-get install libboost-all-dev -y

# Scipy requirements
sudo -E apt-get install libblas-dev liblapack-dev libatlas-base-dev gfortran -y

# PyGame
sudo -E apt-get install libsdl-dev libsdl-image1.2-dev libsdl-mixer1.2-dev libsdl-ttf2.0-dev \
libsmpeg-dev libportmidi-dev libavformat-dev libswscale-dev -y

# Dashboard
sudo -E apt-get install dpkg-dev build-essential python3.5-dev libjpeg-dev libtiff-dev libsdl1.2-dev libnotify-dev \
freeglut3 freeglut3-dev libsm-dev libgtk2.0-dev libgtk-3-dev libwebkitgtk-dev libwebkitgtk-3.0-dev \
libgstreamer-plugins-base1.0-dev -y

# Gym
sudo -E apt-get install libav-tools libsdl2-dev swig cmake -y

We recommend installing coach in a virtualenv:

sudo -E pip3 install virtualenv
virtualenv -p python3 coach_env
. coach_env/bin/activate

Finally, install coach using pip:

pip3 install rl_coach

Or alternatively, for a development environment, install coach from the cloned repository:

cd coach
pip3 install -e .

If a GPU is present, Coach's pip package will install tensorflow-gpu by default. If a GPU is not present, Intel-Optimized TensorFlow will be installed.

In addition to OpenAI Gym, several other environments were tested and are supported. Please follow the instructions in the Supported Environments section below in order to install more environments.

Getting Started

Tutorials and Documentation

Jupyter notebooks demonstrating how to run Coach from the command line or as a library, implement an algorithm, or integrate an environment.

Framework documentation, algorithm descriptions, and instructions on how to contribute a new agent/environment.

Basic Usage

Running Coach

To make results reproducible, Coach defines a mechanism called a preset. There are several available presets under the presets directory. To list all the available presets, use the -l flag.

To run a preset, use:

coach -r -p <preset_name>

For example:

  • CartPole environment using Policy Gradients (PG):

    coach -r -p CartPole_PG
  • Basic level of Doom using Dueling network and Double DQN (DDQN) algorithm:

    coach -r -p Doom_Basic_Dueling_DDQN

Some presets apply to a group of environment levels, such as the entire Atari or MuJoCo suites. To use these presets, the requested level should be specified using the -lvl flag.

For example:

  • Pong using the Neural Episodic Control (NEC) algorithm:

    coach -r -p Atari_NEC -lvl pong

There are several types of agents that can benefit from running them in a distributed fashion with multiple workers in parallel. Each worker interacts with its own copy of the environment but updates a shared network, which improves the data collection speed and the stability of the learning process. To specify the number of workers to run, use the -n flag.

For example:

  • Breakout using Asynchronous Advantage Actor-Critic (A3C) with 8 workers:

    coach -r -p Atari_A3C -lvl breakout -n 8

It is easy to create new presets for different levels or environments by following the same pattern as in presets.py.
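
For example, a preset is just a Python module that defines a module-level graph_manager object. The sketch below mirrors the structure of the bundled CartPole presets; the file name and the schedule values are illustrative only and not tuned.

# Hypothetical preset file, e.g. rl_coach/presets/MyCartPole_DQN.py, mirroring the bundled presets.
from rl_coach.agents.dqn_agent import DQNAgentParameters
from rl_coach.core_types import EnvironmentEpisodes, EnvironmentSteps, TrainingSteps
from rl_coach.environments.gym_environment import GymVectorEnvironment
from rl_coach.graph_managers.basic_rl_graph_manager import BasicRLGraphManager
from rl_coach.graph_managers.graph_manager import ScheduleParameters

# Training schedule (values are illustrative, not tuned)
schedule_params = ScheduleParameters()
schedule_params.improve_steps = TrainingSteps(100000)
schedule_params.steps_between_evaluation_periods = EnvironmentEpisodes(10)
schedule_params.evaluation_steps = EnvironmentEpisodes(1)
schedule_params.heatup_steps = EnvironmentSteps(1000)

# Coach loads the module-level graph_manager object when the preset is selected
graph_manager = BasicRLGraphManager(
    agent_params=DQNAgentParameters(),
    env_params=GymVectorEnvironment(level='CartPole-v0'),
    schedule_params=schedule_params
)

Once such a file is placed under the presets directory, it can be launched in the usual way, e.g. coach -r -p MyCartPole_DQN (the preset name here is hypothetical).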

More usage examples can be found here.

Running Coach Dashboard (Visualization)

Training an agent to solve an environment can be tricky at times.

To help debug the training process, Coach outputs several signals per trained algorithm for tracking algorithmic performance.

While Coach trains an agent, a CSV file containing the relevant training signals is saved to the 'experiments' directory. Coach's dashboard can then be used to dynamically visualize the training signals and track algorithmic behavior.

To use it, run:

dashboard
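
If you prefer to inspect the signals without the dashboard, the experiment CSV can also be read directly, for example with pandas. In the sketch below, the file path and the column names ('Episode #', 'Training Reward') are assumptions; check the header of your own CSV, which varies between agents and Coach versions.

# Sketch: plotting a training signal straight from a Coach experiment CSV.
# The path and column names below are assumptions - inspect your own file first.
import pandas as pd
import matplotlib.pyplot as plt

csv_path = 'experiments/my_experiment/worker_0.csv'  # placeholder path
signals = pd.read_csv(csv_path)

print(signals.columns.tolist())                    # list the available training signals
signals.plot(x='Episode #', y='Training Reward')   # assumed column names
plt.show()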

Coach Design

Distributed Multi-Node Coach

As of release 0.11.0, Coach supports horizontal scaling for training RL agents on multiple nodes. In release 0.11.0 this was tested on the ClippedPPO and DQN agents. For usage instructions please refer to the documentation here.

Batch Reinforcement Learning

Training and evaluating an agent from a dataset of experience, where no simulator is available, is supported in Coach. There are example presets and a tutorial.

Supported Environments

  • OpenAI Gym:

    Installed by default by Coach's installer (see note on MuJoCo version below).

  • ViZDoom:

    Follow the instructions described in the ViZDoom repository -

    https://github.com/mwydmuch/ViZDoom

    Additionally, Coach assumes that the environment variable VIZDOOM_ROOT points to the ViZDoom installation directory.

  • Roboschool:

    Follow the instructions described in the roboschool repository -

    https://github.com/openai/roboschool

  • GymExtensions:

    Follow the instructions described in the GymExtensions repository -

    https://github.com/Breakend/gym-extensions

    Additionally, add the installation directory to the PYTHONPATH environment variable.

  • PyBullet:

    Follow the instructions described in the Quick Start Guide (basically just - 'pip install pybullet')

  • CARLA:

    Download release 0.8.4 from the CARLA repository -

    https://github.com/carla-simulator/carla/releases

    Install the python client and dependencies from the release tarball:

    pip3 install -r PythonClient/requirements.txt
    pip3 install PythonClient
    

    Create a new CARLA_ROOT environment variable pointing to CARLA's installation directory.

    A simple CARLA settings file (CarlaSettings.ini) is supplied with Coach, and is located in the environments directory.

  • Starcraft:

    Follow the instructions described in the PySC2 repository -

    https://github.com/deepmind/pysc2

  • DeepMind Control Suite:

    Follow the instructions described in the DeepMind Control Suite repository -

    https://github.com/deepmind/dm_control

  • Robosuite:

    Note: To use Robosuite-based environments, please install Coach from the latest cloned repository. It is not yet available as part of the rl_coach package on PyPI.

    Follow the instructions described in the robosuite documentation (see note on MuJoCo version below).
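
Several of the environments above are located through environment variables (VIZDOOM_ROOT, CARLA_ROOT) or through PYTHONPATH. A small sanity-check sketch, covering only the variables mentioned in this list:

# Sketch: verify the environment variables mentioned above before launching the related presets.
import os

for var in ("VIZDOOM_ROOT", "CARLA_ROOT"):
    path = os.environ.get(var)
    if not path or not os.path.isdir(path):
        print(f"{var} is not set or does not point to an existing directory")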

Note on MuJoCo version

OpenAI Gym supports MuJoCo only up to version 1.5 (and corresponding mujoco-py version 1.50.x.x). The Robosuite simulation framework, however, requires MuJoCo version 2.0 (and corresponding mujoco-py version 2.0.2.9, as of robosuite version 1.2). Therefore, if you wish to run both Gym-based MuJoCo environments and Robosuite environments, it's recommended to have a separate virtual environment for each.

Please note that all Gym-Based MuJoCo presets in Coach (rl_coach/presets/Mujoco_*.py) have been validated only with MuJoCo 1.5 (including the reported benchmark results).

Supported Algorithms

Coach Design

Value Optimization Agents

Policy Optimization Agents

General Agents

Imitation Learning Agents

Hierarchical Reinforcement Learning Agents

Memory Types

Exploration Techniques

Citation

If you used Coach for your work, please use the following citation:

@misc{caspi_itai_2017_1134899,
  author       = {Caspi, Itai and
                  Leibovich, Gal and
                  Novik, Gal and
                  Endrawis, Shadi},
  title        = {Reinforcement Learning Coach},
  month        = dec,
  year         = 2017,
  doi          = {10.5281/zenodo.1134899},
  url          = {https://doi.org/10.5281/zenodo.1134899}
}

Contact

We'd be happy to get any questions or contributions through GitHub issues and PRs.

Please make sure to take a look here before filing an issue or proposing a PR.

The Coach development team can also be contacted over email.

Disclaimer

Coach is released as a reference code for research purposes. It is not an official Intel product, and the level of quality and support may not be as expected from an official product. Additional algorithms and environments are planned to be added to the framework. Feedback and contributions from the open source and RL research communities are more than welcome.
