PaddlePaddle / Parl

License: Apache-2.0
A high-performance distributed training framework for Reinforcement Learning

Programming Languages

  • Python
  • C++
  • JavaScript
  • Shell
  • CMake
  • HTML

Projects that are alternatives of or similar to Parl

Tensor2tensor
Library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research.
Stars: ✭ 11,865 (+405.32%)
Mutual labels:  reinforcement-learning
Doudizhu
An AI for Dou Dizhu (斗地主), a Chinese card game
Stars: ✭ 149 (-93.65%)
Mutual labels:  reinforcement-learning
Agents
TF-Agents: A reliable, scalable and easy to use TensorFlow library for Contextual Bandits and Reinforcement Learning.
Stars: ✭ 2,135 (-9.07%)
Mutual labels:  reinforcement-learning
Show Adapt And Tell
Code for "Show, Adapt and Tell: Adversarial Training of Cross-domain Image Captioner" in ICCV 2017
Stars: ✭ 146 (-93.78%)
Mutual labels:  reinforcement-learning
Open Quadruped
An open-source 3D-printed quadrupedal robot. Intuitive gait generation through 12-DOF Bezier Curves. Full 6-axis body pose manipulation. Custom 3DOF Leg Inverse Kinematics Model accounting for offsets.
Stars: ✭ 148 (-93.7%)
Mutual labels:  reinforcement-learning
Tensorflow rlre
Reinforcement Learning for Relation Classification from Noisy Data(TensorFlow)
Stars: ✭ 150 (-93.61%)
Mutual labels:  reinforcement-learning
Articulations Robot Demo
Stars: ✭ 145 (-93.82%)
Mutual labels:  reinforcement-learning
Java Deep Learning Cookbook
Code for Java Deep Learning Cookbook
Stars: ✭ 156 (-93.36%)
Mutual labels:  reinforcement-learning
Minimalrl
Implementations of basic RL algorithms with minimal lines of codes! (pytorch based)
Stars: ✭ 2,051 (-12.65%)
Mutual labels:  reinforcement-learning
Iccv2019 Learningtopaint
ICCV2019 - A painting AI that can reproduce paintings stroke by stroke using deep reinforcement learning.
Stars: ✭ 1,995 (-15.03%)
Mutual labels:  reinforcement-learning
Chess Alpha Zero
Chess reinforcement learning by AlphaGo Zero methods.
Stars: ✭ 1,868 (-20.44%)
Mutual labels:  reinforcement-learning
Study Reinforcement Learning
Studying Reinforcement Learning Guide
Stars: ✭ 147 (-93.74%)
Mutual labels:  reinforcement-learning
Tradzqai
A trading environment for RL agents, with backtesting and training.
Stars: ✭ 150 (-93.61%)
Mutual labels:  reinforcement-learning
Rl Book Challenge
Self-studying the Sutton & Barto book the hard way
Stars: ✭ 146 (-93.78%)
Mutual labels:  reinforcement-learning
Senseact
SenseAct: A computational framework for developing real-world robot learning tasks
Stars: ✭ 153 (-93.48%)
Mutual labels:  reinforcement-learning
Sumo Rl
A simple interface to instantiate Reinforcement Learning environments with SUMO for Traffic Signal Control. Compatible with Gym Env from OpenAI and MultiAgentEnv from RLlib.
Stars: ✭ 145 (-93.82%)
Mutual labels:  reinforcement-learning
Energy Py
Reinforcement learning for energy systems
Stars: ✭ 148 (-93.7%)
Mutual labels:  reinforcement-learning
Resources
Resources on various topics being worked on at IvLabs
Stars: ✭ 158 (-93.27%)
Mutual labels:  reinforcement-learning
Muzero
A structured implementation of MuZero
Stars: ✭ 156 (-93.36%)
Mutual labels:  reinforcement-learning
Gym Fx
Forex trading simulator environment for OpenAI Gym, observations contain the order status, performance and timeseries loaded from a CSV file containing rates and indicators. Work In Progress
Stars: ✭ 151 (-93.57%)
Mutual labels:  reinforcement-learning

PARL

PARL is a flexible and highly efficient reinforcement learning framework.

About PARL

Features

Reproducible. We provide algorithms that stably reproduce the results of many influential reinforcement learning algorithms.

Large Scale. Supports high-performance parallelization of training with thousands of CPUs and multiple GPUs.

Reusable. Algorithms provided in the repository can be adapted to a new task simply by defining a forward network; the training mechanism is built automatically.

Extensible. Build new algorithms quickly by inheriting the abstract classes of the framework.

Abstractions

(Figure: the three PARL abstractions: Model, Algorithm, Agent)

PARL aims to build an agent for training algorithms to perform complex tasks. The main abstractions introduced by PARL, which are composed recursively to build an agent, are the following:

Model

Model is abstracted to construct the forward network: it defines a policy network or critic network that takes the state as input.
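
As a minimal sketch (assuming the paddle backend; CartpolePolicy, obs_dim, and act_dim are hypothetical names chosen for illustration), a Model subclass only defines its layers and a forward pass:

import paddle.nn as nn
import paddle.nn.functional as F
import parl

class CartpolePolicy(parl.Model):
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.fc1 = nn.Linear(obs_dim, 128)
        self.fc2 = nn.Linear(128, act_dim)

    def forward(self, obs):
        # map a batch of states to action logits
        hidden = F.relu(self.fc1(obs))
        return self.fc2(hidden)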

Algorithm

Algorithm describes the mechanism for updating the parameters in a Model and typically contains at least one model.
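
For instance (a sketch assuming PARL's built-in DQN implementation with the paddle backend, reusing the model from the previous sketch; gamma and lr values are arbitrary), an Algorithm wraps a model together with its update rule:

import parl

model = CartpolePolicy(obs_dim=4, act_dim=2)
# DQN keeps the model (plus an internal target network) and exposes
# predict() and learn() as the parameter-update mechanism
algorithm = parl.algorithms.DQN(model, gamma=0.99, lr=1e-3)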

Agent

Agent, a data bridge between the environment and the algorithm, is responsible for data I/O with the outside environment and for the data preprocessing applied before data is fed into the training process.
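
A sketch of an Agent subclass (the method body is hypothetical; self.alg is the attribute name used in PARL's examples for the wrapped Algorithm):

import paddle
import parl

class CartpoleAgent(parl.Agent):
    def predict(self, obs):
        # preprocessing: turn the raw numpy observation into a tensor
        obs = paddle.to_tensor(obs, dtype='float32')
        logits = self.alg.predict(obs)
        return int(logits.argmax())

agent = CartpoleAgent(algorithm)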

Note: For more information about base classes, please visit our tutorial and API documentation.

Parallelization

PARL provides a compact API for distributed training, allowing users to turn their code into a parallelized version by simply adding a decorator. For more information about the parallel-training APIs, please visit our documentation.
Here is a Hello World example that demonstrates how easy it is to leverage outer computation resources.

#============Agent.py=================
import parl

@parl.remote_class
class Agent(object):

    def say_hello(self):
        print("Hello World!")

    def sum(self, a, b):
        return a + b

parl.connect('localhost:8037')
agent = Agent()
agent.say_hello()
ans = agent.sum(1, 5)  # runs remotely, without consuming any local computation resources

Two steps to use outer computation resources:

  1. Decorate the class with parl.remote_class; the decorated class becomes a new class whose instances can run on other CPUs or machines.
  2. Call parl.connect to initialize the parallel communication before creating any object; the address should point at a running cluster (see the sketch below). Calling any method of these objects does not consume local computation resources, since they are executed elsewhere.
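
A minimal way to bring up such a cluster locally is the xparl command-line tool (a sketch; the port must match the address passed to parl.connect, and the CPU count is arbitrary):

# start a master on this machine and contribute 4 local CPUs to the cluster
xparl start --port 8037 --cpu_num 4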

(Figure: PARL's distributed training architecture)

As shown in the figure above, real actors (orange circles) run on a CPU cluster, while the learner (blue circle) runs on the local GPU together with several remote-actor proxies (yellow circles with dotted edges).

Users can write code in a simple way, just as they would write multi-threaded code, yet with actors consuming remote resources. We have also provided examples of parallelized algorithms such as IMPALA and A2C. For details on usage, please refer to these examples.

Install:

Dependencies

  • Python 3.6+ (Python 3.8+ is preferable for distributed training).
  • paddlepaddle>=2.0 (optional; not needed if you only want to use the parallelization APIs)

pip install parl
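
If you want the deep-learning APIs as well, install paddlepaddle alongside parl (a sketch; the -gpu build targets CUDA machines):

pip install paddlepaddle parl        # CPU build
# or: pip install paddlepaddle-gpu parl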

Getting Started

A few points to get you started:

For absolute beginners, we also provide an introductory course on reinforcement learning (RL): (Video | Code)
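
The repository also ships runnable examples; a typical first run (a sketch assuming the examples/QuickStart layout of the PARL repository, which trains a CartPole policy) looks like:

git clone https://github.com/PaddlePaddle/PARL.git
cd PARL/examples/QuickStart
python train.py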

Examples

(Demos: NeurIPS 2018 challenge, Half-Cheetah, Breakout)
