
takuseno / D3rlpy

License: MIT
An offline deep reinforcement learning library

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to D3rlpy

Hierarchical Actor Critic Hac Pytorch
PyTorch implementation of Hierarchical Actor Critic (HAC) for OpenAI gym environments
Stars: ✭ 116 (-16.55%)
Mutual labels:  deep-reinforcement-learning
Pytorch Trpo
PyTorch Implementation of Trust Region Policy Optimization (TRPO)
Stars: ✭ 123 (-11.51%)
Mutual labels:  deep-reinforcement-learning
Adnet
Attention-guided CNN for image denoising (Neural Networks, 2020)
Stars: ✭ 135 (-2.88%)
Mutual labels:  deep-reinforcement-learning
Deep reinforcement learning
Resources, papers, tutorials
Stars: ✭ 119 (-14.39%)
Mutual labels:  deep-reinforcement-learning
Rl Medical
Deep Reinforcement Learning (DRL) agents applied to medical images
Stars: ✭ 123 (-11.51%)
Mutual labels:  deep-reinforcement-learning
Muzero Pytorch
PyTorch implementation of MuZero
Stars: ✭ 129 (-7.19%)
Mutual labels:  deep-reinforcement-learning
A3c Pytorch
PyTorch implementation of the Asynchronous Advantage Actor-Critic (A3C) algorithm
Stars: ✭ 108 (-22.3%)
Mutual labels:  deep-reinforcement-learning
Machine Learning And Data Science
This is a repository which contains all my work related to Machine Learning, AI and Data Science. It includes my graduate projects, machine learning competition code, algorithm implementations and reading material.
Stars: ✭ 137 (-1.44%)
Mutual labels:  deep-reinforcement-learning
Rl Quadcopter
Teach a Quadcopter How to Fly!
Stars: ✭ 124 (-10.79%)
Mutual labels:  deep-reinforcement-learning
Ml Agents
Unity Machine Learning Agents Toolkit
Stars: ✭ 12,134 (+8629.5%)
Mutual labels:  deep-reinforcement-learning
Drl Portfolio Management
CSCI 599 deep learning and its applications final project
Stars: ✭ 121 (-12.95%)
Mutual labels:  deep-reinforcement-learning
Advanced Deep Learning And Reinforcement Learning Deepmind
🎮 Advanced Deep Learning and Reinforcement Learning at UCL & DeepMind | YouTube videos 👉
Stars: ✭ 121 (-12.95%)
Mutual labels:  deep-reinforcement-learning
Deep Reinforcement Learning In Large Discrete Action Spaces
Implementation of the algorithm in Python 3, TensorFlow and OpenAI Gym
Stars: ✭ 132 (-5.04%)
Mutual labels:  deep-reinforcement-learning
Reinforcementlearning Atarigame
PyTorch LSTM RNN for reinforcement learning, playing Atari games from OpenAI Universe. Also uses Google DeepMind's Asynchronous Advantage Actor-Critic (A3C) algorithm, which is more efficient than DQN and supersedes it. Can play many games.
Stars: ✭ 118 (-15.11%)
Mutual labels:  deep-reinforcement-learning
Policy Gradient
Minimal Monte Carlo Policy Gradient (REINFORCE) Algorithm Implementation in Keras
Stars: ✭ 135 (-2.88%)
Mutual labels:  deep-reinforcement-learning
Tetris Ai
A deep reinforcement learning bot that plays tetris
Stars: ✭ 109 (-21.58%)
Mutual labels:  deep-reinforcement-learning
A Deep Rl Approach For Sdn Routing Optimization
A Deep-Reinforcement Learning Approach for Software-Defined Networking Routing Optimization
Stars: ✭ 125 (-10.07%)
Mutual labels:  deep-reinforcement-learning
Finrl Library
FinRL: Financial Reinforcement Learning Framework. Please star. 🔥
Stars: ✭ 3,037 (+2084.89%)
Mutual labels:  deep-reinforcement-learning
Deep Qlearning Agent For Traffic Signal Control
A framework where a deep Q-Learning Reinforcement Learning agent tries to choose the correct traffic light phase at an intersection to maximize traffic efficiency.
Stars: ✭ 136 (-2.16%)
Mutual labels:  deep-reinforcement-learning
Keras Rl2
Reinforcement learning with TensorFlow 2 Keras
Stars: ✭ 134 (-3.6%)
Mutual labels:  deep-reinforcement-learning

d3rlpy: An offline deep reinforcement learning library


d3rlpy is an offline deep reinforcement learning library for practitioners and researchers.

import d3rlpy

# MDPDataset takes NumPy arrays of state transitions (observations, actions, rewards, episode-terminal flags)
dataset = d3rlpy.dataset.MDPDataset(observations, actions, rewards, terminals)

# train offline deep RL
cql = d3rlpy.algos.CQL()
cql.fit(dataset.episodes)

# ready to control (x is a batch of observations)
actions = cql.predict(x)

Documentation: https://d3rlpy.readthedocs.io

key features

⚡️ Most Practical RL Library Ever

  • offline RL: d3rlpy supports state-of-the-art offline RL algorithms. Offline RL is extremely powerful when online interaction is not feasible during training (e.g. robotics, medical).
  • online RL: d3rlpy also supports conventional state-of-the-art online training algorithms without any compromise, so you can solve any kind of RL problem with d3rlpy alone (a minimal sketch contrasting the two entry points follows this list).
  • advanced engineering: d3rlpy is designed for fast and memory-efficient training. For example, you can train on Atari environments with 4x less memory while matching the speed of the fastest RL libraries.
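
The two entry points mentioned above are fit for offline training and fit_online for online training. A minimal sketch, reusing the dataset, env and buffer objects prepared in the examples further below:

import d3rlpy

# offline RL: learn purely from a logged dataset (dataset as in the MuJoCo example below)
cql = d3rlpy.algos.CQL()
cql.fit(dataset.episodes)

# online RL: learn by interacting with an environment (env and buffer as in the online example below)
sac = d3rlpy.algos.SAC()
sac.fit_online(env, buffer, n_steps=100000)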

🔰 Easy-To-Use API

  • zero knowledge of DL libraries: d3rlpy provides many state-of-the-art algorithms through intuitive APIs. You can become an RL engineer even without knowing how to use deep learning libraries.
  • scikit-learn compatibility: d3rlpy is not only easy to use, but also fully compatible with the scikit-learn API, so you can maximize your productivity with scikit-learn's utilities (see the sketch after this list).
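
As a small illustration of the scikit-learn compatibility, the same train_test_split and scorer utilities used in the Atari example below work with any MDPDataset. A minimal sketch, assuming dataset has been prepared as in the examples further below:

import d3rlpy
from sklearn.model_selection import train_test_split

# an MDPDataset can be split like an ordinary sequence of episodes
train_episodes, test_episodes = train_test_split(dataset, test_size=0.2)

cql = d3rlpy.algos.CQL()
cql.fit(train_episodes,
        eval_episodes=test_episodes,
        n_epochs=10,
        scorers={'td_error': d3rlpy.metrics.scorer.td_error_scorer})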

🚀 Beyond State-Of-The-Art

  • distributional Q function: d3rlpy is the first library to support distributional Q functions in all algorithms. Distributional Q functions are known to be a very powerful approach to achieving state-of-the-art performance.
  • many tweak options: d3rlpy is also the first to support N-step TD backups, ensemble value functions and data augmentation in all algorithms, taking you beyond what standard implementations offer (a configuration sketch follows this list).
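
These options are exposed as constructor arguments. A minimal sketch; only q_func_factory appears verbatim in the Atari example below, so treat n_steps and n_critics as assumed parameter names and check the documentation for the exact spelling:

import d3rlpy

cql = d3rlpy.algos.CQL(q_func_factory='qr',  # distributional (quantile regression) Q function
                       n_steps=3,            # assumed name: N-step TD backup
                       n_critics=2)          # assumed name: ensemble of Q functions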

installation

d3rlpy supports Linux, macOS and Windows.

PyPI (recommended)


$ pip install d3rlpy

Anaconda


$ conda install -c conda-forge d3rlpy

Docker


$ docker run -it --gpus all --name d3rlpy takuseno/d3rlpy:latest bash

build from source (try this if a core-dump error occurs)

$ git clone https://github.com/takuseno/d3rlpy
$ cd d3rlpy
$ pip install Cython numpy # if not installed yet
$ pip install -e .

supported algorithms

algorithm: discrete control / continuous control / offline RL?

Behavior Cloning (supervised learning): ✅ / ✅ / –
Deep Q-Network (DQN): ✅ / ⛔️ / –
Double DQN: ✅ / ⛔️ / –
Deep Deterministic Policy Gradients (DDPG): ⛔️ / ✅ / –
Twin Delayed Deep Deterministic Policy Gradients (TD3): ⛔️ / ✅ / –
Soft Actor-Critic (SAC): ✅ / ✅ / –
Batch Constrained Q-learning (BCQ): ✅ / ✅ / ✅
Bootstrapping Error Accumulation Reduction (BEAR): ⛔️ / ✅ / ✅
Advantage-Weighted Regression (AWR): ✅ / ✅ / ✅
Conservative Q-Learning (CQL) (recommended): ✅ / ✅ / ✅
Advantage Weighted Actor-Critic (AWAC): ⛔️ / ✅ / ✅
Critic Regularized Regression (CRR): ⛔️ / ✅ / ✅
Policy in Latent Action Space (PLAS): ⛔️ / ✅ / ✅
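
In the API, the discrete-control variants carry a Discrete prefix, as with DiscreteCQL in the Atari example below. A minimal sketch; the prefix pattern for algorithms other than CQL is an assumption:

import d3rlpy

# continuous action spaces (e.g. MuJoCo, PyBullet)
cql = d3rlpy.algos.CQL()

# discrete action spaces (e.g. Atari 2600)
discrete_cql = d3rlpy.algos.DiscreteCQL()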

supported Q functions

other features

Basically, all features are available with every algorithm.

  • [x] evaluation metrics in a scikit-learn scorer function style
  • [x] export greedy-policy as TorchScript or ONNX (see the sketch after this list)
  • [x] parallel cross validation with multiple GPU
  • [x] model-based algorithm
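
As an illustration of the greedy-policy export above, a trained algorithm can be dumped for deployment. A minimal sketch; save_policy exists in d3rlpy, but the extension-based format selection shown here is an assumption, so check the documentation:

# export the greedy policy of a trained algorithm (cql from the examples below)
cql.save_policy('policy.pt')    # TorchScript (assumed to be selected by the .pt extension)
cql.save_policy('policy.onnx')  # ONNX (assumed to be selected by the .onnx extension)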

examples

MuJoCo

import d3rlpy

# prepare dataset
dataset, env = d3rlpy.datasets.get_d4rl('hopper-medium-v0')

# prepare algorithm
cql = d3rlpy.algos.CQL(use_gpu=True)

# train
cql.fit(dataset,
        eval_episodes=dataset,
        n_epochs=100,
        scorers={
            'environment': d3rlpy.metrics.scorer.evaluate_on_environment(env),
            'td_error': d3rlpy.metrics.scorer.td_error_scorer
        })

See more datasets at d4rl.

Atari 2600

import d3rlpy
from sklearn.model_selection import train_test_split

# prepare dataset
dataset, env = d3rlpy.datasets.get_atari('breakout-expert-v0')

# split dataset
train_episodes, test_episodes = train_test_split(dataset, test_size=0.1)

# prepare algorithm
cql = d3rlpy.algos.DiscreteCQL(n_frames=4, q_func_factory='qr', scaler='pixel', use_gpu=True)

# start training
cql.fit(train_episodes,
        eval_episodes=test_episodes,
        n_epochs=100,
        scorers={
            'environment': d3rlpy.metrics.scorer.evaluate_on_environment(env),
            'td_error': d3rlpy.metrics.scorer.td_error_scorer
        })

See more Atari datasets at d4rl-atari.

PyBullet

import d3rlpy

# prepare dataset
dataset, env = d3rlpy.datasets.get_pybullet('hopper-bullet-mixed-v0')

# prepare algorithm
cql = d3rlpy.algos.CQL(use_gpu=True)

# start training
cql.fit(dataset,
        eval_episodes=dataset,
        n_epochs=100,
        scorers={
            'environment': d3rlpy.metrics.scorer.evaluate_on_environment(env),
            'td_error': d3rlpy.metrics.scorer.td_error_scorer
        })

See more PyBullet datasets at d4rl-pybullet.

Online Training

import d3rlpy
import gym

# prepare environment
env = gym.make('HopperBulletEnv-v0')
eval_env = gym.make('HopperBulletEnv-v0')

# prepare algorithm
sac = d3rlpy.algos.SAC(use_gpu=True)

# prepare replay buffer
buffer = d3rlpy.online.buffers.ReplayBuffer(maxlen=1000000, env=env)

# start training
sac.fit_online(env, buffer, n_steps=1000000, eval_env=eval_env)
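
A possible follow-up, shown only as a hedged sketch: experience collected online can be converted back into an MDPDataset and reused for offline training. The to_mdp_dataset helper name is an assumption; check the documentation:

# assumed API: turn the online replay buffer into an offline dataset
dataset = buffer.to_mdp_dataset()
cql = d3rlpy.algos.CQL()
cql.fit(dataset.episodes)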

tutorials

Try a cartpole example on Google Colaboratory!

  • offline RL tutorial: Open In Colab
  • online RL tutorial: Open In Colab

contributions

Any kind of contribution to d3rlpy would be highly appreciated! Please check the contribution guide.

The release planning can be checked at milestones.

community

Chat: Gitter
Issues: GitHub Issues
Discussion: GitHub Discussions

family projects

d4rl-pybullet: offline RL datasets of PyBullet tasks
d4rl-atari: a d4rl-style library of Google's Atari 2600 datasets
MINERVA: an out-of-the-box GUI tool for offline RL

citation

@misc{seno2020d3rlpy,
  author = {Takuma Seno},
  title = {d3rlpy: An offline deep reinforcement learning library},
  year = {2020},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/takuseno/d3rlpy}}
}

acknowledgement

This work is supported by the Information-technology Promotion Agency, Japan (IPA), Exploratory IT Human Resources Project (MITOU Program) in fiscal year 2020.
