medipixel / rl_algorithms

License: MIT
Structural implementation of key RL algorithms

Programming Languages

  • python: 139,335 projects - #7 most used programming language
  • python3: 1,442 projects

Projects that are alternatives to or similar to rl_algorithms

Pytorch Rl
This repository contains model-free deep reinforcement learning algorithms implemented in Pytorch
Stars: ✭ 394 (+11.93%)
Mutual labels:  gym, reinforcement-learning, dqn, policy-gradient
Deep Rl Keras
Keras Implementation of popular Deep RL Algorithms (A3C, DDQN, DDPG, Dueling DDQN)
Stars: ✭ 395 (+12.22%)
Mutual labels:  gym, reinforcement-learning, dqn, policy-gradient
Reinforcement learning
Reinforcement learning tutorials
Stars: ✭ 82 (-76.7%)
Mutual labels:  reinforcement-learning, dqn, policy-gradient
Slm Lab
Modular Deep Reinforcement Learning framework in PyTorch. Companion library of the book "Foundations of Deep Reinforcement Learning".
Stars: ✭ 904 (+156.82%)
Mutual labels:  reinforcement-learning, dqn, policy-gradient
Torchrl
Highly Modular and Scalable Reinforcement Learning
Stars: ✭ 102 (-71.02%)
Mutual labels:  reinforcement-learning, dqn, policy-gradient
Reinforcement Learning With Tensorflow
Simple reinforcement learning tutorials by 莫烦Python (Morvan Python), a Chinese-language AI course
Stars: ✭ 6,948 (+1873.86%)
Mutual labels:  reinforcement-learning, dqn, policy-gradient
Explorer
Explorer is a PyTorch reinforcement learning framework for exploring new ideas.
Stars: ✭ 54 (-84.66%)
Mutual labels:  dqn, gym, policy-gradient
Reinforcement learning
Implementations of basic reinforcement learning algorithms
Stars: ✭ 100 (-71.59%)
Mutual labels:  reinforcement-learning, dqn, policy-gradient
Easy Rl
A reinforcement learning tutorial in Chinese; read online at: https://datawhalechina.github.io/easy-rl/
Stars: ✭ 3,004 (+753.41%)
Mutual labels:  reinforcement-learning, dqn, policy-gradient
Reinforcement Learning
Minimal and Clean Reinforcement Learning Examples
Stars: ✭ 2,863 (+713.35%)
Mutual labels:  reinforcement-learning, dqn, policy-gradient
A2c
A Clearer and Simpler Synchronous Advantage Actor Critic (A2C) Implementation in TensorFlow
Stars: ✭ 169 (-51.99%)
Mutual labels:  gym, reinforcement-learning, policy-gradient
Torchrl
Pytorch Implementation of Reinforcement Learning Algorithms ( Soft Actor Critic(SAC)/ DDPG / TD3 /DQN / A2C/ PPO / TRPO)
Stars: ✭ 90 (-74.43%)
Mutual labels:  gym, reinforcement-learning, dqn
Atari
AI research environment for the Atari 2600 games 🤖.
Stars: ✭ 174 (-50.57%)
Mutual labels:  gym, reinforcement-learning, dqn
omd
JAX code for the paper "Control-Oriented Model-Based Reinforcement Learning with Implicit Differentiation"
Stars: ✭ 43 (-87.78%)
Mutual labels:  dqn, gym
Deep-Reinforcement-Learning-With-Python
Master classic RL, deep RL, distributional RL, inverse RL, and more using OpenAI Gym and TensorFlow with extensive Math
Stars: ✭ 222 (-36.93%)
Mutual labels:  dqn, policy-gradient
Deep-rl-mxnet
Mxnet implementation of Deep Reinforcement Learning papers, such as DQN, PG, DDPG, PPO
Stars: ✭ 26 (-92.61%)
Mutual labels:  dqn, policy-gradient
Ma Gym
A collection of multi-agent environments based on OpenAI Gym.
Stars: ✭ 226 (-35.8%)
Mutual labels:  gym, reinforcement-learning
Paddle-RLBooks
Paddle-RLBooks is a reinforcement learning code study guide based on pure PaddlePaddle.
Stars: ✭ 113 (-67.9%)
Mutual labels:  dqn, policy-gradient
Trpo
Trust Region Policy Optimization with TensorFlow and OpenAI Gym
Stars: ✭ 343 (-2.56%)
Mutual labels:  reinforcement-learning, policy-gradient
Gym Gazebo2
gym-gazebo2 is a toolkit for developing and comparing reinforcement learning algorithms using ROS 2 and Gazebo
Stars: ✭ 257 (-26.99%)
Mutual labels:  gym, reinforcement-learning

Badges: Language grade: Python · License: MIT · Code style: black · All Contributors

Welcome!

This repository contains reinforcement learning algorithms that are used for research at Medipixel. The source code is updated frequently, and we warmly welcome external contributors! :)

Demos: BC agent on LunarLanderContinuous-v2 · RainbowIQN agent on PongNoFrameskip-v4 · SAC agent on Reacher-v2

Contributors

Thanks goes to these wonderful people (emoji key):


  • Jinwoo Park (Curt) 💻
  • Kyunghwan Kim 💻
  • darthegg 💻
  • Mincheol Kim 💻
  • 김민섭 (Minseop Kim) 💻
  • Leejin Jung 💻
  • Chris Yoon 💻
  • Jiseong Han 💻
  • Sehyun Hwang 🚧

This project follows the all-contributors specification.

Algorithms

  1. Advantage Actor-Critic (A2C)
  2. Deep Deterministic Policy Gradient (DDPG)
  3. Proximal Policy Optimization Algorithms (PPO)
  4. Twin Delayed Deep Deterministic Policy Gradient Algorithm (TD3)
  5. Soft Actor Critic Algorithm (SAC)
  6. Behaviour Cloning (BC with DDPG, SAC)
  7. From Demonstrations (DDPGfD, SACfD, DQfD)
  8. Rainbow DQN
  9. Rainbow IQN (without DuelingNet) - DuelingNet degrades performance
  10. Rainbow IQN (with ResNet)
  11. Recurrent Replay DQN (R2D1)
  12. Distributed Prioritized Experience Replay (Ape-X)
  13. Policy Distillation
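
To give a flavor of what these implementations share, the sketch below shows the core actor-critic loss behind methods such as A2C (a minimal PyTorch illustration with made-up numbers, not this repository's code):

import torch

log_prob = torch.tensor([-1.2], requires_grad=True)  # log pi(a|s) from the actor
value = torch.tensor([0.5], requires_grad=True)      # V(s) from the critic
ret = torch.tensor([1.0])                            # bootstrapped return target

advantage = (ret - value).detach()            # A(s, a); detached so the actor loss
                                              # does not update the critic
policy_loss = -(log_prob * advantage).mean()  # policy-gradient (actor) loss
value_loss = (ret - value).pow(2).mean()      # critic regression loss
(policy_loss + 0.5 * value_loss).backward()   # 0.5 is a common value-loss weight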

Performance

We have tested each algorithm on some of the following environments.

❗Please note that this won't be frequently updated.

PongNoFrameskip-v4

RainbowIQN learns the game incredibly fast! It achieves the perfect score (21) within 100 episodes! The idea of RainbowIQN was roughly suggested by W. Dabney et al.

See the W&B log for more details. (Performance measured at commit 4248057.)

pong_dqn

The performance and learning speed of RainbowIQN with ResNet were similar to those of RainbowIQN. We also confirmed that R2D1 (w/ Dueling, PER) converges well in the Pong environment, though not as fast as RainbowIQN (in terms of update steps).

Although we were only able to test Ape-X DQN (w/ Dueling) with 4 workers due to limited computing power, we observed a significant speed-up in carrying out update steps (with batch size 512). Ape-X DQN learns the Pong game in about 2 hours, compared to 4 hours for serial Dueling DQN.

See the W&B log for more details. (Performance measured at commit 9e897ad.)

pong dqn with resnet & rnn

apex dqn

LunarLander-v2 / LunarLanderContinuous-v2

We used these environments only for a quick verification of each algorithm, so some of the experiments may not show the best possible performance.

👇 Click the following lines to see the figures.
LunarLander-v2: RainbowDQN, RainbowDQfD, R2D1


See the W&B log for more details. (Performance measured at commit 9e897ad.)

lunarlander-v2_dqn

LunarLanderContinuous-v2: A2C, PPO, DDPG, TD3, SAC


See the W&B log for more details. (Performance measured at commit 9e897ad.)

lunarlandercontinuous-v2_baselines

LunarLanderContinuous-v2: DDPG, DDPGfD, BC-DDPG


See the W&B log for more details. (Performance measured at commit 9e897ad.)

lunarlandercontinuous-v2_ddpg

LunarLanderContinuous-v2: SAC, SACfD, BC-SAC


See the W&B log for more details. (Performance measured at commit 9e897ad.)

lunarlandercontinuous-v2_sac

Reacher-v2

We reproduced the performance of DDPG, TD3, and SAC on Reacher-v2 (MuJoCo). They reach scores of around -3.5 to -4.5.

👇 Click the following line to see the figures.
Reacher-v2: DDPG, TD3, SAC


See W&B Log for more details.

reacher-v2_baselines

Getting started

Prerequisites

  • This repository is tested in an Anaconda virtual environment with Python 3.6.1+
    $ conda create -n rl_algorithms python=3.7.9
    $ conda activate rl_algorithms
    
  • In order to run MuJoCo environments (e.g. Reacher-v2), you need to acquire a MuJoCo license.

Installation

First, clone the repository.

git clone https://github.com/medipixel/rl_algorithms.git
cd rl_algorithms
For users

Install the packages required to execute the code; this includes running python setup.py install. Just type:

make dep
For developers

If you want to modify the code, you should configure the formatting and linting settings, which then run automatically whenever you commit code. Unlike the make dep command, this includes python setup.py develop. Just type:

make dev

After running make dev, you can validate the code with the following commands:

make format  # for formatting
make test  # for linting

Usages

You can train or test an algorithm on env_name if configs/env_name/algorithm.yaml exists. (configs/env_name/algorithm.yaml contains the hyperparameters.)

python run_env_name.py --cfg-path <config-path>

e.g. running Soft Actor-Critic on LunarLanderContinuous-v2:

python run_lunarlander_continuous_v2.py --cfg-path ./configs/lunarlander_continuous_v2/sac.yaml <other-options>

e.g. running a custom agent, if you have written your own config configs/env_name/ddpg-custom.yaml:

python run_env_name.py --cfg-path ./configs/env_name/ddpg-custom.yaml

You will see the agent run with the hyperparameters and model settings you configured.

Arguments for run-files

In addition, there are various arguments for running the algorithms. To check the options of a run file, use the command below (a combined usage example is shown after the list):

python <run-file> -h
  • --test
    • Start test mode (no training).
  • --off-render
    • Turn off rendering.
  • --log
    • Turn on logging using W&B.
  • --seed <int>
    • Set random seed.
  • --save-period <int>
    • Set the save period for model and optimizer parameters.
  • --max-episode-steps <int>
    • Set the maximum number of steps per episode. If the number is less than or equal to 0, the environment's default maximum is used.
  • --episode-num <int>
    • Set the number of episodes for training.
  • --render-after <int>
    • Start rendering after the given number of episodes.
  • --load-from <save-file-path>
    • Load the saved models and optimizers at the beginning.
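
For instance, a hypothetical invocation combining several of these flags (the seed and episode count here are arbitrary) might be:

python run_lunarlander_continuous_v2.py --cfg-path ./configs/lunarlander_continuous_v2/sac.yaml --seed 42 --episode-num 500 --off-render --log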

Show feature map with Grad-CAM and Saliency-map

You can visualize the feature maps that a trained agent extracts, using Grad-CAM (Gradient-weighted Class Activation Mapping) and saliency maps.

Grad-CAM combines feature maps using the gradient signal to produce a coarse localization map of the important regions in the image. You can use it by adding a Grad-CAM config and the --grad-cam flag when you run. For example:

python run_env_name.py --cfg-path <config-path> --test --grad-cam

The results will be rendered as follows:

You can also use a saliency map in a similar way to Grad-CAM, just by adding the --saliency-map flag. The saliency map needs trained weights, supplied via the --load-from flag.

python run_env_name.py --cfg-path <config-path> --load-from <save-file-path> --test --saliency-map

The saliency maps will be stored in data/saliency_map.

Both Grad-CAM and saliency maps can only be used with agents that have convolutional layers, such as DQN for the Pong environment. You can see feature maps for all the configured convolutional layers.
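
As a rough illustration of the saliency-map idea from Simonyan et al. (a minimal sketch with a hypothetical network, not this repository's implementation), the gradient of the greedy action's Q-value with respect to the input pixels yields a per-pixel importance map:

import torch
import torch.nn as nn

# Hypothetical small conv net standing in for a DQN feature extractor.
net = nn.Sequential(
    nn.Conv2d(4, 16, kernel_size=8, stride=4), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
    nn.Flatten(), nn.LazyLinear(6),  # e.g. 6 discrete actions
)

state = torch.randn(1, 4, 84, 84, requires_grad=True)  # stack of 4 frames
q_values = net(state)
q_values[0, q_values.argmax()].backward()   # gradient of the greedy action's Q-value
saliency = state.grad.abs().max(dim=1)[0]   # per-pixel importance, shape (1, 84, 84)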

Using policy distillation

The documentation on using policy distillation is kept separately in rl_algorithms/distillation/README.md.

W&B for logging

We use W&B for logging network parameters and other metrics. To enable logging, follow the steps below after installing the requirements:

  1. Create a wandb account
  2. Check your API key in settings, and log in to wandb from your terminal: $ wandb login API_KEY
  3. Initialize wandb: $ wandb init

For more details, read the W&B tutorial.
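
Once logged in, W&B logging from Python generally looks like the minimal sketch below (the project name and metric are hypothetical; when you pass --log, the run files take care of this for you):

import wandb

wandb.init(project="rl_algorithms")               # hypothetical project name
for step in range(3):
    wandb.log({"score": float(step)}, step=step)  # log a scalar per step
wandb.finish()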

Class Diagram

Class diagram at #135.

❗This won't be frequently updated.

RL_Algorithms_ClassDiagram

Citing the Project

To cite this repository in publications:

@misc{rl_algorithms,
  author = {Kim, Kyunghwan and Lee, Chaehyuk and Jeong, Euijin and Han, Jiseong and Kim, Minseop and Yoon, Chris and Kim, Mincheol and Park, Jinwoo},
  title = {Medipixel RL algorithms},
  year = {2020},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/medipixel/rl_algorithms}},
}

References

  1. T. P. Lillicrap et al., "Continuous control with deep reinforcement learning." arXiv preprint arXiv:1509.02971, 2015.
  2. J. Schulman et al., "Proximal Policy Optimization Algorithms." arXiv preprint arXiv:1707.06347, 2017.
  3. S. Fujimoto et al., "Addressing function approximation error in actor-critic methods." arXiv preprint arXiv:1802.09477, 2018.
  4. T. Haarnoja et al., "Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor." arXiv preprint arXiv:1801.01290, 2018.
  5. T. Haarnoja et al., "Soft Actor-Critic Algorithms and Applications." arXiv preprint arXiv:1812.05905, 2018.
  6. T. Schaul et al., "Prioritized Experience Replay." arXiv preprint arXiv:1511.05952, 2015.
  7. M. Andrychowicz et al., "Hindsight Experience Replay." arXiv preprint arXiv:1707.01495, 2017.
  8. A. Nair et al., "Overcoming Exploration in Reinforcement Learning with Demonstrations." arXiv preprint arXiv:1709.10089, 2017.
  9. M. Vecerik et al., "Leveraging Demonstrations for Deep Reinforcement Learning on Robotics Problems with Sparse Rewards." arXiv preprint arXiv:1707.08817, 2017.
  10. V. Mnih et al., "Human-level control through deep reinforcement learning." Nature, 518 (7540):529–533, 2015.
  11. H. van Hasselt et al., "Deep Reinforcement Learning with Double Q-learning." arXiv preprint arXiv:1509.06461, 2015.
  12. Z. Wang et al., "Dueling Network Architectures for Deep Reinforcement Learning." arXiv preprint arXiv:1511.06581, 2015.
  13. T. Hester et al., "Deep Q-learning from Demonstrations." arXiv preprint arXiv:1704.03732, 2017.
  14. M. G. Bellemare et al., "A Distributional Perspective on Reinforcement Learning." arXiv preprint arXiv:1707.06887, 2017.
  15. M. Fortunato et al., "Noisy Networks for Exploration." arXiv preprint arXiv:1706.10295, 2017.
  16. M. Hessel et al., "Rainbow: Combining Improvements in Deep Reinforcement Learning." arXiv preprint arXiv:1710.02298, 2017.
  17. W. Dabney et al., "Implicit Quantile Networks for Distributional Reinforcement Learning." arXiv preprint arXiv:1806.06923, 2018.
  18. R. R. Selvaraju et al., "Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization." arXiv preprint arXiv:1610.02391, 2016.
  19. K. He et al., "Deep Residual Learning for Image Recognition." arXiv preprint arXiv:1512.03385, 2015.
  20. S. Kapturowski et al., "Recurrent Experience Replay in Distributed Reinforcement Learning." in International Conference on Learning Representations, 2019. https://openreview.net/forum?id=r1lyTjAqYX
  21. D. Horgan et al., "Distributed Prioritized Experience Replay." in International Conference on Learning Representations, 2018.
  22. K. Simonyan et al., "Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps." arXiv preprint arXiv:1312.6034, 2013.