
xlnwel / model-free-algorithms

Licence: other
TD3, SAC, IQN, Rainbow, PPO, Ape-X, etc. in TF1.x

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to model-free-algorithms

ReinforcementLearningZoo.jl
juliareinforcementlearning.org/
Stars: ✭ 46 (-17.86%)
Mutual labels:  rainbow, ddpg, sac, ppo, td3
ElegantRL
Scalable and Elastic Deep Reinforcement Learning Using PyTorch. Please star. 🔥
Stars: ✭ 2,074 (+3603.57%)
Mutual labels:  ddpg, sac, ppo, td3, model-free-rl
LWDRLC
Lightweight deep RL Libraray for continuous control.
Stars: ✭ 14 (-75%)
Mutual labels:  ddpg, sac, ppo, td3
Tianshou
An elegant PyTorch deep reinforcement learning library.
Stars: ✭ 4,109 (+7237.5%)
Mutual labels:  ddpg, sac, ppo, td3
Rainy
☔ Deep RL agents with PyTorch☔
Stars: ✭ 39 (-30.36%)
Mutual labels:  ddpg, sac, ppo, td3
Deep-Reinforcement-Learning-With-Python
Master classic RL, deep RL, distributional RL, inverse RL, and more using OpenAI Gym and TensorFlow with extensive Math
Stars: ✭ 222 (+296.43%)
Mutual labels:  ddpg, sac, ppo, td3
Deeprl
Modularized Implementation of Deep RL Algorithms in PyTorch
Stars: ✭ 2,640 (+4614.29%)
Mutual labels:  rainbow, ddpg, ppo, td3
Minimalrl
Implementations of basic RL algorithms with minimal lines of codes! (pytorch based)
Stars: ✭ 2,051 (+3562.5%)
Mutual labels:  ddpg, sac, ppo
mujoco-benchmark
Provide full reinforcement learning benchmark on mujoco environments, including ddpg, sac, td3, pg, a2c, ppo, library
Stars: ✭ 101 (+80.36%)
Mutual labels:  ddpg, sac, ppo
Paddle-RLBooks
Paddle-RLBooks is a reinforcement learning code study guide based on pure PaddlePaddle.
Stars: ✭ 113 (+101.79%)
Mutual labels:  ddpg, sac, td3
TF2-RL
Reinforcement learning algorithms implemented for Tensorflow 2.0+ [DQN, DDPG, AE-DDPG, SAC, PPO, Primal-Dual DDPG]
Stars: ✭ 160 (+185.71%)
Mutual labels:  ddpg, sac, ppo
Deep Reinforcement Learning Algorithms
31 projects in the framework of Deep Reinforcement Learning algorithms: Q-learning, DQN, PPO, DDPG, TD3, SAC, A2C and others. Each project is provided with a detailed training log.
Stars: ✭ 167 (+198.21%)
Mutual labels:  ddpg, ppo
Machine Learning Is All You Need
🔥🌟《Machine Learning 格物志》: ML + DL + RL basic codes and notes by sklearn, PyTorch, TensorFlow, Keras & the most important, from scratch!💪 This repository is ALL You Need!
Stars: ✭ 173 (+208.93%)
Mutual labels:  ddpg, ppo
Deep-Reinforcement-Learning-for-Automated-Stock-Trading-Ensemble-Strategy-ICAIF-2020
Live Trading. Please star.
Stars: ✭ 1,251 (+2133.93%)
Mutual labels:  ddpg, ppo
pomdp-baselines
Simple (but often Strong) Baselines for POMDPs in PyTorch - ICML 2022
Stars: ✭ 162 (+189.29%)
Mutual labels:  sac, td3
Machin
Reinforcement learning library(framework) designed for PyTorch, implements DQN, DDPG, A2C, PPO, SAC, MADDPG, A3C, APEX, IMPALA ...
Stars: ✭ 145 (+158.93%)
Mutual labels:  ddpg, ppo
Torchrl
Pytorch Implementation of Reinforcement Learning Algorithms ( Soft Actor Critic(SAC)/ DDPG / TD3 /DQN / A2C/ PPO / TRPO)
Stars: ✭ 90 (+60.71%)
Mutual labels:  ddpg, ppo
Pytorch Drl
PyTorch implementations of various Deep Reinforcement Learning (DRL) algorithms for both single agent and multi-agent.
Stars: ✭ 233 (+316.07%)
Mutual labels:  ddpg, ppo
SRLF
Simple Reinforcement Learning Framework
Stars: ✭ 24 (-57.14%)
Mutual labels:  rainbow, ddpg
Reinforcement Learning With Tensorflow
Simple Reinforcement Learning tutorials (莫烦Python's AI tutorials in Chinese)
Stars: ✭ 6,948 (+12307.14%)
Mutual labels:  ddpg, ppo

Status: Archive (code is provided as-is, no updates expected)

Note: Please refer to my new repo for reinforcement learning algorithms in TF2.x.

Algorithms Implemented

Algorithms are implemented in the algo directory.

Overall Architecture

This repository is structured to produce a clean TensorBoard graph, which is useful for debugging.

[Architecture diagram]

A typical graph looks like this:

[TensorBoard graph screenshot]
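The clean graph largely comes from wrapping each component in TensorFlow name/variable scopes. Below is a minimal sketch of that idea in TF1.x; the module names, layer sizes, and tensors are illustrative, not the repository's actual code.

import tensorflow as tf  # TF1.x

def mlp(x, units, scope):
    # Group all layers of a sub-network under one scope so that
    # TensorBoard collapses them into a single node.
    with tf.variable_scope(scope, reuse=tf.AUTO_REUSE):
        for i, n in enumerate(units[:-1]):
            x = tf.layers.dense(x, n, activation=tf.nn.relu, name=f'dense_{i}')
        return tf.layers.dense(x, units[-1], name='out')

obs = tf.placeholder(tf.float32, [None, 8], name='observation')
with tf.variable_scope('actor'):
    action = mlp(obs, [64, 64, 2], scope='pi')
with tf.variable_scope('critic'):
    q = mlp(tf.concat([obs, action], axis=1), [64, 64, 1], scope='q')

# Writing the graph definition makes the scoped structure visible in TensorBoard.
writer = tf.summary.FileWriter('logs', tf.get_default_graph())
writer.close()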

Notes

Distributed algorithms are implemented using Ray, a flexible, high-performance distributed execution framework.
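For a sense of how Ray is typically used for such worker/learner splits, here is a small self-contained sketch; the class and method names are illustrative and not this repository's API.

import ray

ray.init()

@ray.remote
class Worker:
    """Collects experience with its own copy of the environment."""
    def __init__(self, worker_id):
        self.worker_id = worker_id

    def rollout(self, weights):
        # In the real code this would sync weights, step the environment,
        # and return transitions; here we just return a placeholder.
        return {'worker': self.worker_id, 'transitions': []}

workers = [Worker.remote(i) for i in range(4)]
# Launch rollouts in parallel and block until all of them finish.
results = ray.get([w.rollout.remote(weights=None) for w in workers])
print(len(results))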

Due to the lack of a MuJoCo license, all algorithms for continuous control are first tested on LunarLanderContinuous-v2 and then on BipedalWalker-v2 from OpenAI's Gym, and they solve both environments.

Rainbow and IQN are tested on CartPole-v0 and steadily solve it. For Rainbow and IQN on Atari games, please refer to my other project.
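If you just want to sanity-check that these Gym environments are installed locally (the LunarLander and BipedalWalker environments additionally require Box2D), a quick random-action rollout like the following can be used; it is unrelated to the trained agents.

import gym

for env_id in ['LunarLanderContinuous-v2', 'BipedalWalker-v2', 'CartPole-v0']:
    env = gym.make(env_id)
    obs = env.reset()
    done, total_reward = False, 0.0
    while not done:
        # Random actions, just to confirm the environment steps correctly.
        obs, reward, done, info = env.step(env.action_space.sample())
        total_reward += reward
    print(env_id, total_reward)
    env.close()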

Performance figures and some further experimental results are recorded in the on-policy algorithms and off-policy algorithms pages.

The best arguments are kept in args.yaml in each algorithm folder. If you want to experiment with some arguments, do not modify args.yaml directly; it is better to first pass the experimental arguments to gs defined in run/train.py to verify that they actually improve performance.
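Purely as an illustration of the override idea (the file path and keys below are assumptions, and the exact structure expected by gs is not shown here), one could load and patch the arguments in code instead of editing args.yaml:

import yaml

# Illustration only: load the per-algorithm arguments and apply experimental
# overrides in code. The path and keys are assumptions; the real run/train.py
# passes such overrides through its `gs` mechanism.
with open('algo/sac/args.yaml') as f:
    args = yaml.safe_load(f)
args.update({'learning_rate': 3e-4, 'gamma': 0.99})
print(args)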

Requirements

It is recommended to install TensorFlow from source following these instructions to gain some CPU speedup and other potential benefits.

# Minimal requirements to run the algorithms. Tested on Ubuntu 18.04.2, using Tensorflow 1.13.1.
conda create -n gym python
conda activate gym
pip install -r requirements.txt
# Install tensorflow-gpu, or build TensorFlow from source as the instructions above suggest
pip install tensorflow-gpu

Running

# Silence tensorflow debug message
export TF_CPP_MIN_LOG_LEVEL=3
# When running distributed algorithms, restrict numpy to one core
# Use numpy.__config__.show() to ensure your numpy is using OpenBLAS
# For MKL and detailed reasoning, see https://ray.readthedocs.io/en/latest/example-rl-pong.html?highlight=openblas#the-distributed-version
export OPENBLAS_NUM_THREADS=1

# For full argument specification, please refer to run/train.py
python run/train.py -a sac

To add a monitor that saves videos automatically, set the argument log_video to True in args.yaml.
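With the Gym versions contemporary to this repository, such video saving is typically backed by gym.wrappers.Monitor; a standalone sketch of the same idea (not the repository's exact wiring, and requiring ffmpeg for video encoding):

import gym
from gym import wrappers

env = gym.make('LunarLanderContinuous-v2')
# Wrap the environment so that every episode is recorded to the videos/ folder.
env = wrappers.Monitor(env, directory='videos', force=True,
                       video_callable=lambda episode_id: True)
obs = env.reset()
done = False
while not done:
    obs, reward, done, info = env.step(env.action_space.sample())
env.close()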

Paper References

Timothy P. Lillicrap et al. Continuous Control with Deep Reinforcement Learning

Matteo Hessel et al. Rainbow: Combining Improvements in Deep Reinforcement Learning

Marc G. Bellemare et al. A Distributional Perspective on Reinforcement Learning

Scott Fujimoto et al. Addressing Function Approximation Error in Actor-Critic Methods (TD3)

Tuomas Haarnoja et al. Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor

Tuomas Haarnoja et al. Soft Actor-Critic Algorithms and Applications

Dan Horgan et al. Distributed Prioritized Experience Replay

Hado van Hasselt et al. Deep Reinforcement Learning with Double Q-Learning

Tom Schaul et al. Prioritized Experience Replay

Meire Fortunato et al. Noisy Networks For Exploration

Ziyu Wang et al. Dueling Network Architectures for Deep Reinforcement Learning

Will Dabney et al. Implicit Quantile Networks for Distributional Reinforcement Learning

Berkeley CS294-112

Code References

OpenAI Baselines

Homework of Berkeley CS294-112

Google Dopamine
