
navneet-nmk / Pytorch-RL-CPP

Licence: other
A Repository with C++ implementations of Reinforcement Learning Algorithms (Pytorch)

Programming Languages

C++
36643 projects - #6 most used programming language
CMake
9771 projects

Projects that are alternatives of or similar to Pytorch-RL-CPP

Pytorch Rl
This repository contains model-free deep reinforcement learning algorithms implemented in Pytorch
Stars: ✭ 394 (+439.73%)
Mutual labels:  openai-gym, gym, vae, mujoco
Dmc2gym
OpenAI Gym wrapper for the DeepMind Control Suite
Stars: ✭ 75 (+2.74%)
Mutual labels:  openai-gym, gym, deepmind
learning-to-drive-in-5-minutes
Implementation of reinforcement learning approach to make a car learn to drive smoothly in minutes
Stars: ✭ 227 (+210.96%)
Mutual labels:  openai, gym, vae
ddrl
Deep Developmental Reinforcement Learning
Stars: ✭ 27 (-63.01%)
Mutual labels:  openai-gym, openai, mujoco
CartPole
Run OpenAI Gym on a Server
Stars: ✭ 16 (-78.08%)
Mutual labels:  openai-gym, openai, gym
Deep Reinforcement Learning
Repo for the Deep Reinforcement Learning Nanodegree program
Stars: ✭ 4,012 (+5395.89%)
Mutual labels:  openai-gym, reinforcement-learning-algorithms
Mushroom Rl
Python library for Reinforcement Learning.
Stars: ✭ 442 (+505.48%)
Mutual labels:  openai-gym, mujoco
Super Mario Bros Ppo Pytorch
Proximal Policy Optimization (PPO) algorithm for Super Mario Bros
Stars: ✭ 649 (+789.04%)
Mutual labels:  openai-gym, gym
Deterministic Gail Pytorch
PyTorch implementation of Deterministic Generative Adversarial Imitation Learning (GAIL) for Off Policy learning
Stars: ✭ 44 (-39.73%)
Mutual labels:  openai-gym, gym
Deep-Reinforcement-Learning-CS285-Pytorch
Solutions of assignments of Deep Reinforcement Learning course presented by the University of California, Berkeley (CS285) in Pytorch framework
Stars: ✭ 104 (+42.47%)
Mutual labels:  openai-gym, mujoco
Rl Baselines Zoo
A collection of 100+ pre-trained RL agents using Stable Baselines, training and hyperparameter optimization included.
Stars: ✭ 839 (+1049.32%)
Mutual labels:  openai-gym, gym
Stable Baselines
Mirror of Stable-Baselines: a fork of OpenAI Baselines, implementations of reinforcement learning algorithms
Stars: ✭ 115 (+57.53%)
Mutual labels:  openai-gym, gym
Ma Gym
A collection of multi agent environments based on OpenAI gym.
Stars: ✭ 226 (+209.59%)
Mutual labels:  openai-gym, gym
Rl Book
Source codes for the book "Reinforcement Learning: Theory and Python Implementation"
Stars: ✭ 464 (+535.62%)
Mutual labels:  openai-gym, gym
Deep-Q-Networks
Implementation of Deep/Double Deep/Dueling Deep Q networks for playing Atari games using Keras and OpenAI gym
Stars: ✭ 38 (-47.95%)
Mutual labels:  openai-gym, atari
Coach
Reinforcement Learning Coach by Intel AI Lab enables easy experimentation with state of the art Reinforcement Learning algorithms
Stars: ✭ 2,085 (+2756.16%)
Mutual labels:  openai-gym, mujoco
yarll
Combining deep learning and reinforcement learning.
Stars: ✭ 84 (+15.07%)
Mutual labels:  openai-gym, reinforcement-learning-algorithms
pytorch-rl
Pytorch Implementation of RL algorithms
Stars: ✭ 15 (-79.45%)
Mutual labels:  openai-gym, reinforcement-learning-algorithms
jax-rl
JAX implementations of core Deep RL algorithms
Stars: ✭ 61 (-16.44%)
Mutual labels:  deepmind, mujoco
Drqn Tensorflow
Deep recurrent Q Learning using Tensorflow, openai/gym and openai/retro
Stars: ✭ 127 (+73.97%)
Mutual labels:  openai-gym, gym

Pytorch-RL-CPP

A Repository with C++ implementations of Reinforcement Learning Algorithms (Pytorch)

RlCpp is a reinforcement learning framework, written using the PyTorch C++ frontend.

RlCpp aims to be an extensible, reasonably optimized, production-ready framework for using reinforcement learning in projects where Python isn't viable. It should be ready to use in desktop applications on users' computers, with minimal setup required on the user's side.

The environment used is the C++ port of the Arcade Learning Environment (ALE).
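
To give a concrete picture of what driving the C++ ALE looks like, here is a minimal interaction loop. The class and method names follow the public ale_interface.hpp API; the file name, ROM path, and random policy are placeholders, and the actual training code in this repository may differ (newer ALE releases also wrap these types in an ale:: namespace).

// ale_demo.cpp (hypothetical) -- minimal ALE interaction loop with a random policy.
#include <ale_interface.hpp>
#include <cstdlib>
#include <iostream>

int main() {
    ALEInterface ale;
    ale.setInt("random_seed", 123);
    ale.setBool("display_screen", false);
    ale.loadROM("roms/pong.bin");              // path to an Atari ROM (placeholder)

    ActionVect actions = ale.getMinimalActionSet();
    float total_reward = 0.0f;

    while (!ale.game_over()) {
        Action a = actions[std::rand() % actions.size()];  // random policy, for illustration only
        total_reward += ale.act(a);                         // act() returns the per-step reward
    }
    std::cout << "Episode reward: " << total_reward << std::endl;
    ale.reset_game();
    return 0;
}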

Currently Supported Models

The deep reinforcement learning community has made several independent improvements to the DQN algorithm. This repository presents the latest extensions to the DQN algorithm (a minimal Double DQN sketch follows the list):

  1. Playing Atari with Deep Reinforcement Learning [arxiv]
  2. Deep Reinforcement Learning with Double Q-learning [arxiv]
  3. Dueling Network Architectures for Deep Reinforcement Learning [arxiv]
  4. Prioritized Experience Replay [arxiv]
  5. Noisy Networks for Exploration [arxiv]
  6. A Distributional Perspective on Reinforcement Learning [arxiv]
  7. Rainbow: Combining Improvements in Deep Reinforcement Learning [arxiv]
  8. Distributional Reinforcement Learning with Quantile Regression [arxiv]
  9. Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation [arxiv]
  10. Neural Episodic Control [arxiv]
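
As a flavour of how one of these extensions maps onto the PyTorch C++ frontend, below is a minimal sketch of the Double DQN target (item 2 above). The free function, tensor shapes, and variable names are illustrative assumptions, not this repository's actual code.

// Double DQN target (sketch, not this repo's exact implementation).
// next_q_online: Q(s', .) from the online network,  shape [B, num_actions]
// next_q_target: Q(s', .) from the target network,  shape [B, num_actions]
// rewards, dones: shape [B]; dones is 1.0 for terminal transitions.
#include <torch/torch.h>

torch::Tensor double_dqn_target(const torch::Tensor& next_q_online,
                                const torch::Tensor& next_q_target,
                                const torch::Tensor& rewards,
                                const torch::Tensor& dones,
                                double gamma) {
    torch::NoGradGuard no_grad;                                    // targets carry no gradient
    // The online network *selects* the greedy next action...
    torch::Tensor next_actions = std::get<1>(next_q_online.max(/*dim=*/1));
    // ...but the target network *evaluates* it; this decoupling is what
    // distinguishes Double DQN from vanilla DQN and reduces overestimation.
    torch::Tensor next_q = next_q_target.gather(1, next_actions.unsqueeze(1)).squeeze(1);
    return rewards + gamma * (1.0 - dones) * next_q;
}

A training loop would regress Q(s, a) from the online network toward this target with an MSE or smooth L1 loss.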

Results for Pong using Double DQN

Environments (All Atari Environments)

  1. Breakout
  2. Pong
  3. Montezuma's Revenge (Current Research)
  4. Pitfall
  5. Gravitar
  6. CarRacing

Installing the dependencies

Arcade Learning Environment

Install the main dependencies:

sudo apt-get install libsdl1.2-dev libsdl-gfx1.2-dev libsdl-image1.2-dev cmake

Compilation:

$ mkdir build && cd build
$ cmake -DUSE_SDL=ON -DUSE_RLGLUE=OFF -DBUILD_EXAMPLES=ON ..
$ make -j 4

To install python module:

$ pip install .
or
$ pip install --user .

Getting the ALE to work on Visual Studio requires a bit of extra wrangling. You may wish to use IslandMan93's Visual Studio port of the ALE.

To ask questions and discuss, please join the ALE-users group.

Libtorch

Building

CMake is used for the build system. Most dependencies are included as submodules (run git submodule update --init --recursive to get them). Libtorch has to be installed separately.

cd Reinforcement_CPP
cd build
cmake ..
make -j4

Before running, make sure libtorch/lib is on your dynamic library search path (add it to PATH on Windows, or LD_LIBRARY_PATH on Linux) so the shared libraries can be found at runtime.
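
As a quick sanity check that libtorch is installed and discoverable, a small stand-alone program such as the following (a hypothetical smoke test, not part of this repo) can be built against the same libtorch:

// smoke_test.cpp (hypothetical) -- confirms that libtorch links and runs.
#include <torch/torch.h>
#include <iostream>

int main() {
    torch::Tensor t = torch::rand({2, 3});                  // allocate a small random tensor
    std::cout << t << std::endl;
    std::cout << "CUDA available: " << std::boolalpha
              << torch::cuda::is_available() << std::endl;  // reports whether a GPU is visible
    return 0;
}

Compile and link it with the same Torch directory / CMAKE_PREFIX_PATH the project uses; if it prints a tensor, libtorch is wired up correctly.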

Changes to cmake file

The CMake file requires some changes for things to run smoothly.

  1. After building the ALE, link against the generated libale.so.
  2. Set the Torch directory after installing libtorch. Refer to the current CMakeLists.txt and make the relevant changes.

Future Plans

Plans to support:

  1. Benchmarking runtime differences between the C++ and Python implementations.
  2. Python bindings for the Trainer module.
  3. More models and methods.
  4. Support for MuJoCo environments.

Stay tuned!
