
rlberry-py / rlberry

License: MIT
An easy-to-use reinforcement learning library for research and education.

Programming Languages

  • Python
  • Shell

Projects that are alternatives of or similar to rlberry

vsrl-framework
The Verifiably Safe Reinforcement Learning Framework
Stars: ✭ 42 (-66.13%)
Mutual labels:  reinforcement-learning-algorithms, reinforcement-learning-environments
multi car racing
An OpenAI Gym environment for multi-agent car racing based on Gym's original car racing environment.
Stars: ✭ 58 (-53.23%)
Mutual labels:  reinforcement-learning-environments
Reinforcement Learning
Deep Reinforcement Learning Algorithms implemented with Tensorflow 2.3
Stars: ✭ 61 (-50.81%)
Mutual labels:  reinforcement-learning-algorithms
Neural-Fictitous-Self-Play
Scalable Implementation of Neural Fictitous Self-Play
Stars: ✭ 52 (-58.06%)
Mutual labels:  reinforcement-learning-algorithms
Upside-Down-Reinforcement-Learning
Upside-Down Reinforcement Learning (⅂ꓤ) implementation in PyTorch. Based on the paper published by Jürgen Schmidhuber.
Stars: ✭ 64 (-48.39%)
Mutual labels:  reinforcement-learning-algorithms
Reinforcement-Learning-CheatSheet
Cheatsheet of Reinforcement Learning (Based on Sutton-Barto Book - 2nd Edition)
Stars: ✭ 22 (-82.26%)
Mutual labels:  reinforcement-learning-algorithms
TD3-BipedalWalkerHardcore-v2
Solve BipedalWalkerHardcore-v2 with TD3
Stars: ✭ 41 (-66.94%)
Mutual labels:  reinforcement-learning-algorithms
yarll
Combining deep learning and reinforcement learning.
Stars: ✭ 84 (-32.26%)
Mutual labels:  reinforcement-learning-algorithms
marltoolbox
A toolbox with the goal of speeding up research on bargaining in MARL (cooperation problems in MARL).
Stars: ✭ 25 (-79.84%)
Mutual labels:  reinforcement-learning-algorithms
reinforced-race
A model car learns driving along a track using reinforcement learning
Stars: ✭ 37 (-70.16%)
Mutual labels:  reinforcement-learning-algorithms
Recurrent-Deep-Q-Learning
Solving POMDP using Recurrent networks
Stars: ✭ 52 (-58.06%)
Mutual labels:  reinforcement-learning-algorithms
Deep-rl-mxnet
Mxnet implementation of Deep Reinforcement Learning papers, such as DQN, PG, DDPG, PPO
Stars: ✭ 26 (-79.03%)
Mutual labels:  reinforcement-learning-algorithms
reinforcement-learning-resources
A curated list of awesome reinforcement courses, video lectures, books, library and many more.
Stars: ✭ 38 (-69.35%)
Mutual labels:  reinforcement-learning-algorithms
RL-code-resources
A collection of Reinforcement Learning GitHub code resources divided by frameworks and environments
Stars: ✭ 51 (-58.87%)
Mutual labels:  reinforcement-learning-algorithms
agentmodels.org
Modeling agents with probabilistic programs
Stars: ✭ 66 (-46.77%)
Mutual labels:  reinforcement-learning-algorithms
course-content-dl
NMA deep learning course
Stars: ✭ 537 (+333.06%)
Mutual labels:  reinforcement-learning-algorithms
POMDP
Implementing an RL algorithm based upon a partially observable Markov decision process.
Stars: ✭ 31 (-75%)
Mutual labels:  reinforcement-learning-algorithms
king-pong
Deep Reinforcement Learning Pong Agent, King Pong, he's the best
Stars: ✭ 23 (-81.45%)
Mutual labels:  reinforcement-learning-algorithms
rl-algorithms
Reinforcement learning algorithms
Stars: ✭ 40 (-67.74%)
Mutual labels:  reinforcement-learning-algorithms
Deep Reinforcement Learning
Repo for the Deep Reinforcement Learning Nanodegree program
Stars: ✭ 4,012 (+3135.48%)
Mutual labels:  reinforcement-learning-algorithms

A Reinforcement Learning Library for Research and Education


Try it on Google Colab!


What is rlberry?

Writing reinforcement learning algorithms is fun! But after the fun, there is a lot of tedious work left: running agents in parallel, averaging and plotting results, optimizing hyperparameters, comparing to baselines, creating tricky environments, and so on.

rlberry is a Python library that makes your life easier by doing all these things with a few lines of code, so that you can spend most of your time developing agents. rlberry also provides implementations of several RL agents, benchmark environments and many other useful tools.
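rlberry's own API takes care of these steps for you (see the documentation for the actual interface). As a rough illustration of the "run several seeds in parallel and average the results" pattern it automates, here is a stdlib-only sketch with a toy stand-in for a training run; nothing below is rlberry code, and all names are hypothetical:

```python
import concurrent.futures
import random
import statistics

def train_toy_agent(seed, n_episodes=100):
    """Stand-in for a real training run: returns a noisy final score in [0, 1]."""
    rng = random.Random(seed)
    return sum(rng.uniform(0.0, 1.0) for _ in range(n_episodes)) / n_episodes

# Run several seeds concurrently (a real, CPU-bound training loop would use
# processes rather than threads) and average the resulting scores.
seeds = range(8)
with concurrent.futures.ThreadPoolExecutor() as pool:
    scores = list(pool.map(train_toy_agent, seeds))

mean_score = statistics.mean(scores)
print(f"mean score over {len(scores)} seeds: {mean_score:.3f}")
```

Averaging over independent seeds like this is what makes the comparisons between agents statistically meaningful, which is exactly the bookkeeping rlberry handles in a few lines.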

Installation

Install the latest stable release with:

pip install rlberry

The documentation includes further installation instructions, in particular for users working with JAX.

Getting started

In our documentation, you will find quick starts to the library and a user guide with a few tutorials on using rlberry.

Also, we provide a handful of notebooks on Google Colab as examples to show you how to use rlberry:

Content | Description | Link
Introduction to rlberry | How to create an agent, optimize its hyperparameters, and compare it to a baseline. | Open In Colab
Evaluating and optimizing agents | Train a REINFORCE agent and optimize its hyperparameters. | Open In Colab
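The second notebook trains a REINFORCE agent. Independently of rlberry's API, the core quantity REINFORCE weighs its policy-gradient updates by is the discounted return of each step. A minimal, pure-Python sketch of that computation (function name and values are illustrative):

```python
def discounted_returns(rewards, gamma=0.99):
    """Compute G_t = r_t + gamma * G_{t+1} for every step of one episode."""
    returns = []
    g = 0.0
    # Accumulate backwards from the last step, then restore episode order.
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    returns.reverse()
    return returns

# One episode rewarded only at the first and last steps:
print(discounted_returns([1.0, 0.0, 0.0, 1.0], gamma=0.5))
# → [1.125, 0.25, 0.5, 1.0]
```

Note how the first step's return includes the final reward discounted by gamma cubed, which is what propagates credit for delayed rewards back to early actions.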

Changelog

See the changelog for a history of the changes made to rlberry.

Citing rlberry

If you use rlberry in scientific publications, we would appreciate citations using the following BibTeX entry:

@misc{rlberry,
    author = {Domingues, Omar Darwiche and Flet-Berliac, Yannis and Leurent, Edouard and M{\'e}nard, Pierre and Shang, Xuedong and Valko, Michal},
    doi = {10.5281/zenodo.5544540},
    month = {10},
    title = {{rlberry - A Reinforcement Learning Library for Research and Education}},
    url = {https://github.com/rlberry-py/rlberry},
    year = {2021}
}

Development notes

The modules listed below are experimental: they are not thoroughly tested and their interfaces are likely to change.

  • rlberry.network: Allows communication between a server and client via sockets, and can be used to run agents remotely.
  • rlberry.agents.experimental: Experimental agents that are not thoroughly tested.
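The client/server idea behind rlberry.network can be illustrated with a generic stdlib sketch: a server accepts a JSON request describing a job and replies with a result. This is not rlberry's actual protocol; the message format and field names below are made up:

```python
import json
import socket
import threading

def serve_once(server_sock):
    """Accept one connection, read a JSON request, send back a JSON reply."""
    conn, _ = server_sock.accept()
    with conn:
        request = json.loads(conn.recv(4096).decode())
        # Pretend to run the requested agent remotely and report completion.
        reply = {"agent": request["agent"], "episodes": request["episodes"], "status": "done"}
        conn.sendall(json.dumps(reply).encode())

# Server side: bind to an ephemeral port and handle one request in a thread.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
thread = threading.Thread(target=serve_once, args=(server,))
thread.start()

# Client side: send a request and wait for the reply.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(json.dumps({"agent": "REINFORCE", "episodes": 100}).encode())
response = json.loads(client.recv(4096).decode())
client.close()
thread.join()
server.close()
print(response["status"])  # → done
```

A real remote-execution module would add framing for large messages, error handling, and authentication; the sketch only shows the request/reply shape.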

About us

This project was initiated and is actively maintained by the Inria SCOOL team. More information here.

Contributing

Want to contribute to rlberry? Please check our contribution guidelines. If you want to add any new agents or environments, do not hesitate to open an issue!
