
FranckNdame / Drlkit

License: MIT
A high-level Python deep reinforcement learning library. Great for beginners, prototyping, and quickly comparing algorithms.

Programming Languages

Python

Projects that are alternatives to or similar to Drlkit

Gdrl
Grokking Deep Reinforcement Learning
Stars: ✭ 304 (+948.28%)
Mutual labels:  gpu, reinforcement-learning, deep-reinforcement-learning, numpy
Gym Gazebo2
gym-gazebo2 is a toolkit for developing and comparing reinforcement learning algorithms using ROS 2 and Gazebo
Stars: ✭ 257 (+786.21%)
Mutual labels:  gym, reinforcement-learning, deep-reinforcement-learning
Naf Tensorflow
"Continuous Deep Q-Learning with Model-based Acceleration" in TensorFlow
Stars: ✭ 192 (+562.07%)
Mutual labels:  gym, reinforcement-learning, deep-reinforcement-learning
Curl
CURL: Contrastive Unsupervised Representation Learning for Sample-Efficient Reinforcement Learning
Stars: ✭ 346 (+1093.1%)
Mutual labels:  gpu, reinforcement-learning, deep-reinforcement-learning
Rlenv.directory
Explore and find reinforcement learning environments in a list of 150+ open source environments.
Stars: ✭ 79 (+172.41%)
Mutual labels:  gym, reinforcement-learning, deep-reinforcement-learning
Pytorch sac ae
PyTorch implementation of Soft Actor-Critic + Autoencoder(SAC+AE)
Stars: ✭ 94 (+224.14%)
Mutual labels:  gym, reinforcement-learning, deep-reinforcement-learning
Drq
DrQ: Data regularized Q
Stars: ✭ 268 (+824.14%)
Mutual labels:  gym, reinforcement-learning, deep-reinforcement-learning
Pytorch
Tensors and Dynamic neural networks in Python with strong GPU acceleration
Stars: ✭ 52,811 (+182006.9%)
Mutual labels:  gpu, tensor, numpy
Pytorch Rl
This repository contains model-free deep reinforcement learning algorithms implemented in Pytorch
Stars: ✭ 394 (+1258.62%)
Mutual labels:  gym, reinforcement-learning, deep-reinforcement-learning
Paac.pytorch
Pytorch implementation of the PAAC algorithm presented in Efficient Parallel Methods for Deep Reinforcement Learning https://arxiv.org/abs/1705.04862
Stars: ✭ 22 (-24.14%)
Mutual labels:  gym, reinforcement-learning, deep-reinforcement-learning
Rl Book
Source code for the book "Reinforcement Learning: Theory and Python Implementation"
Stars: ✭ 464 (+1500%)
Mutual labels:  gym, reinforcement-learning, deep-reinforcement-learning
Trax
Trax — Deep Learning with Clear Code and Speed
Stars: ✭ 6,666 (+22886.21%)
Mutual labels:  reinforcement-learning, deep-reinforcement-learning, numpy
Muzero General
MuZero
Stars: ✭ 1,187 (+3993.1%)
Mutual labels:  gym, reinforcement-learning, deep-reinforcement-learning
Pytorch sac
PyTorch implementation of Soft Actor-Critic (SAC)
Stars: ✭ 174 (+500%)
Mutual labels:  gym, reinforcement-learning, deep-reinforcement-learning
Deterministic Gail Pytorch
PyTorch implementation of Deterministic Generative Adversarial Imitation Learning (GAIL) for Off Policy learning
Stars: ✭ 44 (+51.72%)
Mutual labels:  gym, reinforcement-learning, deep-reinforcement-learning
Rl algos
Reinforcement Learning Algorithms
Stars: ✭ 14 (-51.72%)
Mutual labels:  gym, reinforcement-learning, deep-reinforcement-learning
Megengine
MegEngine is a fast, scalable, easy-to-use deep learning framework with automatic differentiation support
Stars: ✭ 4,081 (+13972.41%)
Mutual labels:  gpu, numpy, tensor
Cupy
NumPy & SciPy for GPU
Stars: ✭ 5,625 (+19296.55%)
Mutual labels:  gpu, tensor, numpy
Deepdrive
Deepdrive is a simulator that allows anyone with a PC to push the state-of-the-art in self-driving
Stars: ✭ 628 (+2065.52%)
Mutual labels:  gym, reinforcement-learning, deep-reinforcement-learning
Softlearning
Softlearning is a reinforcement learning framework for training maximum entropy policies in continuous domains. Includes the official implementation of the Soft Actor-Critic algorithm.
Stars: ✭ 713 (+2358.62%)
Mutual labels:  reinforcement-learning, deep-reinforcement-learning



A high-level Python deep reinforcement learning library, great for beginners, prototyping, and quickly comparing algorithms.


[Image: supported environments]

Installation 📦

Install drlkit via pip

pip install drlkit
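To confirm the installation, the package should import cleanly. This is a minimal sanity check, assuming the pip distribution exposes a top-level drlkit module; note that the usage examples below import agents, utils, and environments directly, so the exact package layout may differ.

# Sanity check: should run without errors after installation
import drlkit  # assumption: the pip package installs a top-level "drlkit" module
print("drlkit imported successfully")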

Usage 📖

1. Import the modules

import numpy as np
from agents.TorchAgent import TorchAgent
from utils.plot import Plot
from environments.wrapper import EnvironmentWrapper

2. Initialize the environment and the agent

ENV_NAME = "LunarLander-v2"
env = EnvironmentWrapper(ENV_NAME)
agent = TorchAgent(state_size=8, action_size=env.env.action_space.n, seed=0)
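If you prefer not to hardcode state_size, both dimensions can be read off the underlying Gym spaces; the wrapper already exposes the raw Gym env as env.env, as the action_size argument above shows.

# Derive sizes from the Gym spaces instead of hardcoding them.
# LunarLander-v2 has an 8-dimensional observation and 4 discrete actions.
state_size = env.env.observation_space.shape[0]  # 8 for LunarLander-v2
action_size = env.env.action_space.n             # 4 for LunarLander-v2
agent = TorchAgent(state_size=state_size, action_size=action_size, seed=0)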

3. Train the agent

# Train the agent
env.fit(agent, n_episodes=1000)
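For intuition, this is roughly what a fit loop like this does with a Gym environment. It is a sketch against plain Gym, not Drlkit's actual internals; the agent's act and step methods here are illustrative assumptions.

import gym

raw_env = gym.make("LunarLander-v2")
scores = []
for episode in range(1000):
    state = raw_env.reset()          # classic (pre-0.26) Gym API: reset returns the state
    score, done = 0.0, False
    while not done:
        action = agent.act(state)    # assumed: agent selects an action for this state
        next_state, reward, done, _ = raw_env.step(action)
        agent.step(state, action, reward, next_state, done)  # assumed: store transition and learn
        state = next_state
        score += reward
    scores.append(score)             # track per-episode return, as env.scores does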

4. Plot the results (optional)

# See the results
Plot.basic_plot(np.arange(len(env.scores)), env.scores, xlabel='Episode #', ylabel='Score')
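Plot.basic_plot appears to be a thin plotting helper; if you want more control, plain matplotlib gives the same view plus a smoothed trend (the window size here is an arbitrary choice).

import numpy as np
import matplotlib.pyplot as plt

scores = np.asarray(env.scores)
window = 100  # arbitrary smoothing window
smoothed = np.convolve(scores, np.ones(window) / window, mode="valid")

plt.plot(scores, alpha=0.3, label="score")
plt.plot(np.arange(window - 1, len(scores)), smoothed, label="100-episode mean")
plt.xlabel("Episode #")
plt.ylabel("Score")
plt.legend()
plt.show()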

5. Play 🎮

# Play trained agent
env.play(num_episodes=10, trained=True)

It is as simple as that! 🤯
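Putting the five steps together, a minimal end-to-end script (using the same hardcoded sizes as above) looks like this:

import numpy as np
from agents.TorchAgent import TorchAgent
from utils.plot import Plot
from environments.wrapper import EnvironmentWrapper

ENV_NAME = "LunarLander-v2"
env = EnvironmentWrapper(ENV_NAME)
agent = TorchAgent(state_size=8, action_size=env.env.action_space.n, seed=0)

env.fit(agent, n_episodes=1000)                      # train
Plot.basic_plot(np.arange(len(env.scores)), env.scores,
                xlabel='Episode #', ylabel='Score')  # visualize
env.play(num_episodes=10, trained=True)              # watch the trained agent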



Loading a model 🗃

ENV_NAME = "LunarLander-v2"
env = EnvironmentWrapper(ENV_NAME)
agent = TorchAgent(state_size=8, action_size=env.env.action_space.n, seed=0)

env.load_model(agent, "./models/LunarLander-v2-4477.pth")
env.play(num_episodes=10)
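The checkpoint name suggests a <env-name>-<episodes>.pth convention. To inspect a checkpoint before loading it, standard PyTorch tooling works; this sketch assumes the file stores a plain state_dict, which is the common pattern.

import torch

# map_location="cpu" lets you inspect the file without a GPU
checkpoint = torch.load("./models/LunarLander-v2-4477.pth", map_location="cpu")
for name, tensor in checkpoint.items():  # assumes a state_dict of tensors
    print(name, tuple(tensor.shape))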

Play an untrained agent

env.play(num_episodes=10, trained=False)

[Image: untrained agent gameplay]

Play a trained agent (trained for 4,477 episodes, ~3 hours)

env.play(num_episodes=10, trained=True)

[Image: trained agent gameplay]

Tested Environments ⛳️

  • LunarLander-v2
  • CartPole-v1
  • MountainCar-v0
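Switching between these environments only changes the constructor arguments; for example, CartPole-v1 has a 4-dimensional observation and 2 discrete actions, so the sizes can again be derived from the spaces rather than hardcoded.

env = EnvironmentWrapper("CartPole-v1")
agent = TorchAgent(
    state_size=env.env.observation_space.shape[0],  # 4 for CartPole-v1
    action_size=env.env.action_space.n,             # 2 for CartPole-v1
    seed=0,
)
env.fit(agent, n_episodes=500)  # episode budget is an arbitrary choice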

Implemented Algorithms 📈

Done = ✔️ || In Progress = ➖ || Not done yet = ❌

Algorithm   Status   Tested
DQN         ✔️ (1)   ✔️
DDPG        ➖        ❌
PPO1        ❌        ❌
PPO2        ❌        ❌
A2C         ❌        ❌
SAC         ❌        ❌
TD3         ❌        ❌

👀 Next steps

  • [x] Implement DQN
  • [x] Test DQN
  • [ ] Finish DDPG
  • [ ] Implement PPO1
  • [ ] Improve documentation

❤️ Contributing

This is an open source project, so feel free to contribute. How?

  • Open an issue.
  • Send feedback via email.
  • Propose your own fixes or suggestions and open a pull request with the changes.

✍🏾 Author

  • Franck Ndame

🚨 License

MIT License

Copyright (c) 2019 Franck Ndame

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.