mpatacchiola / Dissecting Reinforcement Learning

Python code, PDFs and resources for the series of posts on Reinforcement Learning which I published on my personal blog

This repository contains the code and PDFs of a series of blog posts called "Dissecting Reinforcement Learning", which I published on my blog mpatacchiola.io/blog. It also collects links to resources that may be useful to a reinforcement learning practitioner. If you have good references that may be of interest, please send me a pull request and I will integrate them into the README.

The source code is contained in src, with subfolders named after the post number. The pdf folder contains the A3 document of each post for offline reading. The images folder contains the raw SVG files used in each post.

Installation

The source code does not require any particular installation procedure. The code can be used on Linux, Windows, OS X, and embedded devices such as the Raspberry Pi, BeagleBone, and Intel Edison. The only requirement is NumPy, which is often pre-installed on Linux and can be easily installed on Windows and OS X through Anaconda or Miniconda. Some examples also require Matplotlib for data visualization and animations.

Posts Content

  1. [Post one] [code] [pdf] - Markov chains. Markov Decision Process. Bellman Equation. Value and Policy iteration algorithms.

  2. [Post two] [code] [pdf] - Monte Carlo methods for prediction and control. Generalised Policy Iteration. Action Values and Q-function.

  3. [Post three] [code] [pdf] - Temporal Differencing Learning, Animal Learning, TD(0), TD(λ) and Eligibility Traces, SARSA, Q-Learning (a minimal Q-Learning sketch is shown right after this list).

  4. [Post four] [code] [pdf] - Neurobiology behind Actor-Critic methods, computational Actor-Critic methods, Actor-only and Critic-only methods.

  5. [Post five] [code] [pdf] - Evolutionary Algorithms introduction, Genetic Algorithm in Reinforcement Learning, Genetic Algorithms for policy selection.

  6. [Post six] [code] [pdf] - Reinforcement learning applications, Multi-Armed Bandit, Mountain Car, Inverted Pendulum, Drone landing, Hard problems.

  7. [Post seven] [code] [pdf] - Function approximation, Intuition, Linear approximator, Applications, High-order approximators.

  8. [Post eight] [code] [pdf] - Non-linear function approximation, Perceptron, Multi Layer Perceptron, Applications, Policy Gradient.
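
As a taste of the algorithms covered in the series, the following is a minimal sketch of the tabular Q-Learning update discussed in post three. The grid-world size, learning rate, and discount factor are illustrative choices, not values taken from the posts:

import numpy as np

#Illustrative hyper-parameters (not taken from the posts)
alpha = 0.1 #learning rate
gamma = 0.9 #discount factor
tot_states = 12 #e.g. a 3x4 grid world
tot_actions = 4 #forward, right, backward, left

#Table of action values, one row per state
q_matrix = np.zeros((tot_states, tot_actions))

def q_learning_update(state, action, reward, new_state):
    #Q(s,a) <- Q(s,a) + alpha * [r + gamma * max_a' Q(s',a') - Q(s,a)]
    td_target = reward + gamma * np.max(q_matrix[new_state, :])
    td_error = td_target - q_matrix[state, action]
    q_matrix[state, action] = q_matrix[state, action] + alpha * td_error

Calling q_learning_update() after every step of an environment is enough to implement the full algorithm, provided the agent follows an exploratory policy such as epsilon-greedy.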

Environments

The folder called environments contains all the environments used in the series. Unlike other libraries (such as OpenAI Gym), the environments are stand-alone Python files that do not require any installation procedure. You can use an environment by copying the file into the folder of your project and then importing it from a Python script: from environmentname import EnvironmentName. The environments follow the same convention adopted by OpenAI Gym:

from random import randint #to generate random integers
from inverted_pendulum import InvertedPendulum #importing the environment

#Generating the environment
env = InvertedPendulum(pole_mass=2.0, cart_mass=8.0, pole_lenght=0.5, delta_t=0.1)
#Reset the environment before the episode starts
observation = env.reset(exploring_starts=True) 

for step in range(100):
    action = randint(0, 2) #generate a random integer/action
    observation, reward, done = env.step(action) #one step in the environment
    if done: break #exit if the episode is finished

#Saving the episode in a GIF
env.render(file_path='./inverted_pendulum.gif', mode='gif')

The snippet above generates an inverted pendulum environment. The pole is controlled through three actions (0=left, 1=noop, 2=right), here generated at random through randint(). The maximum number of steps allowed is 100, which with delta_t=0.1 corresponds to 10 seconds. The episode can end earlier if the pole falls, in which case step() returns done = True. Building on the same snippet, the sketch below accumulates the discounted return of a random episode; the discount factor gamma is an illustrative choice:
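
gamma = 0.999 #illustrative discount factor
return_value = 0.0
observation = env.reset(exploring_starts=True)

for step in range(100):
    action = randint(0, 2) #generate a random integer/action
    observation, reward, done = env.step(action) #one step in the environment
    return_value += (gamma**step) * reward #accumulate the discounted return
    if done: break

print("Discounted return: " + str(return_value))

Examples for each environment are available here. The following is a description of the available environments, each with a direct link to the Python code: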

  • grid world: a simple grid world that includes obstacles, walls, and positive and negative rewards. An agent can move in the environment using four actions (0=forward, 1=right, 2=backward, 3=left). It is possible to set up the dimensions of the world, the location of the obstacles, and the movement noise [code]

  • multi-armed bandit: implementation of a multi-armed bandit environment that can be initialized with a specific number of arms. Rewards are binary (1 or 0) and are given for each arm with a pre-defined probability. This environment does not have a reset() method because, by definition, an episode consists of a single step (see the sketch after this list) [code]

  • inverted pendulum: an implementation of the classic problem widely used in control theory. The pendulum can be initialized with a specific pole mass, cart mass, and pole length. There are three possible actions (0=left, 1=noop, 2=right). A method called render() allows saving a GIF or an MP4 of the last episode using Matplotlib [code]

  • mountain car: implementation of the classic problem, a widely used benchmark. The environment is initialized with a specific mass for the car, friction for the soil, and delta time. There are only three actions available (0=left, 1=noop, 2=right). Rendering is possible and allows saving a GIF or a video using Matplotlib animations [code]

  • drone landing: a drone has to land on a pad at the centre of a cubic room, moving in six possible directions (0=forward, 1=left, 2=backward, 3=right, 4=up, 5=down). The dimensions of the world can be declared during initialization. A positive reward of +1 is obtained if the drone touches the pad, whereas a negative reward of -1 is given in case of a wrong landing. Rendering is allowed and the file is stored as a GIF or an MP4 [code]
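
As an example, here is a minimal sketch of how the multi-armed bandit environment can be used. The constructor parameter and method names below are assumptions made for illustration; check the linked code for the exact signatures:

from random import randint #to pick a random arm
from multi_armed_bandit import MultiArmedBandit #assumed module and class names

#Assumed constructor: one reward probability per arm (three arms here)
my_bandit = MultiArmedBandit(reward_probability_list=[0.3, 0.5, 0.8])

#No reset() is needed: each episode is a single pull of one arm
for episode in range(10):
    action = randint(0, 2) #pick one of the three arms at random
    reward = my_bandit.step(action) #binary reward (1 or 0)
    print("Arm: " + str(action) + " Reward: " + str(reward))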

Resources

Software:

  • [Google DeepMind Lab] [github] - DeepMind Lab is a fully 3D game-like platform tailored for agent-based AI research.

  • [OpenAI Gym] [github] - A toolkit for developing and comparing reinforcement learning algorithms.

  • [OpenAI Universe] [github] - Measurement and training for artificial intelligence.

  • [RL toolkit] - Collection of utilities and demos developed by the RLAI group which may be useful for anyone trying to learn, teach or use reinforcement learning (by Richard Sutton).

  • [setosa blog] - A useful visual explanation of Markov chains.

  • [Tensorflow playground] - Try different MLP architectures and datasets on the browser.

Books and Articles:

  • Artificial intelligence: a modern approach (chapters 17 and 21). Russell, S. J., Norvig, P., Canny, J. F., Malik, J. M., & Edwards, D. D. (2003). Upper Saddle River: Prentice Hall. [web] [github]

  • Christopher Watkins' doctoral dissertation, which introduced Q-learning for the first time [pdf]

  • Evolutionary Algorithms for Reinforcement Learning. Moriarty, D. E., Schultz, A. C., & Grefenstette, J. J. (1999). [pdf]

  • Machine Learning (chapter 13) Mitchell T. (1997) [web]

  • Reinforcement learning: An introduction. Sutton, R. S., & Barto, A. G. (1998). Cambridge: MIT Press. [html]

  • Reinforcement learning: An introduction (second edition). Sutton, R. S., & Barto, A. G. (draft April 2018). [TODO]

  • Reinforcement Learning in a Nutshell. Heidrich-Meisner, V., Lauer, M., Igel, C., & Riedmiller, M. A. (2007) [pdf]

  • Statistical Reinforcement Learning: Modern Machine Learning Approaches, Sugiyama, M. (2015) [web]

License

The MIT License (MIT) Copyright (c) 2017 Massimiliano Patacchiola Website: http://mpatacchiola.github.io/blog

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
