
johan-gras / Muzero

A structured implementation of MuZero

Projects that are alternatives of or similar to Muzero

Machin
Reinforcement learning library (framework) designed for PyTorch, implements DQN, DDPG, A2C, PPO, SAC, MADDPG, A3C, APEX, IMPALA ...
Stars: ✭ 145 (-7.05%)
Mutual labels:  reinforcement-learning
Study Reinforcement Learning
Studying Reinforcement Learning Guide
Stars: ✭ 147 (-5.77%)
Mutual labels:  reinforcement-learning
Tradzqai
Trading environment for RL agents, backtesting and training.
Stars: ✭ 150 (-3.85%)
Mutual labels:  reinforcement-learning
Sumo Rl
A simple interface to instantiate Reinforcement Learning environments with SUMO for Traffic Signal Control. Compatible with Gym Env from OpenAI and MultiAgentEnv from RLlib.
Stars: ✭ 145 (-7.05%)
Mutual labels:  reinforcement-learning
Chess Alpha Zero
Chess reinforcement learning by AlphaGo Zero methods.
Stars: ✭ 1,868 (+1097.44%)
Mutual labels:  reinforcement-learning
Minimalrl
Implementations of basic RL algorithms with minimal lines of codes! (pytorch based)
Stars: ✭ 2,051 (+1214.74%)
Mutual labels:  reinforcement-learning
Data Science Question Answer
A repo for data science related questions and answers
Stars: ✭ 2,000 (+1182.05%)
Mutual labels:  reinforcement-learning
Agents
TF-Agents: A reliable, scalable and easy to use TensorFlow library for Contextual Bandits and Reinforcement Learning.
Stars: ✭ 2,135 (+1268.59%)
Mutual labels:  reinforcement-learning
Rainbow
A PyTorch implementation of Rainbow DQN agent
Stars: ✭ 147 (-5.77%)
Mutual labels:  reinforcement-learning
Tensorflow rlre
Reinforcement Learning for Relation Classification from Noisy Data(TensorFlow)
Stars: ✭ 150 (-3.85%)
Mutual labels:  reinforcement-learning
Tensor2tensor
Library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research.
Stars: ✭ 11,865 (+7505.77%)
Mutual labels:  reinforcement-learning
Show Adapt And Tell
Code for "Show, Adapt and Tell: Adversarial Training of Cross-domain Image Captioner" in ICCV 2017
Stars: ✭ 146 (-6.41%)
Mutual labels:  reinforcement-learning
Doudizhu
AI for Dou Dizhu (the Chinese card game "Fight the Landlord")
Stars: ✭ 149 (-4.49%)
Mutual labels:  reinforcement-learning
Articulations Robot Demo
Stars: ✭ 145 (-7.05%)
Mutual labels:  reinforcement-learning
Gym Fx
Forex trading simulator environment for OpenAI Gym, observations contain the order status, performance and timeseries loaded from a CSV file containing rates and indicators. Work In Progress
Stars: ✭ 151 (-3.21%)
Mutual labels:  reinforcement-learning
Allenact
An open source framework for research in Embodied-AI from AI2.
Stars: ✭ 144 (-7.69%)
Mutual labels:  reinforcement-learning
Open Quadruped
An open-source 3D-printed quadrupedal robot. Intuitive gait generation through 12-DOF Bezier Curves. Full 6-axis body pose manipulation. Custom 3DOF Leg Inverse Kinematics Model accounting for offsets.
Stars: ✭ 148 (-5.13%)
Mutual labels:  reinforcement-learning
Senseact
SenseAct: A computational framework for developing real-world robot learning tasks
Stars: ✭ 153 (-1.92%)
Mutual labels:  reinforcement-learning
Iccv2019 Learningtopaint
ICCV2019 - A painting AI that can reproduce paintings stroke by stroke using deep reinforcement learning.
Stars: ✭ 1,995 (+1178.85%)
Mutual labels:  reinforcement-learning
Energy Py
Reinforcement learning for energy systems
Stars: ✭ 148 (-5.13%)
Mutual labels:  reinforcement-learning


MuZero
======

This repository is a Python implementation of the MuZero algorithm. It is based upon the pre-print paper__ and the pseudocode__ describing the MuZero framework. Neural computations are implemented with TensorFlow.

You can easily train your own MuZero, more specifically for one-player, non-image-based environments (such as CartPole__). If you wish to train MuZero on other kinds of environments, this codebase can be used with slight modifications.

__ https://arxiv.org/abs/1911.08265
__ https://arxiv.org/src/1911.08265v1/anc/pseudocode.py
__ https://gym.openai.com/envs/CartPole-v1/

DISCLAIMER: this is early research code. In practice, this means:

  • Silent bugs may exist.
  • It may not work reliably on other environments or with other hyper-parameters.
  • The code quality and documentation are quite lacking, and much of the code might still feel "in-progress".
  • The training and testing pipeline is not very advanced.

Dependencies

We run this code using:

  • Conda 4.7.12
  • Python 3.7
  • TensorFlow 2.0.0
  • NumPy 1.17.3

Training your MuZero

This code must be run from the main function in muzero.py (don't forget to first configure your conda environment).

Training a CartPole-v1 bot

To train a model, please follow these steps:

  1. Create a new configuration of MuZero in config.py, or modify an existing one.

  2. Reference that configuration inside the main function of muzero.py.

  3. Run the main function: python muzero.py (see the sketch below).
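
For illustration, here is a minimal sketch of what the main of muzero.py could look like after these steps. The factory name make_cartpole_config and the muzero() entry point are assumptions made for this example and may not match the names actually defined in config.py and muzero.py.

.. code-block:: python

    # Hypothetical outline of muzero.py -- the names below are placeholders,
    # not necessarily the functions defined in this repository.
    from config import make_cartpole_config  # assumed: returns the CartPole-v1 configuration


    def muzero(config):
        """Self-play and network training loop (the real one lives in muzero.py)."""
        ...


    if __name__ == '__main__':
        # Steps 1-3: pick a configuration in config.py, reference it here,
        # then launch training with: python muzero.py
        muzero(make_cartpole_config())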

Training on another environment

To train on an environment other than CartPole-v1, please follow these additional steps:

  1. Create a class that extends AbstractGame; this class should implement the behavior of your environment. For instance, the CartPole class extends AbstractGame and works as a wrapper around gym CartPole-v1__. You can use the CartPole class as a template for any gym environment (a sketch of such a wrapper is shown after this list).

__ https://gym.openai.com/envs/CartPole-v1/

  2. This step is optional (only needed if you want to use a different kind of network architecture or value/reward transform). Create a class that extends BaseNetwork; this class should implement the different networks (representation, value, policy, reward and dynamics) and the value/reward transforms. For instance, the CartPoleNetwork class extends BaseNetwork and implements fully connected networks.

  3. This step is optional (only needed if you use a different value/reward transform). You should implement the corresponding inverse value/reward transform by modifying the loss_value and loss_reward functions inside training.py.
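
As a rough illustration of step 1, the sketch below wraps a gym environment behind an AbstractGame-like interface. The constructor signature, the import path, and the method names (legal_actions, step, terminal) are assumptions made for this example; mirror the existing CartPole class for the interface this codebase actually expects.

.. code-block:: python

    # Hypothetical gym wrapper -- import path and method names are assumptions,
    # not taken from this repository.
    import gym
    from game.game import AbstractGame  # assumed import path; adjust to the codebase layout


    class MyGymGame(AbstractGame):
        """Wraps a discrete-action, non-image gym environment for MuZero."""

        def __init__(self, discount: float):
            super().__init__(discount)
            self.env = gym.make('MountainCar-v0')  # any non-image gym environment
            self.observations = [self.env.reset()]
            self.done = False

        def legal_actions(self):
            # Discrete action space: every action is always legal here.
            return list(range(self.env.action_space.n))

        def step(self, action) -> float:
            # Apply one action, record the new observation, and return the reward.
            observation, reward, done, _ = self.env.step(action)
            self.observations.append(observation)
            self.done = done
            return reward

        def terminal(self) -> bool:
            return self.done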

Differences from the paper

This implementation differs from the original paper in the following ways:

  • We use fully connected layers instead of convolutional ones. This is due to the nature of our environment (CartPole-v1), which has no spatial correlation in the observation vector.
  • We don't scale the hidden state between 0 and 1 using min-max normalization. Instead, we use a tanh function that maps any value into the range [-1, 1].
  • We use a slightly simpler invertible transform for the value prediction, obtained by removing the linear term (see the sketch below).
  • During training, samples are drawn from a uniform distribution instead of using prioritized replay.
  • We also scale the loss of each head by 1/K (with K the number of unrolled steps), but we treat K as constant even though this is not always the case.
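
To make the value-transform difference concrete, here is a minimal sketch of the simplified transform and its inverse, assuming the paper's h(x) = sign(x)(sqrt(|x| + 1) - 1 + eps * x) as the starting point and dropping the linear eps * x term. This illustrates the idea only and is not the exact code found in training.py.

.. code-block:: python

    import numpy as np


    def scalar_transform(x):
        # Simplified invertible transform: the paper's h(x) with the linear term removed.
        return np.sign(x) * (np.sqrt(np.abs(x) + 1.0) - 1.0)


    def inverse_scalar_transform(y):
        # Exact inverse of the simplified transform: |x| = (|y| + 1)^2 - 1.
        return np.sign(y) * ((np.abs(y) + 1.0) ** 2 - 1.0)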