
vincentberaud / Minecraft Reinforcement Learning

Deep Recurrent Q-Learning vs Deep Q-Learning on a simple Partially Observable Markov Decision Process with Minecraft

Projects that are alternatives of or similar to Minecraft Reinforcement Learning

Deep Reinforcement Learning Algorithms
31 projects in the framework of Deep Reinforcement Learning algorithms: Q-learning, DQN, PPO, DDPG, TD3, SAC, A2C and others. Each project is provided with a detailed training log.
Stars: ✭ 167 (+406.06%)
Mutual labels:  jupyter-notebook, deep-reinforcement-learning, dqn
Introtodeeplearning
Lab Materials for MIT 6.S191: Introduction to Deep Learning
Stars: ✭ 4,955 (+14915.15%)
Mutual labels:  jupyter-notebook, deeplearning, deep-reinforcement-learning
Reinforcement Learning
Learn Deep Reinforcement Learning in 60 days! Lectures & Code in Python. Reinforcement Learning + Deep Learning
Stars: ✭ 3,329 (+9987.88%)
Mutual labels:  jupyter-notebook, deep-reinforcement-learning, dqn
Mit Deep Learning
Tutorials, assignments, and competitions for MIT Deep Learning related courses.
Stars: ✭ 8,912 (+26906.06%)
Mutual labels:  jupyter-notebook, deeplearning, deep-reinforcement-learning
Deep Reinforcement Learning
Repo for the Deep Reinforcement Learning Nanodegree program
Stars: ✭ 4,012 (+12057.58%)
Mutual labels:  jupyter-notebook, deep-reinforcement-learning, dqn
Deep Reinforcement Learning For Automated Stock Trading Ensemble Strategy Icaif 2020
Deep Reinforcement Learning for Automated Stock Trading: An Ensemble Strategy. ICAIF 2020. Please star.
Stars: ✭ 518 (+1469.7%)
Mutual labels:  jupyter-notebook, deep-reinforcement-learning
Deeplearning
Introductory deep learning tutorials and selected articles (Deep Learning Tutorial)
Stars: ✭ 6,783 (+20454.55%)
Mutual labels:  jupyter-notebook, deeplearning
Deeplearning Assignment
Deep learning notes
Stars: ✭ 619 (+1775.76%)
Mutual labels:  jupyter-notebook, deeplearning
Ai Series
📚 [.md & .ipynb] Series on Artificial Intelligence & Deep Learning, including mathematics fundamentals, Python practice, NLP applications, etc. 💫 Hands-on AI and deep learning: mathematical statistics | machine learning | deep learning | natural language processing | tooling with Scikit, TensorFlow & PyTorch | industry applications & course notes
Stars: ✭ 702 (+2027.27%)
Mutual labels:  jupyter-notebook, deeplearning
Additive Margin Softmax
Implementation of the paper "Additive Margin Softmax for Face Verification"
Stars: ✭ 464 (+1306.06%)
Mutual labels:  jupyter-notebook, deeplearning
Hands On Reinforcement Learning With Python
Master Reinforcement and Deep Reinforcement Learning using OpenAI Gym and TensorFlow
Stars: ✭ 640 (+1839.39%)
Mutual labels:  jupyter-notebook, deep-reinforcement-learning
Pytorch Rl
Deep Reinforcement Learning with pytorch & visdom
Stars: ✭ 745 (+2157.58%)
Mutual labels:  deep-reinforcement-learning, dqn
Monk v1
Monk is a low code Deep Learning tool and a unified wrapper for Computer Vision.
Stars: ✭ 480 (+1354.55%)
Mutual labels:  jupyter-notebook, deeplearning
Practical rl
A course in reinforcement learning in the wild
Stars: ✭ 4,741 (+14266.67%)
Mutual labels:  jupyter-notebook, deep-reinforcement-learning
Elegantrl
Lightweight, efficient and stable implementations of deep reinforcement learning algorithms using PyTorch.
Stars: ✭ 575 (+1642.42%)
Mutual labels:  deep-reinforcement-learning, dqn
Deeplearningmugenknock
An implementation cheat sheet for endlessly practicing deep learning
Stars: ✭ 684 (+1972.73%)
Mutual labels:  jupyter-notebook, deeplearning
Slm Lab
Modular Deep Reinforcement Learning framework in PyTorch. Companion library of the book "Foundations of Deep Reinforcement Learning".
Stars: ✭ 904 (+2639.39%)
Mutual labels:  deep-reinforcement-learning, dqn
Basic reinforcement learning
An introductory series to Reinforcement Learning (RL) with comprehensive step-by-step tutorials.
Stars: ✭ 826 (+2403.03%)
Mutual labels:  jupyter-notebook, deeplearning
Concise Ipython Notebooks For Deep Learning
Ipython Notebooks for solving problems like classification, segmentation, generation using latest Deep learning algorithms on different publicly available text and image data-sets.
Stars: ✭ 23 (-30.3%)
Mutual labels:  jupyter-notebook, deeplearning
Servenet
Service Classification based on Service Description
Stars: ✭ 21 (-36.36%)
Mutual labels:  jupyter-notebook, deeplearning

Minecraft-Reinforcement-Learning

We compare Deep Recurrent Q-Learning (DRQN) and Deep Q-Learning (DQN) on two simple missions in a Partially Observable Markov Decision Process (POMDP) based on a Minecraft environment. We use gym-minecraft, which exposes Project Malmö behind an OpenAI Gym-like API.
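
For readers who have never used gym-minecraft, the loop below is a minimal sketch of the Gym-style API it exposes. The environment id "MinecraftBasic-v0" and the init() options are assumptions about the library rather than the exact missions used in the notebook.

```python
import gym
import gym_minecraft  # importing the package registers the Minecraft environments with Gym

# Minimal interaction loop (sketch). The environment id and init() options
# are assumptions; the notebook uses its own, slightly modified missions.
env = gym.make("MinecraftBasic-v0")
env.init(start_minecraft=True)   # ask gym-minecraft to launch a Malmo/Minecraft instance
obs = env.reset()
done = False
while not done:
    action = env.action_space.sample()          # random policy, just to exercise the API
    obs, reward, done, info = env.step(action)
env.close()
```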

Our work is in the notebook DRQN_vs_DQN_minecraft.ipynb.

Our paper can be found here.

This work was realised in collaboration with:

Prerequisites

  • Python 3.6
  • Jupyter
  • Tensorflow

Installation

  • You need to install Malmö
  • You can then install gym-minecraft
  • The folder "envs" contains:
    • minecraft.py, the slightly modified version of the gym-minecraft main module that we used. Put it in your_pip_folder/site-packages/gym_minecraft-0.0.2-py3.6.egg/gym-minecraft/envs/
    • The missions we used. Put them in your_pip_folder/site-packages/gym_minecraft-0.0.2-py3.6.egg/gym-minecraft/assets/

Models

You can choose between 3 models:

  • Simple DQN: a Convolutional Neural Network fed with the current frame only (CNN architecture)
  • DQN: a Convolutional Neural Network fed with the last 4 stacked frames (StackedCNN architecture)
  • DRQN: a Convolutional Neural Network followed by an LSTM layer (DRQN architecture), sketched below
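
The DRQN can be pictured as a convolutional encoder applied to each frame of a short trace, followed by an LSTM whose last output feeds the Q-value head. The snippet below is a minimal tf.keras sketch of that idea; the layer sizes, the trace length and the frame shape are illustrative assumptions, not the exact values used in the notebook.

```python
import tensorflow as tf

def build_drqn(num_actions, trace_length=8, frame_shape=(84, 84, 3)):
    """Sketch of a DRQN: a per-frame CNN, an LSTM over the trace, a Q-value head.
    All sizes are illustrative assumptions, not the notebook's exact values."""
    frames = tf.keras.Input(shape=(trace_length,) + frame_shape)
    # The same convolutional encoder is applied to every frame of the trace.
    cnn = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 8, strides=4, activation="relu"),
        tf.keras.layers.Conv2D(64, 4, strides=2, activation="relu"),
        tf.keras.layers.Conv2D(64, 3, strides=1, activation="relu"),
        tf.keras.layers.Flatten(),
    ])
    features = tf.keras.layers.TimeDistributed(cnn)(frames)
    # The LSTM integrates information across frames, which is what lets the
    # agent cope with the partial observability of the missions.
    hidden = tf.keras.layers.LSTM(256)(features)
    q_values = tf.keras.layers.Dense(num_actions)(hidden)
    return tf.keras.Model(frames, q_values)
```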

DQN settings

  • Double Q-Learning (the target computation is sketched below)
  • ε-greedy exploration
  • Experience replay
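
As a reminder of what the Double Q-Learning target looks like, here is a small NumPy sketch: the online network selects the next action and the target network evaluates it. The function name, array shapes and the default γ = 0.99 are illustrative assumptions, not values taken from the notebook.

```python
import numpy as np

def double_dqn_targets(rewards, dones, next_q_online, next_q_target, gamma=0.99):
    """Double Q-Learning target (sketch).
    rewards, dones: shape (batch,); next_q_online, next_q_target: shape (batch, num_actions)."""
    best_actions = np.argmax(next_q_online, axis=1)                      # action selection with the online net
    next_values = next_q_target[np.arange(len(rewards)), best_actions]   # evaluation with the target net
    return rewards + gamma * (1.0 - dones) * next_values                 # no bootstrapping on terminal steps
```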

Note

Unlike DeepMind's DQN implementations for Atari games, Minecraft does not pause the game between two actions ordered by the agent. The agent and the network therefore have to be fast enough to act within the time window fixed by the environment.
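
Because the environment keeps running in real time, a practical sanity check is to time the forward pass that picks an action. The helper below is a hypothetical sketch; the 0.1 s budget is an assumed value, not one fixed by gym-minecraft.

```python
import time

def act_within_budget(choose_action, observation, budget_s=0.1):
    """Hypothetical helper: warn when action selection exceeds the assumed
    real-time budget the environment leaves between two steps."""
    start = time.perf_counter()
    action = choose_action(observation)
    elapsed = time.perf_counter() - start
    if elapsed > budget_s:
        print(f"Warning: action selection took {elapsed:.3f}s (> {budget_s:.3f}s budget)")
    return action
```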

Credits

We would like to thank Arthur Juliani for all his work and Medium articles, and Tambet Matiisen for his nice implementation of gym-minecraft.
