
JuliaReinforcementLearning / ReinforcementLearningAnIntroduction.jl

License: MIT
Julia code for the book Reinforcement Learning: An Introduction

Programming Languages

Julia

Projects that are alternatives to or similar to ReinforcementLearningAnIntroduction.jl

Aws Robomaker Sample Application Deepracer
Use AWS RoboMaker and demonstrate running a simulation which trains a reinforcement learning (RL) model to drive a car around a track
Stars: ✭ 105 (-10.26%)
Mutual labels:  reinforcement-learning
Handful Of Trials Pytorch
Unofficial Pytorch code for "Deep Reinforcement Learning in a Handful of Trials using Probabilistic Dynamics Models"
Stars: ✭ 112 (-4.27%)
Mutual labels:  reinforcement-learning
Coursera reinforcement learning
Coursera Reinforcement Learning Specialization by University of Alberta & Alberta Machine Intelligence Institute
Stars: ✭ 114 (-2.56%)
Mutual labels:  reinforcement-learning
Lang Emerge Parlai
Implementation of EMNLP 2017 Paper "Natural Language Does Not Emerge 'Naturally' in Multi-Agent Dialog" using PyTorch and ParlAI
Stars: ✭ 106 (-9.4%)
Mutual labels:  reinforcement-learning
Pairstrade Fyp 2019
We tested three approaches to pair trading: distance, cointegration, and reinforcement learning.
Stars: ✭ 109 (-6.84%)
Mutual labels:  reinforcement-learning
Studybook
Study E-Book (Computer Vision, Deep Learning, Machine Learning, Math, NLP, Python, Reinforcement Learning)
Stars: ✭ 1,457 (+1145.3%)
Mutual labels:  reinforcement-learning
Reinforcement Learning
🤖 Implementations of Reinforcement Learning algorithms.
Stars: ✭ 104 (-11.11%)
Mutual labels:  reinforcement-learning
C51 Ddqn Keras
C51-DDQN in Keras
Stars: ✭ 115 (-1.71%)
Mutual labels:  reinforcement-learning
Navbot
Using RGB Image as Visual Input for Mapless Robot Navigation
Stars: ✭ 111 (-5.13%)
Mutual labels:  reinforcement-learning
Deep Neuroevolution
Deep Neuroevolution
Stars: ✭ 1,526 (+1204.27%)
Mutual labels:  reinforcement-learning
Cartpole
OpenAI's cartpole env solver.
Stars: ✭ 107 (-8.55%)
Mutual labels:  reinforcement-learning
Numpy Ml
Machine learning, in numpy
Stars: ✭ 11,100 (+9387.18%)
Mutual labels:  reinforcement-learning
Startcraft pysc2 minigames
StarCraft II machine learning research with DeepMind's pysc2 Python library: mini-games and agents.
Stars: ✭ 113 (-3.42%)
Mutual labels:  reinforcement-learning
Easy Rl
A Chinese-language reinforcement learning tutorial; read it online at https://datawhalechina.github.io/easy-rl/
Stars: ✭ 3,004 (+2467.52%)
Mutual labels:  reinforcement-learning
Stable Baselines
Mirror of Stable-Baselines: a fork of OpenAI Baselines, implementations of reinforcement learning algorithms
Stars: ✭ 115 (-1.71%)
Mutual labels:  reinforcement-learning
Tensorflow2.0 Examples
🙄 Difficult algorithm, Simple code.
Stars: ✭ 1,397 (+1094.02%)
Mutual labels:  reinforcement-learning
Ctc Executioner
Master Thesis: Limit order placement with Reinforcement Learning
Stars: ✭ 112 (-4.27%)
Mutual labels:  reinforcement-learning
Reinforcement Learning An Introduction
Python Implementation of Reinforcement Learning: An Introduction
Stars: ✭ 11,042 (+9337.61%)
Mutual labels:  reinforcement-learning
Hierarchical Actor Critic Hac Pytorch
PyTorch implementation of Hierarchical Actor Critic (HAC) for OpenAI gym environments
Stars: ✭ 116 (-0.85%)
Mutual labels:  reinforcement-learning
Doom Net Pytorch
Reinforcement learning models in ViZDoom environment
Stars: ✭ 113 (-3.42%)
Mutual labels:  reinforcement-learning
[Book cover image: Reinforcement Learning: An Introduction, 2nd edition (RLIntro2Cover-min.jpg)]

"To think is to forget a difference, to generalize, to abstract."

Jorge Luis Borges, Funes the Memorious


This project provides the Julia code to generate the figures in the book Reinforcement Learning: An Introduction (2nd edition). One of our main goals is to help users understand the basic concepts of reinforcement learning from an engineer's perspective. Once you have grasped how the different components are organized, you're ready to explore the wide variety of modern deep reinforcement learning algorithms in ReinforcementLearningZoo.jl.

How to use?

Play Interactively

For experienced users with the latest stable Julia properly installed:

  1. Clone this project.
  2. Start the Julia REPL inside the folder you just cloned.
  3. Install and launch Pluto.jl by entering the following in the REPL (a scripted version of these commands appears after this list):
     ] add Pluto
     using Pluto
     Pluto.run()
  4. A Pluto page opens in your browser. Paste notebooks/Chapter01_Tic_Tac_Toe.jl (or any other file under the notebooks folder) into the input box and click the Open button.
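
If you prefer to open a notebook directly instead of pasting its path into the Pluto start page, the steps above can also be scripted from the REPL. This is a minimal sketch: the notebook keyword argument of Pluto.run is an assumption about recent Pluto versions and is not part of this project's own instructions; if your Pluto does not accept it, fall back to the plain Pluto.run() described above.

    # A scriptable version of the steps above, run from the Julia REPL
    # inside the cloned project folder.
    import Pkg
    Pkg.add("Pluto")          # equivalent to `] add Pluto` in the REPL

    using Pluto
    # The `notebook` keyword is assumed to be supported by your Pluto version;
    # otherwise call Pluto.run() and paste the notebook path into the start page.
    Pluto.run(notebook="notebooks/Chapter01_Tic_Tac_Toe.jl")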

Preview Notebooks
