
Kaixhin / Atari

Licence: MIT
Persistent advantage learning dueling double DQN for the Arcade Learning Environment

Programming Language: Lua

Projects that are alternatives of or similar to Atari

Dist-A3C
Distributed A3C
Stars: ✭ 37 (-85.82%)
Mutual labels:  deep-reinforcement-learning
pytorchrl
Deep Reinforcement Learning algorithms implemented in PyTorch
Stars: ✭ 47 (-81.99%)
Mutual labels:  deep-reinforcement-learning
Meta-SAC
Auto-tune the Entropy Temperature of Soft Actor-Critic via Metagradient - 7th ICML AutoML workshop 2020
Stars: ✭ 19 (-92.72%)
Mutual labels:  deep-reinforcement-learning
Autonomous-Drifting
Autonomous Drifting using Reinforcement Learning
Stars: ✭ 83 (-68.2%)
Mutual labels:  deep-reinforcement-learning
rl implementations
No description or website provided.
Stars: ✭ 40 (-84.67%)
Mutual labels:  deep-reinforcement-learning
Deep-Reinforcement-Learning-Notebooks
This repository contains a series of Google Colab notebooks which I created to help people dive into deep reinforcement learning. These notebooks contain both theory and implementations of different algorithms.
Stars: ✭ 15 (-94.25%)
Mutual labels:  deep-reinforcement-learning
reinforcement-learning-papers
My notes on reinforcement learning papers
Stars: ✭ 13 (-95.02%)
Mutual labels:  deep-reinforcement-learning
Planet
Deep Planning Network: Control from pixels by latent planning with learned dynamics
Stars: ✭ 257 (-1.53%)
Mutual labels:  deep-reinforcement-learning
Deep RL with pytorch
A PyTorch tutorial for DRL (Deep Reinforcement Learning)
Stars: ✭ 160 (-38.7%)
Mutual labels:  deep-reinforcement-learning
learn-hippo
Python (pytorch) code for Lu, Q., Hasson U. & Norman K. A. (2021). When to retrieve and encode episodic memories: a neural network model of hippocampal-cortical interaction.
Stars: ✭ 12 (-95.4%)
Mutual labels:  deep-reinforcement-learning
rlflow
A TensorFlow-based framework for learning about and experimenting with reinforcement learning algorithms
Stars: ✭ 20 (-92.34%)
Mutual labels:  deep-reinforcement-learning
l2r
Open-source reinforcement learning environment for autonomous racing.
Stars: ✭ 38 (-85.44%)
Mutual labels:  deep-reinforcement-learning
CommNet
an implementation of CommNet
Stars: ✭ 23 (-91.19%)
Mutual labels:  deep-reinforcement-learning
SRLF
Simple Reinforcement Learning Framework
Stars: ✭ 24 (-90.8%)
Mutual labels:  deep-reinforcement-learning
RCNN MDP
Code base for solving Markov Decision Processes and Reinforcement Learning problems using Recurrent Convolutional Neural Networks.
Stars: ✭ 65 (-75.1%)
Mutual labels:  deep-reinforcement-learning
DeepBeerInventory-RL
The code for the SRDQN algorithm to train an agent for the beer game problem
Stars: ✭ 27 (-89.66%)
Mutual labels:  deep-reinforcement-learning
reinforce-js
[INACTIVE] A collection of various machine learning solver. The library is an object-oriented approach (baked with Typescript) and tries to deliver simplified interfaces that make using the algorithms pretty simple.
Stars: ✭ 20 (-92.34%)
Mutual labels:  deep-reinforcement-learning
Gym Gazebo2
gym-gazebo2 is a toolkit for developing and comparing reinforcement learning algorithms using ROS 2 and Gazebo
Stars: ✭ 257 (-1.53%)
Mutual labels:  deep-reinforcement-learning
pysc2-rl-agents
StarCraft II / PySC2 Deep Reinforcement Learning Agents (A2C)
Stars: ✭ 124 (-52.49%)
Mutual labels:  deep-reinforcement-learning
SARNet
Code repository for SARNet: Learning Multi-Agent Communication through Structured Attentive Reasoning (NeurIPS 2020)
Stars: ✭ 14 (-94.64%)
Mutual labels:  deep-reinforcement-learning

[Atari Space Invader logo]

[Build Status] [MIT License] [Gitter]

Work In Progress: Crossed out items have been partially implemented.

Prioritised experience replay [1], persistent advantage learning [2], bootstrapped [3], dueling [4], double [5], deep recurrent [6] Q-network [7] for the Arcade Learning Environment [8] (and custom environments). Or PERPALB(triple-D)RQN for short...

Additional asynchronous agents [9]:

  • One-step Sarsa
  • One-step Q-learning
  • N-step Q-learning
  • Advantage actor-critic

Run th main.lua to run headless, or qlua main.lua to display the game. The main options are -game to choose the ROM (see the ROM directory for more details) and -mode as either train or eval. Saliency maps [10] can also be visualised, optionally using guided [11] or "deconvnet" [12] backpropagation. Saliency map modes are applied at runtime, so they can be applied retrospectively to saved models.
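For example, a typical invocation might look like the following; space_invaders is an illustrative ROM name, and the actual names are listed in the ROM directory.

th main.lua -game space_invaders -mode train
qlua main.lua -game space_invaders -mode eval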

To run experiments based on the hyperparameters specified in the individual papers, use ./run.sh <paper> <game> <args>. <args> can be used to override arguments specified earlier (in the script); for more details see the script itself. By default the code trains on a demo environment called Catch - use ./run.sh demo to run the demo with good default parameters. Note that this code uses CUDA if available, but the Catch network is small enough that it runs faster on CPU. If cuDNN is available, it can be enabled using -cudnn true; note that by default cuDNN is nondeterministic, and its deterministic modes are slower than cutorch.
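For instance, extra arguments are simply appended to the command; here -cudnn true (mentioned above) overrides the script's default:

./run.sh demo
./run.sh demo -cudnn true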

In training mode, quitting with Ctrl+C will be caught and you will be asked whether you would like to save the agent. Note that for non-asynchronous agents the experience replay memory will be included, totalling ~7GB. The main script also automatically saves the last weights (last.weights.t7) and the weights of the best-performing DQN, according to the average validation score (best.weights.t7).
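As a minimal sketch, a saved weights file can be loaded for inspection with torch.load; the relative path below is an assumption, so use the location where the main script actually wrote the file.

require 'torch'

local weights = torch.load('last.weights.t7') -- path assumed; adjust to the file's actual location
print(weights) -- inspect the stored parameters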

In evaluation mode you can create recordings with -record true (requires FFmpeg); this does not require using qlua. Recordings will be stored in the videos directory.
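For example (the ROM name is again illustrative):

th main.lua -game space_invaders -mode eval -record true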

Requirements

Requires Torch7, and can use CUDA and cuDNN if available. Also requires the following extra luarocks packages:

  • luaposix 33.4.0
  • luasocket
  • moses
  • logroll
  • classic
  • torchx
  • rnn
  • dpnn
  • nninit
  • tds
  • xitari
  • alewrap
  • rlenvs

xitari, alewrap and rlenvs can be installed using the following commands:

luarocks install https://raw.githubusercontent.com/lake4790k/xitari/master/xitari-0-0.rockspec
luarocks install https://raw.githubusercontent.com/Kaixhin/alewrap/master/alewrap-0-0.rockspec
luarocks install https://raw.githubusercontent.com/Kaixhin/rlenvs/master/rocks/rlenvs-scm-1.rockspec

Custom

You can use a custom environment (given as the path to a Lua file or as an rlenvs-namespaced environment) using -env, as long as the class returned respects the rlenvs API. One restriction is that the state must be represented as a single tensor (with arbitrary dimensionality), and only a single discrete action can be returned. To prevent massive memory consumption for agents that use experience replay memory, states are discretised to integers ∈ [0, 255], assuming the state is comprised of reals ∈ [0, 1] - this can be disabled with -discretiseMem false. Visual environments can make use of explicit -height, -width and -colorSpace options to perform preprocessing for the network. A minimal sketch of such an environment is given below.
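The following is a minimal sketch of what a custom environment might look like, assuming the rlenvs-style start/step interface; the exact spec formats and method signatures are assumptions that should be checked against the rlenvs documentation.

require 'torch'
local classic = require 'classic'

-- Hypothetical toy environment: a single real-valued state in [0, 1] and two discrete actions
local MyEnv = classic.class('MyEnv')

function MyEnv:_init(opts)
  opts = opts or {}
end

-- State spec: type, dimensionality and range (format assumed; see the rlenvs docs)
function MyEnv:getStateSpec()
  return {'real', 1, {0, 1}}
end

-- Action spec: a single discrete action in {0, 1} (format assumed)
function MyEnv:getActionSpec()
  return {'int', 1, {0, 1}}
end

-- Minimum and maximum reward (format assumed)
function MyEnv:getRewardSpec()
  return 0, 1
end

-- Start a new episode and return the initial state (a single tensor)
function MyEnv:start()
  self.state = torch.Tensor(1):uniform()
  return self.state
end

-- Advance one step; the reward/state/terminal ordering is assumed to follow rlenvs
function MyEnv:step(action)
  local reward = action == 1 and self.state[1] or 0
  self.state:uniform()
  local terminal = torch.uniform() < 0.1
  return reward, self.state, terminal
end

return MyEnv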

If the environment has separate behaviour during training and testing it should also implement training and evaluate methods - otherwise these will be added as empty methods at runtime. The environment can also implement a getDisplay method (along with a mandatory getDisplaySpec method for determining screen size), which will be used for displaying the screen and computing saliency maps; getDisplay must return an RGB (3D) tensor. This can be utilised even if the state itself is not an image (although saliency maps can only be computed for states that are images), and it must be implemented in order to have a visual display or saliency maps. The -zoom factor can be used to increase the size of small displays.
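Continuing the sketch above, display support might be added as follows; the getDisplaySpec return format is an assumption to verify against rlenvs, and the 3x24x24 size is arbitrary.

-- Display spec: an RGB image of size 3x24x24 (format and size assumed)
function MyEnv:getDisplaySpec()
  return {'real', {3, 24, 24}, {0, 1}}
end

-- Return an RGB (3D) tensor used for the screen display and saliency maps
function MyEnv:getDisplay()
  return torch.Tensor(3, 24, 24):fill(self.state[1])
end

-- Optional hooks for environments that behave differently during training and testing
function MyEnv:training() end
function MyEnv:evaluate() end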

Environments are meant to be ephemeral: an instance is first created in order to extract environment details (e.g. the state representation), and is later garbage collected automatically (outside the control of this code).

You can also use a custom model (body) with -modelBody, which replaces the usual DQN convolutional layers with a custom Torch neural network (given as the path to a Lua file or a models-namespaced model). The class must include a createBody method which returns the custom neural network. The model will receive a stack of the previous states (as determined by -histLen), and must reshape them manually if needed. The DQN "heads" will then be constructed as normal, with -hiddenSize used to change the size of the fully connected layer if needed. A sketch of such a body is given below.
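The following is a sketch of such a body; the only documented requirement is the createBody method, so the constructor options and their field names here are assumptions.

local nn = require 'nn'
local classic = require 'classic'

local Body = classic.class('Body')

-- opts is assumed to carry details such as the history length (-histLen); field names are illustrative
function Body:_init(opts)
  opts = opts or {}
  self.histLen = opts.histLen or 4
  self.stateSize = opts.stateSize or 24 * 24
end

-- Return the "body" network; the DQN heads are attached to its output by the main code
function Body:createBody()
  local net = nn.Sequential()
  net:add(nn.View(-1):setNumInputDims(3))                 -- flatten the stack of histLen states
  net:add(nn.Linear(self.histLen * self.stateSize, 128))  -- a small fully connected body
  net:add(nn.ReLU(true))
  return net
end

return Body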

For an example on a GridWorld environment, run ./run.sh demo-grid - the demo also works with qlua and experience replay agents. The custom environment and network can be found in the examples folder.

Results

Single run results from various papers can be seen below. DQN-based agents use ε = 0.001 for evaluation [4, 5].

DQN (Space Invaders) [7]

[DQN results plot]

Double DQN (Space Invaders) [5]

[Double DQN results plot]

Dueling DQN (Space Invaders) [4]

[Dueling DQN results plot]

Persistent Advantage Learning DQN (Asterix) [2]

[PAL DQN results plot]

A3C (Beam Rider) [9]

[A3C results plot]

Acknowledgements

  • @GeorgOstrovski for confirmation on network usage in advantage operators + note on interaction with Double DQN.
  • @schaul for clarifications on prioritised experience replay + dueling DQN hyperparameters.

Citation

If you find this library useful and would like to cite it, the following would be appropriate:

@misc{Atari,
  author = {Arulkumaran, Kai and Keri, Laszlo},
  title = {Kaixhin/Atari},
  url = {https://github.com/Kaixhin/Atari},
  year = {2015}
}

References

[1] Prioritized Experience Replay
[2] Increasing the Action Gap: New Operators for Reinforcement Learning
[3] Deep Exploration via Bootstrapped DQN
[4] Dueling Network Architectures for Deep Reinforcement Learning
[5] Deep Reinforcement Learning with Double Q-learning
[6] Deep Recurrent Q-Learning for Partially Observable MDPs
[7] Playing Atari with Deep Reinforcement Learning
[8] The Arcade Learning Environment: An Evaluation Platform for General Agents
[9] Asynchronous Methods for Deep Reinforcement Learning
[10] Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
[11] Striving for Simplicity: The All Convolutional Net
[12] Visualizing and Understanding Convolutional Networks
