
microsoft / logrl

Licence: other
Logarithmic Reinforcement Learning

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to logrl

Ros2learn
ROS 2 enabled Machine Learning algorithms
Stars: ✭ 119 (+376%)
Mutual labels:  dqn, rl
Mushroom Rl
Python library for Reinforcement Learning.
Stars: ✭ 442 (+1668%)
Mutual labels:  dqn, rl
Learning To Communicate Pytorch
Learning to Communicate with Deep Multi-Agent Reinforcement Learning in PyTorch
Stars: ✭ 236 (+844%)
Mutual labels:  dqn, rl
Dqn Flappybird
Play flappy bird with DQN, a demo for reinforcement learning, implemented using PyTorch
Stars: ✭ 37 (+48%)
Mutual labels:  dqn, rl
Atari
AI research environment for the Atari 2600 games 🤖.
Stars: ✭ 174 (+596%)
Mutual labels:  dqn, rl
Pytorch Drl
PyTorch implementations of various Deep Reinforcement Learning (DRL) algorithms for both single agent and multi-agent.
Stars: ✭ 233 (+832%)
Mutual labels:  dqn, rl
Tianshou
An elegant PyTorch deep reinforcement learning library.
Stars: ✭ 4,109 (+16336%)
Mutual labels:  dqn, rl
reinforcement learning with Tensorflow
Minimal implementations of reinforcement learning algorithms by Tensorflow
Stars: ✭ 28 (+12%)
Mutual labels:  dqn
gym-rs
OpenAI's Gym written in pure Rust for blazingly fast performance
Stars: ✭ 34 (+36%)
Mutual labels:  rl
omd
JAX code for the paper "Control-Oriented Model-Based Reinforcement Learning with Implicit Differentiation"
Stars: ✭ 43 (+72%)
Mutual labels:  dqn
king-pong
Deep Reinforcement Learning Pong Agent, King Pong, he's the best
Stars: ✭ 23 (-8%)
Mutual labels:  dqn
pytorch-distributed
Ape-X DQN & DDPG with pytorch & tensorboard
Stars: ✭ 98 (+292%)
Mutual labels:  dqn
revisiting rainbow
Revisiting Rainbow
Stars: ✭ 71 (+184%)
Mutual labels:  rl
breakout-Deep-Q-Network
Reinforcement Learning | tensorflow implementation of DQN, Dueling DQN and Double DQN performed on Atari Breakout
Stars: ✭ 69 (+176%)
Mutual labels:  dqn
Pytorch-PCGrad
Pytorch reimplementation for "Gradient Surgery for Multi-Task Learning"
Stars: ✭ 179 (+616%)
Mutual labels:  rl
DQN-using-PyTorch-and-ML-Agents
A simple example of how to implement vector based DQN using PyTorch and a ML-Agents environment
Stars: ✭ 81 (+224%)
Mutual labels:  dqn
fresh-coupons
Buy paid Udemy courses for free/with discount without extra step! See all discounted courses right in Udemy and apply coupon code automatically or with a single click.
Stars: ✭ 24 (-4%)
Mutual labels:  discount
AI BIG DATAS ALGORITHM
Case projects related to big data, artificial intelligence, and data structures
Stars: ✭ 28 (+12%)
Mutual labels:  dqn
Rainy
☔ Deep RL agents with PyTorch☔
Stars: ✭ 39 (+56%)
Mutual labels:  dqn
dqn-obstacle-avoidance
Deep Reinforcement Learning for Fixed-Wing Flight Control with Deep Q-Network
Stars: ✭ 57 (+128%)
Mutual labels:  dqn

Logarithmic Reinforcement Learning

This repository hosts sample code for the NeurIPS 2019 paper "Using a Logarithmic Mapping to Enable Lower Discount Factors in Reinforcement Learning" by van Seijen, Fatemi, and Tavakoli (2019).

We provide code for the linear experiments of the paper as well as for the deep RL Atari 2600 experiments (LogDQN).
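
The core idea of the paper is to learn action values under a logarithmic mapping, which makes value estimates less sensitive to the scale distortions introduced by low discount factors. The sketch below is an illustrative simplification and not the code in this repository: the mapping f, the parameters c and delta, and the function log_q_update are hypothetical names, and the paper's full algorithm additionally decomposes values into positive and negative components to handle negative rewards.

import numpy as np

# Illustrative sketch only (not the repository's code); assumes non-negative returns.
c, delta = 1.0, 1e-2                 # hypothetical mapping parameters

def f(x):
    # Logarithmic mapping applied to values before storing them.
    return c * np.log(x + delta)

def f_inv(y):
    # Inverse mapping back to the regular value space.
    return np.exp(y / c) - delta

def log_q_update(q_tilde, s, a, r, s_next, alpha=0.1, gamma=0.96):
    # One tabular Q-learning step performed in the mapped (logarithmic) space:
    # bootstrap in the regular space, then map the target back before updating.
    target = f(r + gamma * f_inv(np.max(q_tilde[s_next])))
    q_tilde[s, a] += alpha * (target - q_tilde[s, a])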

LICENSE

Microsoft Open Source Code of Conduct

The code for LogDQN was developed by Arash Tavakoli, and the code for the linear experiments was developed by Harm van Seijen.

Citing

If you use this research in your work, please cite the accompanying paper:

@inproceedings{vanseijen2019logrl,
  title={Using a Logarithmic Mapping to Enable Lower Discount Factors in Reinforcement Learning},
  author={van Seijen, Harm and
          Fatemi, Mehdi and 
          Tavakoli, Arash},
  booktitle={Advances in Neural Information Processing Systems},
  year={2019}
}

Linear Experiments

First, navigate to the linear_experiments folder.

To create result files:

python main.py

To visualize result files:

python show_results.py

With the default settings (i.e., keeping main.py unchanged), a scan over different discount factors (gamma values) is performed with a tile width of 2, using the version of Q-learning without a logarithmic mapping.

All experimental settings can be found at the top of the main.py file. To run the logarithmic-mapping version of Q-learning, set:

agent_settings['log_mapping'] = True

Results of the full scans are provided. To visualize these results for regular Q-learning or logarithmic Q-learning, set filename in show_results.py to full_scan_reg or full_scan_log, respectively.
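
For example, assuming filename is a plain string variable set near the top of show_results.py, the change amounts to:

filename = 'full_scan_log'   # or 'full_scan_reg' for regular Q-learning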


Logarithmic Deep Q-Network (LogDQN)

This part presents an implementation of LogDQN from van Seijen, Fatemi, and Tavakoli (2019).

Instructions

Our implementation of LogDQN builds on Dopamine (Castro et al., 2018), a TensorFlow-based research framework for fast prototyping of reinforcement learning algorithms.

Follow the instructions below to install the LogDQN package along with a compatible version of Dopamine and their dependencies inside a conda environment.

First install Anaconda, and then proceed below.

conda create --name log-env python=3.6 
conda activate log-env

Ubuntu

sudo apt-get update && sudo apt-get install cmake zlib1g-dev
pip install absl-py atari-py gin-config gym opencv-python tensorflow==1.15rc3
pip install git+https://github.com/google/dopamine.git@a59d5d6c68b1a6e790d5808c550ae0f51d3e85ce

Finally, navigate to the log_dqn_experiments folder and install the LogDQN package from source:

cd log_dqn_experiments
pip install .
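
As a quick sanity check that the installation succeeded, the imports below should run without errors inside the log-env environment. This assumes the package installs a top-level log_dqn module, as the training command in the next section suggests.

# Sanity check (assumes 'log_dqn' is the installed module name)
python -c "import dopamine, log_dqn"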

Training an agent

To run a LogDQN agent:

python -um log_dqn.train_atari \
    --agent_name=log_dqn \
    --base_dir=/tmp/log_dqn \
    --gin_files='log_dqn/log_dqn.gin' \
    --gin_bindings="Runner.game_name = \"Asterix\"" \
    --gin_bindings="LogDQNAgent.tf_device=\"/gpu:0\""

You can set LogDQNAgent.tf_device to /cpu:* for a non-GPU version.
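
For example, the same command restricted to the CPU would look like:

python -um log_dqn.train_atari \
    --agent_name=log_dqn \
    --base_dir=/tmp/log_dqn \
    --gin_files='log_dqn/log_dqn.gin' \
    --gin_bindings="Runner.game_name = \"Asterix\"" \
    --gin_bindings="LogDQNAgent.tf_device=\"/cpu:*\""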
