
pekaalto / Sc2aibot

Implementing reinforcement-learning algorithms for pysc2 -environment

Programming Languages

python

Projects that are alternatives to or similar to Sc2aibot

Reinforcement Learning
Learn Deep Reinforcement Learning in 60 days! Lectures & Code in Python. Reinforcement Learning + Deep Learning
Stars: ✭ 3,329 (+3910.84%)
Mutual labels:  reinforcement-learning, ppo, deepmind
Pysc2 Examples
StarCraft II - pysc2 Deep Reinforcement Learning Examples
Stars: ✭ 722 (+769.88%)
Mutual labels:  reinforcement-learning, deepmind
Pytorch Rl
PyTorch implementation of Deep Reinforcement Learning: Policy Gradient methods (TRPO, PPO, A2C) and Generative Adversarial Imitation Learning (GAIL). Fast Fisher vector product TRPO.
Stars: ✭ 658 (+692.77%)
Mutual labels:  reinforcement-learning, ppo
Slm Lab
Modular Deep Reinforcement Learning framework in PyTorch. Companion library of the book "Foundations of Deep Reinforcement Learning".
Stars: ✭ 904 (+989.16%)
Mutual labels:  reinforcement-learning, ppo
Recurrent Environment Simulators
Deepmind Recurrent Environment Simulators paper implementation in tensorflow
Stars: ✭ 73 (-12.05%)
Mutual labels:  reinforcement-learning, deepmind
Hands On Reinforcement Learning With Python
Master Reinforcement and Deep Reinforcement Learning using OpenAI Gym and TensorFlow
Stars: ✭ 640 (+671.08%)
Mutual labels:  reinforcement-learning, ppo
Reinforcement Learning With Tensorflow
Simple reinforcement learning tutorials by 莫烦Python (Chinese AI tutorials)
Stars: ✭ 6,948 (+8271.08%)
Mutual labels:  reinforcement-learning, ppo
Deep Reinforcement Learning
Repo for the Deep Reinforcement Learning Nanodegree program
Stars: ✭ 4,012 (+4733.73%)
Mutual labels:  reinforcement-learning, ppo
Mujocounity
Reproducing MuJoCo benchmarks in a modern, commercial game/physics engine (Unity + PhysX).
Stars: ✭ 47 (-43.37%)
Mutual labels:  reinforcement-learning, deepmind
Learning2run
Our NIPS 2017: Learning to Run source code
Stars: ✭ 57 (-31.33%)
Mutual labels:  reinforcement-learning, ppo
Mario rl
Stars: ✭ 60 (-27.71%)
Mutual labels:  reinforcement-learning, ppo
Elegantrl
Lightweight, efficient and stable implementations of deep reinforcement learning algorithms using PyTorch.
Stars: ✭ 575 (+592.77%)
Mutual labels:  reinforcement-learning, ppo
Reaver
Reaver: Modular Deep Reinforcement Learning Framework. Focused on StarCraft II. Supports Gym, Atari, and MuJoCo.
Stars: ✭ 499 (+501.2%)
Mutual labels:  reinforcement-learning, deepmind
Super Mario Bros Ppo Pytorch
Proximal Policy Optimization (PPO) algorithm for Super Mario Bros
Stars: ✭ 649 (+681.93%)
Mutual labels:  reinforcement-learning, ppo
Autonomous Learning Library
A PyTorch library for building deep reinforcement learning agents.
Stars: ✭ 425 (+412.05%)
Mutual labels:  reinforcement-learning, ppo
Deeprl Tutorials
Contains high quality implementations of Deep Reinforcement Learning algorithms written in PyTorch
Stars: ✭ 748 (+801.2%)
Mutual labels:  reinforcement-learning, ppo
Dmc2gym
OpenAI Gym wrapper for the DeepMind Control Suite
Stars: ✭ 75 (-9.64%)
Mutual labels:  reinforcement-learning, deepmind
Pytorch Cpp Rl
PyTorch C++ Reinforcement Learning
Stars: ✭ 353 (+325.3%)
Mutual labels:  reinforcement-learning, ppo
Lagom
lagom: A PyTorch infrastructure for rapid prototyping of reinforcement learning algorithms.
Stars: ✭ 364 (+338.55%)
Mutual labels:  reinforcement-learning, ppo
Ml In Tf
Get started with Machine Learning in TensorFlow with a selection of good reads and implemented examples!
Stars: ✭ 45 (-45.78%)
Mutual labels:  reinforcement-learning, deepmind


Info

This project implements the FullyConv reinforcement learning agent for pysc2 as specified in https://deepmind.com/documents/110/sc2le.pdf.
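
For orientation, the FullyConv torso described in that paper looks roughly like the sketch below. This is a minimal, illustrative sketch written with TF 1.x tf.layers; the layer sizes follow the paper, but the function name, preprocessing, and exact heads are assumptions, not this repo's code:

import tensorflow as tf

def fully_conv_torso(screen, minimap):
    """Rough FullyConv torso from the SC2LE paper (illustrative only).

    screen and minimap are NHWC tensors of preprocessed feature layers,
    assumed to have the same spatial resolution (e.g. 64x64).
    """
    def branch(x, name):
        with tf.variable_scope(name):
            x = tf.layers.conv2d(x, 16, 5, padding='same', activation=tf.nn.relu)
            x = tf.layers.conv2d(x, 32, 3, padding='same', activation=tf.nn.relu)
        return x

    # Concatenated screen/minimap features form the spatial state.
    # (The paper also broadcasts non-spatial features into this state;
    # as noted in the differences list below, that part is discarded here.)
    state = tf.concat([branch(screen, 'screen'), branch(minimap, 'minimap')], axis=-1)

    # Spatial argument policy: a 1x1 conv giving one logit per screen pixel.
    spatial_logits = tf.layers.flatten(tf.layers.conv2d(state, 1, 1))

    # Non-spatial head: fully connected layer feeding the value estimate
    # (the action-id policy would come from the same layer).
    fc = tf.layers.dense(tf.layers.flatten(state), 256, activation=tf.nn.relu)
    value = tf.squeeze(tf.layers.dense(fc, 1), axis=-1)
    return spatial_logits, value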

It's possible to use

  • A2C, which is a synchronous version of the A3C used in the DeepMind paper
  • PPO (Proximal Policy Optimization)
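
The two differ mainly in the policy loss. Roughly, in plain NumPy (an illustrative sketch with a commonly used clip value, not this repo's exact code):

import numpy as np

def a2c_policy_loss(log_probs, advantages):
    # A2C: vanilla policy gradient weighted by the advantage estimates.
    return -np.mean(log_probs * advantages)

def ppo_policy_loss(log_probs, old_log_probs, advantages, clip_eps=0.2):
    # PPO: clipped surrogate objective. The ratio compares the current policy
    # to the one that collected the data; clipping keeps each update small.
    ratio = np.exp(log_probs - old_log_probs)
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps)
    return -np.mean(np.minimum(ratio * advantages, clipped * advantages))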

Differences from the DeepMind spec:

  • A2C or PPO is used instead of A3C
  • The non-spatial feature vector is discarded here (probably the reason the agent can't learn CollectMineralsAndGas)
  • There are some other minor simplifications to the observation space
  • Different hyper-parameters are used
  • For the select rectangle, a rectangle of radius 5 px is drawn around the selected point (it is unclear how DeepMind handles this); see the sketch after this list
  • No function arguments other than the spatial one are learned here
  • And maybe others that I'm not aware of
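
For illustration, the select-rectangle handling mentioned above could look roughly like this (the 5 px radius is from the list above; the function name and screen size are assumptions, not this repo's exact code):

def point_to_select_rect(x, y, radius=5, screen_size=64):
    """Turn the single spatial point output by the policy into a
    selection rectangle, clamped to the screen bounds."""
    x0, y0 = max(x - radius, 0), max(y - radius, 0)
    x1 = min(x + radius, screen_size - 1)
    y1 = min(y + radius, screen_size - 1)
    # Two corner points, e.g. for pysc2's select_rect screen/screen2 arguments.
    return [x0, y0], [x1, y1]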

Results

Map                         | Avg score A2C | Avg score PPO | DeepMind avg
MoveToBeacon                | 25            | 26            | 26
CollectMineralShards        | 91            | 100           | 103
DefeatZerglingsAndBanelings | 48            | ?             | 62
FindAndDefeatZerglings      | 42            | 45            | 45
DefeatRoaches               | 70-90         | ?             | 100

Training graphs A2C:

Training graphs PPO:

  • The default parameters seen in the repo were used, except:
    • DefeatRoaches and DefeatZerglingsAndBanelings: entropy_weights 1e-4/1e-4, n_steps_per_batch 5
    • Number of envs 32 or 64
  • DeepMind scores from the FullyConv policy in the release paper are shown for comparison.
  • The model here wasn't able to learn CollectMineralsAndGas or BuildMarines.

In DefeatRoaches and DefeatZerglingsAndBanelings the results are not stable. It took something like 5 runs to get the DefeatRoaches score reported here, and the scores for those maps are still considerably worse than DeepMind's. At the very least the hyper-parameters here are probably off (and possibly other things).

The other environments seem more stable.

Training was done using one core of a Tesla K80 GPU per environment.

With PPO the scores were slightly better than with A2C in the tested environments, but training took considerably longer; other PPO parameters might give faster training. On the other hand, training with PPO seems more stable: the typical sigmoid shape of the A2C learning curves doesn't appear.

Note:

The training is not deterministic, and training time can vary even when nothing is changed. For example, MoveToBeacon was trained 5 times with the default parameters and 64 environments; these are the episode numbers at which the agent first reached a score of 27:

4674
3079
2355
1231
6358

How to run

python run_agent.py --map_name MoveToBeacon --model_name my_beacon_model --n_envs 32

This will save

  • tf summaries to _files/summaries/my_beacon_model/
  • model to _files/models/my_beacon_model

relative to the project path. A2C is used by default; to run PPO, specify --agent_mode ppo, as in the example below.
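
For example, the same beacon map trained with PPO (the model name here is just an illustration):

python run_agent.py --map_name MoveToBeacon --model_name my_beacon_ppo --agent_mode ppo --n_envs 32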

See run_agent.py for more arguments.

Requirements

  • Python 3 (will NOT work with Python 2)
  • pysc2 (tested with v1.2)
  • TensorFlow (tested with 1.4.0)
  • Other standard Python packages such as numpy

The code is tested on OS X and Linux. Windows is untested; let me know if there are issues.

References

I have borrowed some ideas from https://github.com/xhujoy/pysc2-agents (the FullyConv network, etc.) and OpenAI's baselines (A2C and PPO), but the implementation here differs from both. For the parallel environments, the code from baselines is used, adapted for sc2.
