pytorch / ELF

License: other
ELF: a platform for game research with AlphaGoZero/AlphaZero reimplementation

Programming Languages

C++
Python
C
Shell
CMake
Dockerfile
Makefile

Projects that are alternatives to or similar to ELF

Alphazero gomoku
An implementation of the AlphaZero algorithm for Gomoku (also called Gobang or Five in a Row)
Stars: ✭ 2,570 (-20.68%)
Mutual labels:  reinforcement-learning, rl, alphago-zero
Alpha Zero General
A clean implementation based on AlphaZero for any game in any framework + tutorial + Othello/Gobang/TicTacToe/Connect4 and more
Stars: ✭ 2,617 (-19.23%)
Mutual labels:  reinforcement-learning, alphago-zero, alpha-zero
Rl Tutorial Jnrr19
Stable-Baselines tutorial for Journées Nationales de la Recherche en Robotique 2019
Stars: ✭ 204 (-93.7%)
Mutual labels:  reinforcement-learning, rl
Pytorch Drl
PyTorch implementations of various Deep Reinforcement Learning (DRL) algorithms, for both single-agent and multi-agent settings.
Stars: ✭ 233 (-92.81%)
Mutual labels:  reinforcement-learning, rl
Learning To Communicate Pytorch
Learning to Communicate with Deep Multi-Agent Reinforcement Learning in PyTorch
Stars: ✭ 236 (-92.72%)
Mutual labels:  reinforcement-learning, rl
Atari
An AI research environment for Atari 2600 games 🤖.
Stars: ✭ 174 (-94.63%)
Mutual labels:  reinforcement-learning, rl
Rl trading
An environment for training high-frequency trading agents with reinforcement learning
Stars: ✭ 205 (-93.67%)
Mutual labels:  reinforcement-learning, rl
Gymfc
A universal flight control tuning framework
Stars: ✭ 210 (-93.52%)
Mutual labels:  reinforcement-learning, rl
RL-code-resources
A collection of Reinforcement Learning GitHub code resources divided by frameworks and environments
Stars: ✭ 51 (-98.43%)
Mutual labels:  rl, rl-environment
alpha-zero
AlphaZero implementation for Othello, Connect-Four and Tic-Tac-Toe based on "Mastering the game of Go without human knowledge" and "Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm" by DeepMind.
Stars: ✭ 68 (-97.9%)
Mutual labels:  alphago-zero, alpha-zero
Gym Gazebo2
gym-gazebo2 is a toolkit for developing and comparing reinforcement learning algorithms using ROS 2 and Gazebo
Stars: ✭ 257 (-92.07%)
Mutual labels:  reinforcement-learning, rl
Coach
Reinforcement Learning Coach by Intel AI Lab enables easy experimentation with state-of-the-art Reinforcement Learning algorithms
Stars: ✭ 2,085 (-35.65%)
Mutual labels:  reinforcement-learning, rl
Rl Baselines3 Zoo
A collection of pre-trained RL agents using Stable Baselines3, training and hyperparameter optimization included.
Stars: ✭ 161 (-95.03%)
Mutual labels:  reinforcement-learning, rl
Chess Alpha Zero
Chess reinforcement learning by AlphaGo Zero methods.
Stars: ✭ 1,868 (-42.35%)
Mutual labels:  reinforcement-learning, alphago-zero
Rad
RAD: Reinforcement Learning with Augmented Data
Stars: ✭ 268 (-91.73%)
Mutual labels:  reinforcement-learning, rl
Cherry
A PyTorch Library for Reinforcement Learning Research
Stars: ✭ 143 (-95.59%)
Mutual labels:  reinforcement-learning, rl
Pytorch Rl
Tutorials for reinforcement learning in PyTorch and Gym by implementing a few of the popular algorithms. [IN PROGRESS]
Stars: ✭ 121 (-96.27%)
Mutual labels:  reinforcement-learning, rl
Reinforcement learning
Implementation of selected reinforcement learning algorithms in Tensorflow. A3C, DDPG, REINFORCE, DQN, etc.
Stars: ✭ 132 (-95.93%)
Mutual labels:  reinforcement-learning, rl
Corailed
An Unrailed! simulator written in C++ with some reinforcement learning, and an Unrailed! AI written in Python with OpenCV
Stars: ✭ 15 (-99.54%)
Mutual labels:  rl, rl-environment
Matterport3dsimulator
AI Research Platform for Reinforcement Learning from Real Panoramic Images.
Stars: ✭ 260 (-91.98%)
Mutual labels:  reinforcement-learning, rl

ELF

ELF is an Extensive, Lightweight, and Flexible platform for game research. We have used it to build our Go-playing bot, ELF OpenGo, which achieved a 14-0 record versus four global top-30 players in April 2018. The final score was 20-0 (each professional Go player played 5 games).

Please refer to our website for a full overview of ELF OpenGo-related resources, including pretrained models, numerous datasets, and a comprehensive visualization of human Go games throughout history leveraging ELF OpenGo's analysis capabilities.

This version is a successor to the original ELF platform.

DISCLAIMER: this code is early research code. What this means is:

  • It may not work reliably (or at all) on your system.
  • The code quality and documentation are quite lacking, and much of the code might still feel "in-progress".
  • There are quite a few hacks made specifically for our systems and infrastructure.

License

ELF is released under the BSD-style license found in the LICENSE file.

Citing ELF

If you use ELF in your research, please consider citing the original NIPS paper as follows:

@inproceedings{tian2017elf,
  author = {Yuandong Tian and Qucheng Gong and Wenling Shang and Yuxin Wu and C. Lawrence Zitnick},
  title = {ELF: An extensive, lightweight and flexible research platform for real-time strategy games},
  booktitle = {Advances in Neural Information Processing Systems},
  pages = {2656--2666},
  year = {2017}
}

If you use ELF OpenGo or OpenGo-like functionality, please consider citing the technical report as follows:

@inproceedings{tian2019opengo,
  author    = {Yuandong Tian and
               Jerry Ma and
               Qucheng Gong and
               Shubho Sengupta and
               Zhuoyuan Chen and
               James Pinkerton and
               Larry Zitnick},
  title     = {{ELF} OpenGo: an analysis and open reimplementation of AlphaZero},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning,
               {ICML} 2019, 9-15 June 2019, Long Beach, California, {USA}},
  pages     = {6244--6253},
  year      = {2019},
  url       = {http://proceedings.mlr.press/v97/tian19a.html}
}

* Jerry Ma, Qucheng Gong, and Shubho Sengupta contributed equally.

** We also thank Yuxin Wu for his help on this project.

Dependencies

We run ELF using:

  • Ubuntu 18.04
  • Python 3.7
  • GCC 7.3
  • CUDA 10.0
  • CUDNN 7.3
  • NCCL 2.1.2

At the moment, this is the only supported environment. Other environments may also work, but we unfortunately do not have the manpower to investigate compatibility issues.
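
You can quickly check whether your toolchain roughly matches these versions. A minimal sketch (the cuDNN header location varies by installation, and nvcc is only present when the CUDA toolkit is installed):

python3 --version          # expect Python 3.7.x
gcc --version | head -n1   # expect GCC 7.3
cmake --version | head -n1
nvcc --version | tail -n1  # expect CUDA 10.0
grep -A 2 CUDNN_MAJOR /usr/include/cudnn.h   # cuDNN version; header path may vary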

Here are the dependency installation commands for Ubuntu 18.04 and conda:

sudo apt-get install cmake g++ gcc libboost-all-dev libzmq3-dev
conda install numpy zeromq pyzmq

# From the project root
git submodule sync && git submodule update --init --recursive

You also need to install PyTorch 1.0.0 or later:

conda install pytorch torchvision cudatoolkit=10.0 -c pytorch

A Dockerfile has been provided if you wish to build ELF using Docker.
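
For example, a minimal sketch of building and entering the image (the elf tag is illustrative, and GPU passthrough assumes the NVIDIA container runtime is installed):

# From the project root; the "elf" tag is illustrative
docker build -t elf .
# GPU access requires nvidia-docker / the NVIDIA container runtime
docker run --runtime=nvidia -it elf /bin/bash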

Building

cd to the project root and run make to build.
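
For example (the checkout location is illustrative):

cd ~/ELF   # project root
make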

Testing

After building, cd to the project root and run make test to test.
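
For example, from the same checkout:

cd ~/ELF   # project root; path is illustrative
make test  # run after a successful build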

Using ELF

Currently, ELF must be run straight from source. You'll need to run source scripts/devmode_set_pythonpath.sh to augment $PYTHONPATH appropriately.
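
For example:

# From the project root; repeat in each new shell session
source scripts/devmode_set_pythonpath.sh
echo "$PYTHONPATH"   # should now include the ELF source tree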

Training a Go bot

To train a model, please follow these steps (a condensed sketch follows the list):

  1. Build ELF and run source scripts/devmode_set_pythonpath.sh as described above.
  2. Change directory to scripts/elfgames/go/
  3. Edit server_addrs.py to specify the server's IP address. This is the machine that will train the neural network.
  4. Create the directory where the server will write its models. This defaults to myserver.
  5. Run start_server.sh to start the server. We have tested this on a machine with 8 GPUs.
  6. Run start_client.sh to start the clients. The clients should be able to read the model written by the server, so the clients and the server need to mount the same directory via NFS. We have tested this on 2000 clients, each running exclusively on one GPU.
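
Condensing the steps above into a single sketch (the IP address and paths are illustrative, and the exact contents of server_addrs.py are an assumption):

# On the server (the machine that trains the neural network)
cd scripts/elfgames/go/
$EDITOR server_addrs.py   # point it at the server's IP, e.g. 192.168.1.10
mkdir -p myserver         # where the server writes models; share via NFS
./start_server.sh

# On each client, with myserver mounted at the same path via NFS
cd scripts/elfgames/go/
./start_client.sh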

Running a Go bot

Here is a basic set of commands to run the bot and play against it via GTP (the Go Text Protocol):

  1. Build ELF and run source scripts/devmode_set_pythonpath.sh as described above.
  2. Train a model, or grab a pretrained model.
  3. Change directory to scripts/elfgames/go/
  4. Run ./gtp.sh path/to/modelfile.bin --verbose --gpu 0 --num_block 20 --dim 256 --mcts_puct 1.50 --batchsize 16 --mcts_rollout_per_batch 16 --mcts_threads 2 --mcts_rollout_per_thread 8192 --resign_thres 0.05 --mcts_virtual_loss 1

We've found that the above settings work well for playing the bot. You may change mcts_rollout_per_thread to tune the thinking time per move.

After the environment is set up and the model is loaded, you can start typing GTP commands to get responses from the engine.
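
For example, a short session might look like this (engine replies are illustrative; genmove, play, and quit are standard GTP commands, and GTP prefixes successful responses with =):

genmove b   # ask the engine to generate and play a move for Black
= Q16

play w D4   # inform the engine that White played D4
=

quit
=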

Analysis mode

Here are the steps to analyze an existing SGF file:

  1. Build ELF and run source scripts/devmode_set_pythonpath.sh as described above.
  2. Train a model, or grab a pretrained model.
  3. Change directory to scripts/elfgames/go/
  4. Run ./analysis.sh /path/to/model --preload_sgf /path/to/sgf --preload_sgf_move_to [move_number] --dump_record_prefix [tree] --verbose --gpu 0 --mcts_puct 1.50 --batchsize 16 --mcts_rollout_per_batch 16 --mcts_threads 2 --mcts_rollout_per_thread 8192 --resign_thres 0.0 --mcts_virtual_loss 1 --num_games 1

The rollout settings are similar to those above. The process should run automatically after loading the environment, models, and previous moves. You should see the move suggested by the AI after each move, along with its value and prior. This process will also generate a number of tree files, prefixed with tree (you can change this with the --dump_record_prefix option above). The tree files contain the full search at each move, along with its prior and value. To abort the process, simply kill it, as the current implementation runs to the end of the game.
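
For example, an illustrative invocation analyzing a game up to move 120 (the model and SGF paths are placeholders):

./analysis.sh ~/models/pretrained.bin \
    --preload_sgf ~/games/mygame.sgf --preload_sgf_move_to 120 \
    --dump_record_prefix tree --verbose --gpu 0 --mcts_puct 1.50 \
    --batchsize 16 --mcts_rollout_per_batch 16 --mcts_threads 2 \
    --mcts_rollout_per_thread 8192 --resign_thres 0.0 \
    --mcts_virtual_loss 1 --num_games 1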

Ladder tests

We provide a collection of just over 100 ladder scenarios in the ladder_suite/ directory.


Copyright © 2018-present, Facebook, Inc. All rights reserved.