
blanyal / alpha-zero

License: MIT
AlphaZero implementation for Othello, Connect-Four and Tic-Tac-Toe based on "Mastering the game of Go without human knowledge" and "Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm" by DeepMind.

Programming Languages

Python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to alpha-zero

alphazero
Board Game Reinforcement Learning using AlphaZero method. including Makhos (Thai Checkers), Reversi, Connect Four, Tic-tac-toe game rules
Stars: ✭ 24 (-64.71%)
Mutual labels:  tic-tac-toe, connect-four, reversi, othello, alphago-zero, alphazero
Alpha Zero General
A clean implementation based on AlphaZero for any game in any framework + tutorial + Othello/Gobang/TicTacToe/Connect4 and more
Stars: ✭ 2,617 (+3748.53%)
Mutual labels:  mcts, othello, alphago-zero, alpha-zero, alphazero, self-play
Alphazero gomoku
An implementation of the AlphaZero algorithm for Gomoku (also called Gobang or Five in a Row)
Stars: ✭ 2,570 (+3679.41%)
Mutual labels:  mcts, alphago-zero, alphazero
Deep-Reinforcement-Learning-for-Boardgames
Master Thesis project that provides a training framework for two player games. TicTacToe and Othello have already been implemented.
Stars: ✭ 17 (-75%)
Mutual labels:  othello, tictactoe, connect4
terminally bored terminal board games
board games for your terminal!
Stars: ✭ 53 (-22.06%)
Mutual labels:  connect-four, connect4
pedax
Reversi Board with edax, which is the strongest reversi engine.
Stars: ✭ 18 (-73.53%)
Mutual labels:  reversi, othello
alphastone
Using self-play, MCTS, and a deep neural network to create a hearthstone ai player
Stars: ✭ 24 (-64.71%)
Mutual labels:  alpha-zero, self-play
alphaFive
An AlphaGo-style implementation of Gomoku (also known as gobang or five-in-a-row)
Stars: ✭ 51 (-25%)
Mutual labels:  alphago-zero, alphazero
Elf
ELF: a platform for game research with AlphaGoZero/AlphaZero reimplementation
Stars: ✭ 3,240 (+4664.71%)
Mutual labels:  alphago-zero, alpha-zero
tictacNET
Solving Tic-Tac-Toe with Neural Networks.
Stars: ✭ 17 (-75%)
Mutual labels:  tic-tac-toe, tictactoe
muzero
A clean implementation of MuZero and AlphaZero following the AlphaZero General framework. Train and Pit both algorithms against each other, and investigate reliability of learned MuZero MDP models.
Stars: ✭ 126 (+85.29%)
Mutual labels:  mcts, alphazero
reversi
Multiplayer Reversi Game on Internet Computer
Stars: ✭ 62 (-8.82%)
Mutual labels:  reversi, othello
TicTacToe-SwiftUI
Unidirectional data flow tic-tac-toe sample with SwiftUI.
Stars: ✭ 22 (-67.65%)
Mutual labels:  tic-tac-toe, tictactoe
saltzero
Machine learning bot for ultimate tic-tac-toe based on DeepMind's AlphaGo Zero paper. C++ and Python.
Stars: ✭ 27 (-60.29%)
Mutual labels:  alphago-zero, alphazero
connect4-alpha-zero
Connect4 reinforcement learning by AlphaGo Zero methods.
Stars: ✭ 102 (+50%)
Mutual labels:  connect4, alphago-zero
AlphaZero-Renju
No description or website provided.
Stars: ✭ 17 (-75%)
Mutual labels:  alpha-zero, alphazero
tictactoe-ai-tfjs
Train your own TensorFlow.js Tic Tac Toe
Stars: ✭ 45 (-33.82%)
Mutual labels:  tic-tac-toe, tictactoe
UCThello
UCThello - a board game demonstrator (Othello variant) with computer AI using Monte Carlo Tree Search (MCTS) with UCB (Upper Confidence Bounds) applied to trees (UCT in short)
Stars: ✭ 26 (-61.76%)
Mutual labels:  mcts, othello
AlphaZero Gobang
Deep Learning big homework of UCAS
Stars: ✭ 29 (-57.35%)
Mutual labels:  mcts, alphazero
MCTS-agent-python
Monte Carlo Tree Search (MCTS) is a method for finding optimal decisions in a given domain by taking random samples in the decision space and building a search tree accordingly. It has already had a profound impact on Artificial Intelligence (AI) approaches for domains that can be represented as trees of sequential decisions, particularly games …
Stars: ✭ 22 (-67.65%)
Mutual labels:  mcts

alpha-zero

AlphaZero implementation based on "Mastering the game of Go without human knowledge" and "Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm" by DeepMind.

The algorithm learns to play games like Chess and Go without any human knowledge. It uses Monte Carlo Tree Search guided by a deep residual network to evaluate board states and select the most promising moves.
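At each search step, AlphaZero-style MCTS ranks child nodes with the PUCT rule, balancing the network's value estimate against its policy prior. The sketch below is illustrative, not the repository's actual code; the helper names and the dictionary node representation are assumptions, and `c_puct` corresponds to the `--c_puct` option described under Options:

```python
import math

def puct_score(q, p, n_child, n_parent, c_puct=1.0):
    """PUCT value used by AlphaZero-style MCTS to rank child nodes.

    q        -- mean action value of the child (exploitation term)
    p        -- prior probability from the policy network
    n_child  -- visit count of the child
    n_parent -- total visit count of the parent
    c_puct   -- exploration constant (the --c_puct option)
    """
    u = c_puct * p * math.sqrt(n_parent) / (1 + n_child)
    return q + u

def select_child(children, c_puct=1.0):
    """Pick the child with the highest PUCT score."""
    n_parent = sum(c["n"] for c in children)
    return max(children,
               key=lambda c: puct_score(c["q"], c["p"], c["n"], n_parent, c_puct))

# Two candidate moves: a well-explored strong move vs. an unvisited one.
children = [
    {"move": "a", "q": 0.6, "p": 0.5, "n": 10},
    {"move": "b", "q": 0.0, "p": 0.5, "n": 0},
]
best = select_child(children, c_puct=4.0)  # the unvisited move wins here
```

Note that with a large enough `c_puct`, the exploration term dominates and the unvisited move is chosen despite its lower value estimate; smaller values make the search greedier.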

Games implemented:

  1. Tic Tac Toe
  2. Othello
  3. Connect Four

Requirements

  • TensorFlow (Tested on 1.4.0)
  • NumPy
  • Python 3

Usage

To train the model from scratch:

python main.py --load_model 0

To train the model using the previous best model as a starting point:

python main.py --load_model 1

To play a game vs the previous best model:

python main.py --load_model 1 --human_play 1
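The training commands above can also be combined with the options listed below. For example, a fresh Othello training run that overrides a few defaults might look like this (the flag values are illustrative, not recommended settings):

```shell
python main.py --load_model 0 \
    --game 1 \
    --num_iterations 10 \
    --num_games 50 \
    --num_mcts_sims 100
```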

Options:

  • --num_iterations: Number of iterations.
  • --num_games: Number of self play games played during each iteration.
  • --num_mcts_sims: Number of MCTS simulations per game.
  • --c_puct: The level of exploration used in MCTS.
  • --l2_val: The level of L2 weight regularization used during training.
  • --momentum: Momentum Parameter for the momentum optimizer.
  • --learning_rate: Learning Rate for the momentum optimizer.
  • --t_policy_val: Value for policy prediction.
  • --temp_init: Initial Temperature parameter to control exploration.
  • --temp_final: Final Temperature parameter to control exploration.
  • --temp_thresh: Threshold at which the temperature switches from temp_init to temp_final.
  • --epochs: Number of epochs during training.
  • --batch_size: Batch size for training.
  • --dirichlet_alpha: Alpha value for Dirichlet noise.
  • --epsilon: Value of epsilon for calculating Dirichlet noise.
  • --model_directory: Name of the directory to store models.
  • --num_eval_games: Number of self-play games to play for evaluation.
  • --eval_win_rate: Win rate needed to be the best model.
  • --load_model: Binary to initialize the network with the best model.
  • --human_play: Binary to play as a Human vs the AI.
  • --resnet_blocks: Number of residual blocks in the resnet.
  • --record_loss: Binary to record policy and value loss to a file.
  • --loss_file: Name of the file to record loss.
  • --game: Number of the game. 0: Tic Tac Toe, 1: Othello.
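Several of these options map onto two standard AlphaZero mechanisms: the temperature turns root visit counts into a move distribution π(a) ∝ N(a)^(1/τ), and Dirichlet noise is mixed into the root priors to encourage exploration during self-play. The following is a minimal sketch under those assumptions, not the repository's actual code; function names are illustrative:

```python
import numpy as np

def visit_policy(counts, temp):
    """pi(a) proportional to N(a)^(1/tau): turn root visit counts into a
    move distribution. --temp_init / --temp_final / --temp_thresh control
    how tau changes over the course of a game."""
    counts = np.asarray(counts, dtype=float)
    if temp == 0:                      # tau -> 0: play greedily
        pi = np.zeros_like(counts)
        pi[np.argmax(counts)] = 1.0
        return pi
    scaled = counts ** (1.0 / temp)
    return scaled / scaled.sum()

def add_root_noise(priors, alpha, epsilon, rng):
    """Mix Dirichlet noise into the root priors: (1 - eps) * p + eps * Dir(alpha).
    Corresponds to --dirichlet_alpha and --epsilon."""
    noise = rng.dirichlet([alpha] * len(priors))
    return (1.0 - epsilon) * np.asarray(priors) + epsilon * noise

counts = [10, 30, 60]
early = visit_policy(counts, temp=1.0)   # proportional: [0.1, 0.3, 0.6]
late = visit_policy(counts, temp=0)      # deterministic: [0, 0, 1]
noisy = add_root_noise([0.2, 0.3, 0.5], alpha=0.3, epsilon=0.25,
                       rng=np.random.default_rng(0))
```

A high temperature early in the game keeps self-play diverse, while switching to τ = 0 after `temp_thresh` makes the later moves deterministic; the noise is applied only at the root, so deeper search nodes still use the raw network priors.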

License

MIT License

Copyright (c) 2018 Blanyal D'Souza

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.