vladfi1 / Phillip

License: GPL-3.0
The SSBM "Phillip" AI.


The Phillip AI

An SSBM player based on Deep Reinforcement Learning.

Requirements

Tested on: Ubuntu >=14.04, OSX, Windows 7/8/10.

  1. The dolphin emulator. You will probably need to compile from source on Linux. On Windows you'll need to install a custom dolphin version - just unpack the zip somewhere.
  2. The SSBM iso image. Tested with NTSC 1.02, but other versions will probably work too.
  3. Python 3. On Windows, you can use Anaconda, which sets up the necessary paths. You can also use the Linux subsystem on Windows 10.
  4. pip3 install tensorflow==1.14, or tensorflow-gpu==1.14 if you plan on training with an NVIDIA GPU. Phillip doesn't declare a TensorFlow dependency itself, so you can choose which variant you want to use.
  5. Install phillip.
cd path/to/phillip # future commands should be run from here
pip3 install -e . # "." is the path to the current directory - don't omit!

Installing in editable mode (-e) allows you to make local changes without reinstalling, which is useful if you are using a cloned repo and want to update by pulling (git pull).

If cloning, you may wish to use --depth 1 to avoid downloading large files from phillip's history (most of which are now gone and should be purged from git). These are the saved agents, which are in the process of being moved to Git Large File Storage; currently the best agents, such as agents/delay18/FalcoBF, live there. To fetch them:

sudo apt-get install git-lfs # on ubuntu; for other systems see the website
git-lfs install
git-lfs pull

As an alternative, you can download a zip with all the agents.

Play

You will need to know where dolphin is located. On Mac the dolphin path will be ~/../../Applications/Dolphin.app/Contents/MacOS/Dolphin. If dolphin-emu is already on your PATH then you can omit the --exe flag.
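If you'd rather locate Dolphin programmatically, here is a minimal sketch. The `find_dolphin` helper and its per-platform fallbacks are illustrative assumptions based on the paths mentioned in this README, not part of phillip itself:

```python
import platform
import shutil

def find_dolphin(explicit_path=None):
    """Best-effort search for the Dolphin executable.

    Order: an explicit --exe style path, then the PATH lookup
    that lets you omit --exe, then a common per-platform default
    (the defaults below are assumptions, not part of phillip).
    Returns None if nothing is found.
    """
    if explicit_path:
        return explicit_path
    on_path = shutil.which("dolphin-emu")
    if on_path:
        return on_path
    defaults = {
        "Darwin": "/Applications/Dolphin.app/Contents/MacOS/Dolphin",
        "Windows": r"Binary\x64\Dolphin.exe",
    }
    return defaults.get(platform.system())
```
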

python3 phillip/run.py --gui --human --start 0 --reload 0 --epsilon 0 --load agents/FalconFalconBF --iso /path/to/SSBM.iso --exe /path/to/dolphin [--windows]

Trained agents are stored in the agents directory. Aside from FalconFalconBF, the agents in agents/delay0/ are also fairly strong. Run with --help to see all options.
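For scripted runs, the invocation above can be assembled in Python. `play_command` is a hypothetical helper that simply mirrors the flags shown in the example command; the iso and exe paths are placeholders you must replace with your own:

```python
def play_command(iso, exe, agent="agents/FalconFalconBF", windows=False):
    """Build the argv list for the Play invocation shown above."""
    cmd = [
        "python3", "phillip/run.py",
        "--gui", "--human",
        "--start", "0", "--reload", "0", "--epsilon", "0",
        "--load", agent,
        "--iso", iso,
        "--exe", exe,
    ]
    if windows:
        cmd.append("--windows")
    return cmd  # pass to subprocess.run(cmd, check=True) to launch
```
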

Windows Notes

  • The --exe will be the path to the Binary\x64\Dolphin.exe you unzipped. In general, forward slashes (/) should be backslashes (\) in all paths, unless you are using MinGW, Cygwin, git bash, or some other unix shell emulator.
  • You may need to omit the 3 from commands like python3 and pip3.
  • If not using Anaconda, you will likely need to modify your PATH so that python is visible to the command prompt.
  • Communication with dolphin is done over the local loopback interface, enabled with the --tcp 1 flag (now implied by --windows). You may also need to open port 5555 in your firewall.
  • On Windows 10 you can do everything in the Linux subsystem and follow the Linux instructions, except for obtaining dolphin. You will need to pass in an explicit user directory with --user tmp (the temp directories that Python creates start with /tmp/... and aren't valid for the Windows dolphin).
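To check that the loopback port is actually available before launching, a quick sketch may help. `loopback_port_free` is an illustrative helper, not part of phillip; 5555 is the default port mentioned in the notes above:

```python
import socket

def loopback_port_free(port=5555):
    """Return True if the given loopback port can be bound,
    i.e. nothing else is already listening on it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind(("127.0.0.1", port))
            return True
        except OSError:
            return False
```
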

Train

Training is controlled by phillip/train.py. See also runner.py and launcher.py for massively parallel training on Slurm clusters. Phillip has been trained at the MGHPCC. It is recommended to train with a custom dolphin that uses zmq to synchronize with the AI; the commands below will likely fail otherwise.

Local training is also possible. First, edit runner.py with your desired training params (advanced). Then do:

python3 runner.py # will output a path
python3 launcher.py saves/path/ --init --local [--agents number_of_agents] [--log_agents]

To view stats during training:

tensorboard --logdir logs/

The trainer and (optionally) agents redirect their stdout/err to slurm_logs/. To end training:

kill $(cat saves/path/pids)
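The shell one-liner above can also be expressed in Python. `stop_training` is a hypothetical helper that assumes the pid file contains whitespace-separated process ids, as the kill command implies:

```python
import os
import signal

def stop_training(pid_file="saves/path/pids"):
    """Equivalent of `kill $(cat saves/path/pids)`: read the
    whitespace-separated pids and send each one SIGTERM."""
    with open(pid_file) as f:
        pids = [int(p) for p in f.read().split()]
    for pid in pids:
        try:
            os.kill(pid, signal.SIGTERM)
        except ProcessLookupError:
            pass  # process already exited
```
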

To resume training, run launcher.py again but omit --init (passing --init again would overwrite your old network).

Training on Windows is not supported.

Thanks to microsoftv, there is now an instructional video as well!

Support

Come to the Discord!

Recordings

I've been streaming practice play over at http://twitch.tv/x_pilot. There are also some recordings on my YouTube channel.

Credits

Big thanks to altf4 for getting me started, and to spxtr for a python memory watcher. Some code for dolphin interaction has been borrowed from both projects (mostly the latter now that I've switched to pure python).
