toybox-rs / Toybox

The Machine Learning Toybox for testing the behavior of autonomous agents.


Projects that are alternatives of or similar to Toybox

Chatbot cn
A chatbot for the finance and legal domains (with some chit-chat capability). Its main modules include information extraction, NLU, NLG, and a knowledge graph; a Django-based front end is integrated, and RESTful interfaces for the NLP and KG components are already provided.
Stars: ✭ 791 (+3064%)
Mutual labels:  reinforcement-learning
Bombora
My experimentations with Reinforcement Learning in Pytorch
Stars: ✭ 18 (-28%)
Mutual labels:  reinforcement-learning
Ciff
Cornell Instruction Following Framework
Stars: ✭ 23 (-8%)
Mutual labels:  reinforcement-learning
Deeprec
A collection of classic and cutting-edge industry papers and resources on recommendation and advertising / Must-read Papers on Recommendation System and CTR Prediction
Stars: ✭ 822 (+3188%)
Mutual labels:  reinforcement-learning
Textworld
TextWorld is a sandbox learning environment for the training and evaluation of reinforcement learning (RL) agents on text-based games.
Stars: ✭ 895 (+3480%)
Mutual labels:  reinforcement-learning
Slm Lab
Modular Deep Reinforcement Learning framework in PyTorch. Companion library of the book "Foundations of Deep Reinforcement Learning".
Stars: ✭ 904 (+3516%)
Mutual labels:  reinforcement-learning
Coursera
Quiz & Assignment of Coursera
Stars: ✭ 774 (+2996%)
Mutual labels:  reinforcement-learning
Deepgtav
A plugin for GTAV that transforms it into a vision-based self-driving car research environment.
Stars: ✭ 926 (+3604%)
Mutual labels:  reinforcement-learning
Aim
Aim — a super-easy way to record, search and compare 1000s of ML training runs
Stars: ✭ 894 (+3476%)
Mutual labels:  reinforcement-learning
Advanced Deep Learning With Keras
Advanced Deep Learning with Keras, published by Packt
Stars: ✭ 917 (+3568%)
Mutual labels:  reinforcement-learning
Tensorlayer
Deep Learning and Reinforcement Learning Library for Scientists and Engineers 🔥
Stars: ✭ 6,796 (+27084%)
Mutual labels:  reinforcement-learning
Pygame Learning Environment
PyGame Learning Environment (PLE) -- Reinforcement Learning Environment in Python.
Stars: ✭ 828 (+3212%)
Mutual labels:  reinforcement-learning
Walk the blocks
Implementation of Scheduled Policy Optimization for task-oriented language grounding
Stars: ✭ 22 (-12%)
Mutual labels:  reinforcement-learning
Tradinggym
Trading and Backtesting environment for training reinforcement learning agent or simple rule base algo.
Stars: ✭ 813 (+3152%)
Mutual labels:  reinforcement-learning
Unity Ml Environments
This repository features game simulations as machine learning environments to experiment with deep learning approaches such as deep reinforcement learning inside of Unity.
Stars: ✭ 23 (-8%)
Mutual labels:  reinforcement-learning
Super Mario Bros A3c Pytorch
Asynchronous Advantage Actor-Critic (A3C) algorithm for Super Mario Bros
Stars: ✭ 775 (+3000%)
Mutual labels:  reinforcement-learning
Sc2atari
Convert sc2 environment to gym-atari and play some mini-games
Stars: ✭ 19 (-24%)
Mutual labels:  reinforcement-learning
Chainerrl
ChainerRL is a deep reinforcement learning library built on top of Chainer.
Stars: ✭ 931 (+3624%)
Mutual labels:  reinforcement-learning
Deeplearning Trader
backtrader with DRL ( Deep Reinforcement Learning)
Stars: ✭ 24 (-4%)
Mutual labels:  reinforcement-learning
Paac.pytorch
Pytorch implementation of the PAAC algorithm presented in Efficient Parallel Methods for Deep Reinforcement Learning https://arxiv.org/abs/1705.04862
Stars: ✭ 22 (-12%)
Mutual labels:  reinforcement-learning

The Reinforcement Learning Toybox

A set of games designed for testing deep RL agents.

If you use this code, or are otherwise inspired by our white-box testing approach, please cite our NeurIPS workshop paper:

@inproceedings{foley2018toybox,
  title={{Toybox: Better Atari Environments for Testing Reinforcement Learning Agents}},
  author={Foley, John and Tosch, Emma and Clary, Kaleigh and Jensen, David},
  booktitle={{NeurIPS 2018 Workshop on Systems for ML}},
  year={2018}
}

We have a lengthier paper on arXiv and can provide a draft of a non-public paper on our acceptance testing framework on request (email etosch at cs dot umass dot edu).

How accurate are your games?

Watch four minutes of agents playing each game. Both the ALE and Toybox implementations have their idiosyncrasies, but the core gameplay and concepts have been captured. Pull requests to improve fidelity are always welcome.

Where is the actual Rust code?

The Rust implementations of the games have moved to a separate repository: toybox-rs/toybox-rs

Installation

pip install ctoybox
pip install git+https://github.com/toybox-rs/Toybox
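
A quick way to confirm the install worked is to import both packages (ctoybox provides the Rust-backed game bindings; toybox is the Python framework installed from this repository). A minimal smoke test:

# Smoke test (a minimal sketch): both packages should import without errors.
import ctoybox
import toybox
print("ctoybox and toybox imported successfully")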

Play the games (using pygame)

pip install ctoybox pygame
python -m ctoybox.human_play breakout
python -m ctoybox.human_play amidar
python -m ctoybox.human_play space_invaders
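
The games can also be driven programmatically through the ctoybox bindings instead of the pygame frontend. The sketch below is illustrative only: the Toybox class name and the method names shown (apply_ale_action, get_rgb_frame, get_score, game_over) are our best understanding of the ctoybox API and may differ in the version you have installed.

# Illustrative sketch of headless play via ctoybox; method names here are
# assumptions about the bindings and may need adjusting for your version.
from ctoybox import Toybox

with Toybox("breakout") as tb:          # "amidar" and "space_invaders" also work
    for _ in range(100):
        tb.apply_ale_action(3)          # apply an ALE-style integer action
        if tb.game_over():
            break
    frame = tb.get_rgb_frame()          # current screen as an image array
    print("score:", tb.get_score(), "frame shape:", frame.shape)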

Run the tests

  1. Create a virtual environment using your python3 installation: ${python} -m venv .env
    • If you are on OSX, this is likely python3; thus, your command will be python3 -m venv .env
    • If you are not sure of your version, run python --version
  2. Activate your virtual environment: source .env/bin/activate
  3. Run pip install -r REQUIREMENTS.txt
  4. Install baselines: cd baselines && python setup.py install && cd ..
  5. Run python setup.py install
  6. Run python -m unittest toybox.sample_tests.test_${GAME}.${TEST_NAME} (see the example below)
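
Step 6 takes a game and a test name. For example, with Breakout and a hypothetical test class named BreakoutTest (substitute a test class that actually exists under toybox/sample_tests/test_breakout.py), the command looks like:

python -m unittest toybox.sample_tests.test_breakout.BreakoutTest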

We have observed installation issues on OSX Catalina; if you get a linker error for the ujson library, you can try re-running the install with the CFLAGS environment variable set:

CFLAGS=-stdlib=libc++ pip install ujson

If this does not work, the code will simply fall back to the standard-library json module.
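
This fallback is the usual optional-import pattern; roughly (a sketch, not the exact Toybox source), the JSON handling looks like this:

# Prefer the faster ujson if it is installed and built correctly; otherwise
# fall back to the standard-library json module, which has the same interface.
try:
    import ujson as json
except ImportError:
    import json

state = json.loads('{"game": "breakout", "lives": 3}')
print(state["lives"])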

Python

TensorFlow, OpenAI Gym, OpenCV, and other libraries may break under some Python versions. We have confirmed that the code in this repository works with the following Python versions:

  • 3.5

Get starting images for reference from ALE / atari_py

./scripts/utils/start_images --help
