
cjm715 / mgym

Licence: other
A collection of multi-agent reinforcement learning OpenAI gym environments

Programming Languages

Python
139,335 projects - #7 most used programming language

Projects that are alternatives of or similar to mgym

SARNet
Code repository for SARNet: Learning Multi-Agent Communication through Structured Attentive Reasoning (NeurIPS 2020)
Stars: ✭ 14 (-65.85%)
Mutual labels:  gym, multiagent-reinforcement-learning
gym-cryptotrading
OpenAI Gym Environment API based Bitcoin trading environment
Stars: ✭ 111 (+170.73%)
Mutual labels:  gym, gym-environment
CartPole
Run OpenAI Gym on a Server
Stars: ✭ 16 (-60.98%)
Mutual labels:  gym
genstar
Generation of Synthetic Populations Library
Stars: ✭ 17 (-58.54%)
Mutual labels:  multiagent
proto
Proto-RL: Reinforcement Learning with Prototypical Representations
Stars: ✭ 67 (+63.41%)
Mutual labels:  gym
flutter
Flutter fitness/workout app for wger
Stars: ✭ 106 (+158.54%)
Mutual labels:  gym
rl-medical
Communicative Multiagent Deep Reinforcement Learning for Anatomical Landmark Detection using PyTorch.
Stars: ✭ 36 (-12.2%)
Mutual labels:  multiagent-reinforcement-learning
trading gym
A unified environment for supervised learning and reinforcement learning in the context of quantitative trading
Stars: ✭ 36 (-12.2%)
Mutual labels:  gym-environment
Mava
A library of multi-agent reinforcement learning components and systems
Stars: ✭ 355 (+765.85%)
Mutual labels:  multiagent
Explorer
Explorer is a PyTorch reinforcement learning framework for exploring new ideas.
Stars: ✭ 54 (+31.71%)
Mutual labels:  gym
gym-management
Gym Management System provides an easy-to-use interface for the users and a database for the admin to maintain the records of gym members.
Stars: ✭ 27 (-34.15%)
Mutual labels:  gym
gym-battlesnake
Multi-agent reinforcement learning environment
Stars: ✭ 29 (-29.27%)
Mutual labels:  gym
gym-microrts-paper-sb3
RL agent to play μRTS with Stable-Baselines3 and PyTorch
Stars: ✭ 21 (-48.78%)
Mutual labels:  gym-environment
fenics-DRL
Repository from the paper https://arxiv.org/abs/1908.04127, for training deep reinforcement learning agents in a fluid mechanics setup.
Stars: ✭ 40 (-2.44%)
Mutual labels:  gym
rSoccer
🎳 Environments for Reinforcement Learning
Stars: ✭ 26 (-36.59%)
Mutual labels:  gym
Pytorch-RL-CPP
A Repository with C++ implementations of Reinforcement Learning Algorithms (Pytorch)
Stars: ✭ 73 (+78.05%)
Mutual labels:  gym
es pytorch
High-performance implementation of deep neuroevolution in PyTorch using mpi4py. Intended for use on HPC clusters.
Stars: ✭ 20 (-51.22%)
Mutual labels:  gym
rocket-league-gym
A Gym-like environment for Reinforcement Learning in Rocket League
Stars: ✭ 107 (+160.98%)
Mutual labels:  gym-environment
wolpertinger ddpg
Wolpertinger Training with DDPG (PyTorch), Deep Reinforcement Learning in Large Discrete Action Spaces. Multi-GPU/Single-GPU/CPU compatible.
Stars: ✭ 44 (+7.32%)
Mutual labels:  gym
squadgym
Environment that can be used to evaluate reasoning capabilities of artificial agents
Stars: ✭ 27 (-34.15%)
Mutual labels:  gym

Multi-agent gym environments

(Animation: clip of the multi-agent Snake environment.)

This repository has a collection of multi-agent OpenAI gym environments.

DISCLAIMER: This project is still a work in progress.

Dependencies

  • gym
  • numpy

Installation

git clone https://github.com/cjm715/mgym.git
cd mgym/
pip install -e .

Environments

Examples

import gym
import mgym
import random

env = gym.make('TicTacToe-v0')
fullobs = env.reset()
while True:
    print('Player O' if fullobs[0] else 'Player X')  # fullobs[0] identifies the next agent
    a = random.choice(env.get_available_actions())   # pick a random legal move
    fullobs, rewards, done, _ = env.step(a)
    env.render()
    if done:
        break

The MatchingPennies example below is an instance of simultaneous play:

import gym
import mgym

env = gym.make('MatchingPennies-v0')
env.reset(3)  # the argument sets the number of rounds in the episode
while True:
    a = env.action_space.sample()  # sample a joint action for the players
    _, r, done, _ = env.step(a)
    env.render()
    if done:
        break

See further examples in mgym/examples/examples.ipynb.

How are multi-agent environments different from single-agent environments?

When dealing with multiple agents, the environment must communicate which agent(s) can act at each time step, and this information must be incorporated into the observation space. Conversely, the environment must know which agents are performing actions, so this must be communicated in the action passed to the environment. The form of the API used for passing this information depends on the type of game. The two types are

  • one-at-a-time play (like TicTacToe, Go, Monopoly, etc.), or
  • simultaneous play (like Soccer, Basketball, Rock-Paper-Scissors, etc.).

The TicTacToe example above is an instance of one-at-a-time play. Here fullobs is a tuple (next_agent, obs). The variable next_agent indicates which agent will act next, and obs is the usual observation of the environment state. The action a is also a tuple, a = (acting_agent, action), where acting_agent is the agent acting and action is its chosen move.
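
To make the tuple structure concrete, below is a minimal sketch of the same loop with explicit unpacking. It assumes only the API described above; the assertion and the variable names are illustrative, not part of mgym.

import gym
import mgym
import random

env = gym.make('TicTacToe-v0')
next_agent, obs = env.reset()          # fullobs unpacks into (next_agent, obs)
done = False
while not done:
    # each available action is assumed to be an (acting_agent, action) tuple
    acting_agent, action = random.choice(env.get_available_actions())
    assert acting_agent == next_agent  # only the indicated agent may act
    (next_agent, obs), rewards, done, _ = env.step((acting_agent, action))
    env.render()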

Testing

To run the tests, install pytest with pip install pytest, then run python -m pytest.
