paulhendricks / gym-R

License: other
An R package providing access to the OpenAI Gym API


gym

Project status: Active. The project has reached a stable, usable state and is being actively developed.

OpenAI Gym is an open-source Python toolkit for developing and comparing reinforcement learning algorithms. This R package is a wrapper for the OpenAI Gym API and enables access to an ever-growing variety of environments.

Installation

You can install the latest released version from CRAN:

install.packages("gym")

Or install the development version from GitHub with:

if (!requireNamespace("devtools", quietly = TRUE) ||
    packageVersion("devtools") < "1.6") {
  install.packages("devtools")
}
devtools::install_github("paulhendricks/gym-R", subdir = "R")

If you encounter a clear bug, please file a minimal reproducible example on GitHub.

Getting started

Setting up the server

To download the code and install the requirements, you can run the following shell commands:

git clone https://github.com/openai/gym-http-api
cd gym-http-api
pip install -r requirements.txt

This code is intended to be run locally by a single user; the server runs in Python.

To start the server from the command line, run this:

python gym_http_server.py

For more details, please see here: https://github.com/openai/gym-http-api.
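Before creating environments, it can help to confirm that the server is actually reachable from R. The sketch below is not part of the package's documented workflow; it simply assumes the HTTP server started above is listening on the default port 5000 and uses the package's `create_GymClient` and `env_list_all` functions.

```r
library(gym)

# Assumes the server from gym_http_server.py is listening on port 5000.
client <- create_GymClient("http://127.0.0.1:5000")

# env_list_all() queries the server; wrapping it in tryCatch() turns a raw
# connection error into a readable message if the server is not running.
tryCatch(
  print(env_list_all(client)),
  error = function(e) {
    message("Could not reach the gym HTTP server: ", conditionMessage(e))
  }
)
```

If the call fails, check that `python gym_http_server.py` is still running and that nothing else is bound to port 5000.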

Running an example in R

In a separate R terminal, you can then try running the example agent and see what happens:

library(gym)

remote_base <- "http://127.0.0.1:5000"
client <- create_GymClient(remote_base)
print(client)

# Create environment
env_id <- "CartPole-v0"
instance_id <- env_create(client, env_id)
print(instance_id)

# List all environments
all_envs <- env_list_all(client)
print(all_envs)

# Set up agent
action_space_info <- env_action_space_info(client, instance_id)
print(action_space_info)
agent <- random_discrete_agent(action_space_info[["n"]])

# Run experiment, with monitor
outdir <- "/tmp/random-agent-results"
env_monitor_start(client, instance_id, outdir, force = TRUE, resume = FALSE)

episode_count <- 100
max_steps <- 200
reward <- 0
done <- FALSE

for (episode in seq_len(episode_count)) {
  ob <- env_reset(client, instance_id)
  for (step in seq_len(max_steps)) {
    action <- env_action_space_sample(client, instance_id)
    results <- env_step(client, instance_id, action, render = TRUE)
    if (results[["done"]]) break
  }
}

# Dump result info to disk
env_monitor_close(client, instance_id)
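To get a quick sense of how the random agent performs, the same loop can accumulate the per-step reward returned by the server. This is a sketch, not canonical package usage: it assumes the `env_step` result carries `reward` and `done` fields, per the gym-http-api response format, and that `client`, `instance_id`, `episode_count`, and `max_steps` are defined as above.

```r
# Track the total reward collected in each episode.
episode_rewards <- numeric(episode_count)

for (episode in seq_len(episode_count)) {
  ob <- env_reset(client, instance_id)
  total <- 0
  for (step in seq_len(max_steps)) {
    action <- env_action_space_sample(client, instance_id)
    results <- env_step(client, instance_id, action, render = TRUE)
    # Accumulate the reward reported by the server for this step.
    total <- total + results[["reward"]]
    if (results[["done"]]) break
  }
  episode_rewards[episode] <- total
}

# Summarise performance across episodes.
summary(episode_rewards)
```

A random agent on CartPole-v0 typically balances the pole only briefly, so these totals give a useful baseline when comparing against a learned policy.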

Citation

To cite package ‘gym’ in publications use:

Paul Hendricks (2016). gym: Provides Access to the OpenAI Gym API. R package version 0.1.0. https://CRAN.R-project.org/package=gym

A BibTeX entry for LaTeX users is

@Manual{,
  title = {gym: Provides Access to the OpenAI Gym API},
  author = {Paul Hendricks},
  year = {2016},
  note = {R package version 0.1.0},
  url = {https://CRAN.R-project.org/package=gym},
}