maximecb / Gym Miniworld

Licence: apache-2.0
Simple 3D interior simulator for RL & robotics research


MiniWorld (gym-miniworld)


Contents:

  • Introduction
  • Installation
  • Usage

Introduction

MiniWorld is a minimalistic 3D interior environment simulator for reinforcement learning and robotics research. It can be used to simulate environments with rooms, doors, hallways and various objects (e.g. office and home environments, mazes). MiniWorld can be seen as an alternative to VizDoom or DMLab. It is written 100% in Python and designed to be easily modified or extended.

Features:

  • Few dependencies, less likely to break, easy to install
  • Easy to create your own levels, or modify existing ones
  • Good performance, high frame rate, support for multiple processes
  • Lightweight, small download, low memory requirements
  • Provided under the permissive Apache 2.0 license
  • Comes with a variety of free 3D models and textures
  • Fully observable top-down/overhead view available
  • Domain randomization support, for sim-to-real transfer
  • Ability to display alphanumeric strings on walls
  • Ability to produce depth maps matching camera images (RGB-D)
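
The domain randomization feature listed above varies visual properties between episodes so that trained policies become robust to appearance changes before sim-to-real transfer. The snippet below is a stand-alone sketch of the idea only; the parameter names and ranges are illustrative, not MiniWorld's actual API:

```python
import random

# Hypothetical parameter ranges -- MiniWorld's real randomization covers
# textures, lighting, geometry, and more; these names are illustrative.
RANDOMIZATION_RANGES = {
    "light_intensity": (0.5, 1.5),
    "wall_hue": (0.0, 1.0),
    "camera_height": (1.2, 1.8),
}

def sample_domain_params(rng=random):
    """Draw a fresh set of visual parameters at the start of each episode."""
    return {name: rng.uniform(lo, hi)
            for name, (lo, hi) in RANDOMIZATION_RANGES.items()}

# One draw per episode reset; the renderer would then apply these values.
episode_params = sample_domain_params()
```

Resampling once per episode (rather than per frame) is the usual choice, since it exposes the agent to many consistent-looking worlds rather than flickering visuals.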

Limitations:

  • Graphics are basic, nowhere near photorealism
  • Physics are very basic, not sufficient for robot arms or manipulation

Please use this BibTeX entry if you want to cite this repository in your publications:

@misc{gym_miniworld,
  author = {Chevalier-Boisvert, Maxime},
  title = {gym-miniworld environment for OpenAI Gym},
  year = {2018},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/maximecb/gym-miniworld}},
}

List of publications & submissions using MiniWorld (please open a pull request to add missing entries):

This simulator was created as part of work done at Mila.

Installation

Requirements:

  • Python 3.5+
  • OpenAI Gym
  • NumPy
  • Pyglet (OpenGL 3D graphics)
  • GPU for 3D graphics acceleration (optional)

You can install all the dependencies with pip3:

git clone https://github.com/maximecb/gym-miniworld.git
cd gym-miniworld
pip3 install -e .

If you run into any problems, please take a look at the troubleshooting guide, and if you're still stuck, please open an issue on this repository to let us know something is wrong.
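
After installation, a quick way to confirm the core dependencies resolve is to check that each one is importable. This is a convenience sketch, not a script shipped with the repository:

```python
import importlib.util

# Post-install sanity check: verify each core dependency can be found.
# find_spec() returns None for a missing top-level module instead of raising.
for mod in ("gym", "numpy", "pyglet"):
    found = importlib.util.find_spec(mod) is not None
    print(f"{mod}: {'ok' if found else 'MISSING'}")
```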

Usage

Testing

There is a simple UI application which allows you to control the simulation or real robot manually. The manual_control.py application will launch the Gym environment, display camera images and send actions (keyboard commands) back to the simulator or robot. The --env-name argument specifies which environment to load. See the list of available environments for more information.

./manual_control.py --env-name MiniWorld-Hallway-v0

# Display an overhead view of the environment
./manual_control.py --env-name MiniWorld-Hallway-v0 --top_view

There is also a script to run automated tests (run_tests.py) and a script to gather performance metrics (benchmark.py).

Reinforcement Learning

To train a reinforcement learning agent, you can use the code provided in the /pytorch-a2c-ppo-acktr directory. This code is a modified version of the RL code found in the ikostrikov/pytorch-a2c-ppo-acktr repository. I recommend using the PPO algorithm and 16 processes or more. A sample command to launch training is:

python3 main.py --algo ppo --num-frames 5000000 --num-processes 16 --num-steps 80 --lr 0.00005 --env-name MiniWorld-Hallway-v0

Then, to visualize the results of training, you can run the command below. This works even while the training process is still running. If you are running over SSH, you will need to enable X forwarding to get a display:

python3 enjoy.py --env-name MiniWorld-Hallway-v0 --load-dir trained_models/ppo
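
If no display is available at all, you can still inspect learning curves by smoothing logged episode returns with NumPy. This is a generic sketch, not part of the repository's tooling, and assumes you have collected per-episode returns yourself:

```python
import numpy as np

def moving_average(returns, window=10):
    """Smooth a 1-D sequence of episode returns with a sliding mean."""
    returns = np.asarray(returns, dtype=float)
    if len(returns) < window:
        return returns  # too short to smooth; return unchanged
    kernel = np.ones(window) / window
    return np.convolve(returns, kernel, mode="valid")

# Mean of each window of 3 consecutive returns: [1., 2., 3., 4.]
smoothed = moving_average([0, 1, 2, 3, 4, 5], window=3)
```

Smoothing over 10 or more episodes is usually enough to see the trend through the noise inherent in RL training.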