
BYU-PCCL / Holodeck

License: MIT
High Fidelity Simulator for Reinforcement Learning and Robotics Research.

Programming Languages

Python — 139,335 projects; the #7 most used programming language

Projects that are alternatives of or similar to Holodeck

Holodeck Engine
High Fidelity Simulator for Reinforcement Learning and Robotics Research.
Stars: ✭ 48 (-90.64%)
Mutual labels:  ai, robotics, drones, unreal-engine, research, reinforcement-learning, simulator
Habitat Lab
A modular high-level library to train embodied AI agents across a variety of tasks, environments, and simulators.
Stars: ✭ 587 (+14.42%)
Mutual labels:  ai, robotics, research, reinforcement-learning, simulator
Airsim
Open source simulator for autonomous vehicles built on Unreal Engine / Unity, from Microsoft AI & Research
Stars: ✭ 12,528 (+2342.11%)
Mutual labels:  ai, drones, unreal-engine, research, simulator
Gibsonenv
Gibson Environments: Real-World Perception for Embodied Agents
Stars: ✭ 666 (+29.82%)
Mutual labels:  robotics, research, reinforcement-learning, simulator
Airsim Neurips2019 Drone Racing
Drone Racing @ NeurIPS 2019, built on Microsoft AirSim
Stars: ✭ 220 (-57.12%)
Mutual labels:  robotics, drones, unreal-engine
Bullet3
Bullet Physics SDK: real-time collision detection and multi-physics simulation for VR, games, visual effects, robotics, machine learning etc.
Stars: ✭ 8,714 (+1598.64%)
Mutual labels:  robotics, reinforcement-learning, simulator
Carla
Open-source simulator for autonomous driving research.
Stars: ✭ 7,012 (+1266.86%)
Mutual labels:  ai, research, simulator
Redtail
Perception and AI components for autonomous mobile robotics.
Stars: ✭ 832 (+62.18%)
Mutual labels:  ai, robotics, drones
Pygame Learning Environment
PyGame Learning Environment (PLE) -- Reinforcement Learning Environment in Python.
Stars: ✭ 828 (+61.4%)
Mutual labels:  ai, research, reinforcement-learning
Habitat Sim
A flexible, high-performance 3D simulator for Embodied AI research.
Stars: ✭ 1,098 (+114.04%)
Mutual labels:  ai, robotics, simulator
Webots
Webots Robot Simulator
Stars: ✭ 1,324 (+158.09%)
Mutual labels:  ai, robotics, simulator
Allenact
An open source framework for research in Embodied-AI from AI2.
Stars: ✭ 144 (-71.93%)
Mutual labels:  ai, research, reinforcement-learning
Simulator
A ROS/ROS2 Multi-robot Simulator for Autonomous Vehicles
Stars: ✭ 1,260 (+145.61%)
Mutual labels:  unreal-engine, reinforcement-learning, simulator
l2r
Open-source reinforcement learning environment for autonomous racing.
Stars: ✭ 38 (-92.59%)
Mutual labels:  simulator, research, robotics
Vln Ce
Vision-and-Language Navigation in Continuous Environments using Habitat
Stars: ✭ 62 (-87.91%)
Mutual labels:  ai, robotics, research
Free Ai Resources
🚀 FREE AI Resources - 🎓 Courses, 👷 Jobs, 📝 Blogs, 🔬 AI Research, and many more - for everyone!
Stars: ✭ 192 (-62.57%)
Mutual labels:  ai, research, reinforcement-learning
Dreamerv2
Mastering Atari with Discrete World Models
Stars: ✭ 287 (-44.05%)
Mutual labels:  robotics, research, reinforcement-learning
Text summurization abstractive methods
Multiple implementations for abstractive text summarization, using Google Colab
Stars: ✭ 359 (-30.02%)
Mutual labels:  ai, reinforcement-learning
Yarp
YARP - Yet Another Robot Platform
Stars: ✭ 358 (-30.21%)
Mutual labels:  robotics, research
Lagom
lagom: A PyTorch infrastructure for rapid prototyping of reinforcement learning algorithms.
Stars: ✭ 364 (-29.04%)
Mutual labels:  research, reinforcement-learning

Holodeck

Holodeck Video


Holodeck is a high-fidelity simulator for reinforcement learning built on top of Unreal Engine 4.

Features

  • 7+ rich worlds to train agents in, each with multiple scenarios
  • Linux and Windows support
  • Easily extend and modify training scenarios
  • Train and control more than one agent at once
  • Simple, OpenAI Gym-like Python interface
  • High performance: simulation speeds of up to 2x real time are possible, and you only pay a performance penalty for the features you actually use
  • Run headless or watch your agents learn

Questions? Join our Discord!

Installation

pip install holodeck

(requires Python >= 3.5)

See Installation for complete instructions (including Docker).

Documentation

Usage Overview

Holodeck's interface is similar to OpenAI's Gym.

We try to provide a batteries-included approach so you can jump right into using Holodeck, with minimal fiddling required.

To demonstrate, here is a quick example using the DefaultWorlds package:

import holodeck

# Load the environment. This environment contains a UAV in a city.
env = holodeck.make("UrbanCity-MaxDistance")

# You must call `.reset()` on a newly created environment before ticking/stepping it
env.reset()                         

# The UAV takes 3 torques and a thrust as a command.
command = [0, 0, 0, 100]   

for i in range(30):
    state, reward, terminal, info = env.step(command)  
  • state: a dict mapping each sensor's name to its value (a NumPy ndarray).
  • reward: the reward received for the previous action.
  • terminal: whether the current state is a terminal state.
  • info: additional environment-specific information.
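The step contract above can be sketched with a stand-in environment. This mock is purely illustrative and is not part of Holodeck (a real sensor value would be a NumPy array rather than a plain list):

```python
class MockEnv:
    """Illustrative stand-in for Holodeck's Gym-like step contract."""

    def __init__(self, episode_length=30):
        self._episode_length = episode_length
        self._t = 0

    def reset(self):
        # As with Holodeck, reset must be called before the first step.
        self._t = 0
        return {"LocationSensor": [0.0, 0.0, 0.0]}

    def step(self, command):
        # Advance one tick and return the Gym-style 4-tuple.
        self._t += 1
        state = {"LocationSensor": [0.0, 0.0, float(self._t)]}
        reward = 1.0
        terminal = self._t >= self._episode_length
        info = {}
        return state, reward, terminal, info


env = MockEnv()
env.reset()
for _ in range(30):
    state, reward, terminal, info = env.step([0, 0, 0, 100])
```

After 30 steps the mock episode ends, so `terminal` comes back True on the final step.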

If you want to access the data of a specific sensor, retrieve its value from the state dictionary by sensor name:

print(state["LocationSensor"])

Multi-Agent Environments

Holodeck supports multi-agent environments.

Calls to step only provide an action for the main agent, and then tick the simulation.

act provides a persistent action for a specific agent, and does not tick the simulation. After an action has been provided, tick will advance the simulation forward. The action is persisted until another call to act provides a different action.

import holodeck
import numpy as np

env = holodeck.make("CyberPunkCity-Follow")
env.reset()

# Provide an action for each agent
env.act('uav0', np.array([0, 0, 0, 100]))
env.act('nav0', np.array([0, 0, 0]))

# Advance the simulation
for i in range(300):
  # The action provided above is repeated
  states = env.tick()

You can access the reward, terminal, and location for a multi-agent environment as follows:

task = states["uav0"]["FollowTask"]

reward = task[0]
terminal = task[1]
location = states["uav0"]["LocationSensor"]

(uav0 comes from the scenario configuration file)
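The act/tick semantics can be sketched with a small stand-in (again illustrative only, not Holodeck's implementation): an action set with act persists across ticks until it is replaced.

```python
class MockMultiAgentEnv:
    """Illustrative stand-in for Holodeck's act/tick semantics."""

    def __init__(self, agent_names):
        self._actions = {name: None for name in agent_names}
        self._positions = {name: 0.0 for name in agent_names}

    def act(self, agent, action):
        # Store a persistent action; act does NOT tick the simulation.
        self._actions[agent] = action

    def tick(self):
        # Apply every agent's last stored action, then return all states.
        for agent, action in self._actions.items():
            if action is not None:
                self._positions[agent] += action[0]
        return {name: {"LocationSensor": [pos, 0.0, 0.0]}
                for name, pos in self._positions.items()}


env = MockMultiAgentEnv(["uav0", "nav0"])
env.act("uav0", [1.0])       # set once...
for _ in range(3):
    states = env.tick()      # ...repeated on every tick
```

After three ticks, uav0 has moved three units, while nav0, which never received an action, stayed put.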

Running Holodeck Headless

Holodeck can run headless with GPU-accelerated rendering. See Using Holodeck Headless.

Citation:

@misc{HolodeckPCCL,
  Author = {Joshua Greaves and Max Robinson and Nick Walton and Mitchell Mortensen and Robert Pottorff and Connor Christopherson and Derek Hancock and Jayden Milne and David Wingate},
  Title = {Holodeck: A High Fidelity Simulator},
  Year = {2018},
}

Holodeck is a project of BYU's Perception, Cognition and Control Lab (https://pcc.cs.byu.edu/).

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].