
kindredresearch / SenseAct

License: BSD-3-Clause
SenseAct: A computational framework for developing real-world robot learning tasks

Programming Languages

python
139,335 projects - #7 most used programming language

Projects that are alternatives to or similar to SenseAct

Rex Gym
OpenAI Gym environments for an open-source quadruped robot (SpotMicro)
Stars: ✭ 684 (+347.06%)
Mutual labels:  robot, reinforcement-learning
Gym Duckietown
Self-driving car simulator for the Duckietown universe
Stars: ✭ 379 (+147.71%)
Mutual labels:  robot, reinforcement-learning
Navbot
Using RGB Image as Visual Input for Mapless Robot Navigation
Stars: ✭ 111 (-27.45%)
Mutual labels:  robot, reinforcement-learning
Awesome Robotics
A curated list of awesome links and software libraries that are useful for robots.
Stars: ✭ 478 (+212.42%)
Mutual labels:  robot, reinforcement-learning
Articulations Robot Demo
Stars: ✭ 145 (-5.23%)
Mutual labels:  robot, reinforcement-learning
Rl Book Challenge
Self-studying the Sutton & Barto book the hard way
Stars: ✭ 146 (-4.58%)
Mutual labels:  reinforcement-learning
Energy Py
Reinforcement learning for energy systems
Stars: ✭ 148 (-3.27%)
Mutual labels:  reinforcement-learning
Openbot
OpenBot leverages smartphones as brains for low-cost robots. We have designed a small electric vehicle that costs about $50 and serves as a robot body. Our software stack for Android smartphones supports advanced robotics workloads such as person following and real-time autonomous navigation.
Stars: ✭ 2,025 (+1223.53%)
Mutual labels:  robot
Sumo Rl
A simple interface to instantiate Reinforcement Learning environments with SUMO for Traffic Signal Control. Compatible with Gym Env from OpenAI and MultiAgentEnv from RLlib.
Stars: ✭ 145 (-5.23%)
Mutual labels:  reinforcement-learning
Iccv2019 Learningtopaint
ICCV2019 - A painting AI that can reproduce paintings stroke by stroke using deep reinforcement learning.
Stars: ✭ 1,995 (+1203.92%)
Mutual labels:  reinforcement-learning
Tradzqai
Trading environment for RL agents, backtesting and training.
Stars: ✭ 150 (-1.96%)
Mutual labels:  reinforcement-learning
Minimalrl
Implementations of basic RL algorithms with minimal lines of code! (PyTorch-based)
Stars: ✭ 2,051 (+1240.52%)
Mutual labels:  reinforcement-learning
Show Adapt And Tell
Code for "Show, Adapt and Tell: Adversarial Training of Cross-domain Image Captioner" in ICCV 2017
Stars: ✭ 146 (-4.58%)
Mutual labels:  reinforcement-learning
Djim100 People Detect Track
A ROS demo for people detection and tracking on a DJI M100 drone
Stars: ✭ 150 (-1.96%)
Mutual labels:  robot
Tensor2tensor
Library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research.
Stars: ✭ 11,865 (+7654.9%)
Mutual labels:  reinforcement-learning
Openkai
OpenKAI: A modern framework for unmanned vehicle and robot control
Stars: ✭ 150 (-1.96%)
Mutual labels:  robot
Rosnodejs
Client library for writing ROS nodes in JavaScript with Node.js
Stars: ✭ 145 (-5.23%)
Mutual labels:  robot
Open Quadruped
An open-source 3D-printed quadrupedal robot. Intuitive gait generation through 12-DOF Bezier Curves. Full 6-axis body pose manipulation. Custom 3DOF Leg Inverse Kinematics Model accounting for offsets.
Stars: ✭ 148 (-3.27%)
Mutual labels:  reinforcement-learning
Dingdang Robot
🤖 Dingdang is a Chinese voice dialogue robot / smart speaker project that can run on a Raspberry Pi.
Stars: ✭ 1,826 (+1093.46%)
Mutual labels:  robot
Study Reinforcement Learning
Studying Reinforcement Learning Guide
Stars: ✭ 147 (-3.92%)
Mutual labels:  reinforcement-learning

SenseAct: A computational framework for real-world robot learning tasks


This repository provides implementations of several reinforcement learning tasks with multiple real-world robots. These tasks come with an interface similar to OpenAI Gym so that learning algorithms can be plugged in easily and in a uniform manner across tasks (a minimal interaction sketch follows the list below). All the tasks here are implemented on top of a computational framework of robot-agent communication proposed by Mahmood et al. (2018a), which we call SenseAct. In this computational framework, agent- and environment-related computations are ordered and distributed among multiple concurrent processes in a specific way. By doing so, SenseAct enables the following:

  • Timely communication between the learning agent and multiple robotic devices with reduced latency,
  • Easy and systematic design of robotic tasks for reinforcement learning agents,
  • Reproducible real-world reinforcement learning.
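
Because each task exposes a Gym-like interface, an agent interacts with a real robot through the familiar reset/step loop. The following minimal sketch is illustrative only: make_env stands in for a concrete SenseAct task constructor and the random-action policy is a placeholder; see the scripts under examples/ in the repository for the actual imports and constructor arguments.

# Minimal sketch of the Gym-style interaction loop (illustrative only).
# `make_env` is a hypothetical stand-in for a concrete SenseAct task
# constructor; see examples/ in the repository for real imports and arguments.
def run_random_agent(make_env, episodes=5):
    env = make_env()  # construct a SenseAct task environment
    try:
        for _ in range(episodes):
            obs = env.reset()  # initial observation
            done = False
            while not done:
                action = env.action_space.sample()  # placeholder policy
                obs, reward, done, info = env.step(action)  # one real-time step
    finally:
        env.close()  # release the robot and communicator processes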

This repository provides the following real-world robotic tasks, which were proposed by Mahmood et al. (2018b) as benchmark tasks for reinforcement learning algorithms:

Universal Robots (UR) robotic arms:

Tested on UR Software v. 3.3.4.310

UR-Reacher-2
UR-Reacher-6

Dynamixel (DXL) actuators:

Currently, only the MX-64AT actuator is supported.

DXL-Reacher
DXL-Tracker

iRobot Create 2 robots:

Create-Mover
Create-Docker

Mahmood et al. (2018b) provide extensive results comparing multiple reinforcement learning algorithms on the above tasks, and Mahmood et al. (2018a) show the effect of different task-setup elements in learning. Their results can be reproduced by using this repository (see documentation for more information).

Versions

The master branch contains the latest official release, and dev is the current development branch.

Installation

SenseAct requires Python 3 (>= 3.5); all other requirements are installed automatically via pip.

On Linux and Mac OS X, run the following:

  1. git clone https://github.com/kindredresearch/SenseAct.git
  2. cd SenseAct
  3. pip install -e . or pip3 install -e ., depending on your setup

To replicate the experimental results from the papers, check out the tag v0.1.1 from the git repository:

  1. git fetch --all --tags
  2. git checkout tags/v0.1.1

Additional instructions for installing OpenAI Baselines, which is needed for running the advanced examples, are given in the corresponding readme.
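
One common way to install Baselines from source is shown below; treat it only as an example, since the linked readme may pin a specific version or commit:

pip install git+https://github.com/openai/baselines.git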

Additional installation steps for Dynamixel-based tasks (Linux only)

Dynamixels can be controlled by either a ctypes-based driver written by Robotis or a pyserial-based driver. The driver is chosen by passing True (ctypes) or False (pyserial) to the use_ctypes_driver parameter of a Dynamixel-based task (e.g., see examples/advanced/dxl_reacher.py). We found the ctypes-based driver to provide substantially more timely and precise communication than the pyserial-based one.
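
As a sketch of how this choice is made in code (the helper below and its task_cls argument are placeholders rather than part of SenseAct; only the use_ctypes_driver parameter comes from the description above, and examples/advanced/dxl_reacher.py shows the real constructor and arguments):

# Illustrative only: `task_cls` stands in for a Dynamixel-based task class and
# `task_kwargs` for its task-specific arguments; only `use_ctypes_driver`
# is taken from the description above.
def make_dxl_task(task_cls, use_ctypes_driver=True, **task_kwargs):
    """Construct a Dynamixel-based task, selecting the ctypes driver (True,
    recommended) or the pyserial driver (False)."""
    return task_cls(use_ctypes_driver=use_ctypes_driver, **task_kwargs)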

To use the ctypes-based driver, we need to install gcc and the relevant packages for compiling the C libraries:

sudo apt-get install gcc-5 build-essential gcc-multilib g++-multilib

Then run the following script to download and compile the Dynamixel driver C libraries:

sudo bash setup_dxl.sh

For additional setup and troubleshooting information regarding Dynamixels, please see DXL Docs.

Tests

You can check whether SenseAct is installed correctly by running the included unit tests.

cd SenseAct
python -m unittest discover -b

Support

Installation problems? Feature requests? General questions?

Acknowledgments

This project is developed by the Kindred AI Research team. Rupam Mahmood, Dmytro Korenkevych, and Brent Komer originally developed the computational framework and the UR tasks. William Ma developed the Create 2 tasks and contributed substantially by adding new features to SenseAct. Gautham Vasan developed the DXL tasks. Francois Hogan developed the simulated task.

James Bergstra provided support and guidance throughout the development. Adrian Martin, Scott Rostrup, and Jonathan Yep developed the pyserial DXL driver for a Kindred project, which was used for the SenseAct DXL Communicator. Daniel Snider, Oliver Limoyo, Dylan Ashley, and Craig Sherstan tested the framework, provided thoughtful suggestions, and confirmed the reproducibility of learning by running experiments on real robots.

Citing SenseAct

For the SenseAct computational framework and the UR-Reacher tasks, please cite Mahmood et al. (2018a). For the DXL and the Create 2 tasks, please cite Mahmood et al. (2018b).
