OteRobotics / realant

License: MIT
RealAnt robot platform for low-cost, real-world reinforcement learning


RealAnt

The RealAnt robot platform from Ote Robotics is designed for real-world reinforcement learning research and development. It is a complete solution with a web-camera-based tracking system, and it aims to be a low-cost starting point for anybody interested in bringing reinforcement learning to practical, real robots.

RealAnt-v1.1/v1.2

This repository provides 3D models and build instructions for making your own RealAnt, firmware code for the robot microcontroller, and Python code for the robot interface and ArUco tag based pose estimation.

If you want the robot ready-assembled, you can buy one from https://shop.oterobotics.com.

For an example implementation of reinforcement learning that learns to stand, turn, and walk, see https://github.com/AaltoVision/realant-rl.

Getting Started

You can get started with reinforcement learning in the real world in five steps:

  1. Get or build your own RealAnt.
  2. Calibrate and test the web camera for pose estimation.
  3. Set up the test scene with adequate lighting.
  4. Start the Python server scripts.
  5. Start experiments.

Get Your RealAnt

If you want the robot ready-assembled, you can buy one from https://shop.oterobotics.com.

To build your own, you need eight Robotis Dynamixel AX-12A servos and either a Robotis OpenCM9.04A board + accessory set or the Ote Robotics RealAnt Main Board v1.2 with an Arduino Nano 33 IoT (see the board_1.2 folder for PCB gerbers, BOM, layout, and schematics PDFs).

For the build you will additionally need some two-wire cable for power, a soldering iron, side cutters, a Phillips screwdriver, and thread-locking fluid (such as Loctite blue).

The 3D model files are in the stl folder. You need to print two torso plates and four leg assemblies.

For PrusaSlicer, use 20% gyroid infill, 0.2 mm layer height, and no top or bottom layers.

RealAnt Main Board Firmware

For the OpenCM9.04A board, install the Arduino IDE, set up Robotis OpenCM9.04 board support, and then upload the OpenCM9.04 firmware from the ant11_cmd_dxl folder.

For the Ote Robotics RealAnt Main Board v1.2 with Arduino Nano 33 IoT, install the Arduino IDE and use the firmware from the ant14_cmd_dxl_nano33iot folder. You also need to install the forked DynamixelSDK from this folder into your Arduino libraries.

Pose Estimation Calibration and Testing

Print the calibration chessboard from the markers on, for example, A4 paper and glue it to a flat surface (such as a piece of cardboard). Print the chessboard at 100% scale; this produces a 4 cm chessboard pattern. (If you use a different scaling, change the marker size in the calibrate_camera.py script accordingly.) Clipping at the edges of the outermost squares does not matter, as only the inner corners of the squares are used.

Capture the chessboard in various poses (15+ images) with capture.py. Move the chessboard around so that the images cover the camera's whole field of view; tilting the chessboard can also yield a better calibration. Make sure your chessboard images are free of motion blur, and delete any that are blurry.

After this, run calibrate_camera.py to obtain the cam_calib.pkl file.

# mkdir CAMERANAME
# cd CAMERANAME
# python ../capture.py
# python ../calibrate_camera.py 
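
For reference, the chessboard calibration step can be done with OpenCV roughly as sketched below. This is a minimal sketch, not the actual calibrate_camera.py: the board dimensions, image file pattern, and the exact layout of cam_calib.pkl are assumptions, so defer to the script in this repository.

import glob
import pickle

import cv2
import numpy as np

# Assumed board geometry: 7x5 inner corners, 4 cm squares (match your printed board).
pattern_size = (7, 5)
square_size_m = 0.04

# 3D positions of the inner corners on the (planar) chessboard, in metres.
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square_size_m

obj_points, img_points, image_size = [], [], None
for fname in sorted(glob.glob("*.jpg")):   # assumed naming of the captured images
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if not found:
        continue
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
    obj_points.append(objp)
    img_points.append(corners)
    image_size = gray.shape[::-1]

# Solve for the intrinsic camera matrix and lens distortion coefficients.
rms, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("RMS reprojection error:", rms)

with open("cam_calib.pkl", "wb") as f:   # assumed pickle contents
    pickle.dump({"camera_matrix": camera_matrix, "dist_coeffs": dist_coeffs}, f)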

After calibrating the camera, adjust showaruco_board.py such that it uses the correct cam_calib.pkl file.

Print the reference and moving-agent markers and attach them to the floor and the ant, adjust their sizes in the showaruco_board.py script, and run the script to estimate the distance. Make sure to use the correct camera calibration file.

# python showaruco_board.py
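
The underlying pose estimation is standard ArUco marker detection. A minimal, hypothetical sketch of the idea is shown below (it is not the actual showaruco_board.py): the marker dictionary, marker size, camera index, and calibration file layout are assumptions, and the aruco API differs slightly between OpenCV versions (this uses the pre-4.7 opencv-contrib interface).

import pickle

import cv2

# Load the calibration saved earlier (field names are assumptions).
with open("cam_calib.pkl", "rb") as f:
    calib = pickle.load(f)
camera_matrix, dist_coeffs = calib["camera_matrix"], calib["dist_coeffs"]

marker_length_m = 0.05                                                 # assumed printed marker side length
aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)  # assumed dictionary
params = cv2.aruco.DetectorParameters_create()

cap = cv2.VideoCapture(0)                                              # assumed camera index
ok, frame = cap.read()
cap.release()

if ok:
    corners, ids, _ = cv2.aruco.detectMarkers(frame, aruco_dict, parameters=params)
    if ids is not None:
        # One rotation/translation vector per detected marker, in camera coordinates.
        rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
            corners, marker_length_m, camera_matrix, dist_coeffs)
        for marker_id, tvec in zip(ids.flatten(), tvecs):
            print(int(marker_id), tvec.ravel())                        # marker position (x, y, z) in metres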

Set Up the Test Scene

Make sure you have adequate lighting so that there is no motion blur in pose estimation during robot movement.

For example, two 50 W LED floodlights rated at 3800 lumens, mounted 1 m above the robot, should yield around 3000 lux of illuminance at robot level, which is enough for reinforcement learning purposes with a Logitech Brio 4K USB camera.
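
As a rough sanity check of that figure, illuminance is luminous flux divided by the illuminated area. The numbers below assume 3800 lumens per floodlight and that the light covers roughly 2.5 m² at robot level; both are assumptions rather than measured values.

# Rough illuminance estimate (assumptions: 3800 lm per floodlight, ~2.5 m^2 covered).
total_flux_lm = 2 * 3800
covered_area_m2 = 2.5
print(total_flux_lm / covered_area_m2)   # ~3040 lux, consistent with the ~3000 lux above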

Python Interface Server

To start trials, run each of the following scripts in its own terminal window:

# python antproxy.py         # zmq pub-sub communication proxy
# python ant_server.py       # server for serial communication to the physical ant
# python showaruco_board.py  # pose estimation

You might need to adjust ant_server.py for the correct serial port device. It defaults to /dev/ttyACM0 on Linux.
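
For intuition, a pub-sub proxy such as antproxy.py can be as small as the sketch below, a plain ZMQ XSUB/XPUB forwarder; the port numbers are made-up placeholders, not necessarily the ones used by this repository.

import zmq

# Publishers (ant_server.py, showaruco_board.py, clients) connect to the XSUB side,
# subscribers connect to the XPUB side, and zmq.proxy forwards messages between them.
ctx = zmq.Context()
frontend = ctx.socket(zmq.XSUB)
frontend.bind("tcp://*:3002")   # assumed publisher-facing port
backend = ctx.socket(zmq.XPUB)
backend.bind("tcp://*:3001")    # assumed subscriber-facing port
zmq.proxy(frontend, backend)    # blocks and shuttles messages both ways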

Test scripts:

# python ant_send_cmd.py     # send simple test movements to ant
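
If you want to hand-roll a test command instead, publishing to the proxy looks roughly like the sketch below, following the message format described under Further Interface Implementation Details; the address, port, and exact framing are assumptions, so check ant_send_cmd.py for the actual values.

import time

import zmq

ctx = zmq.Context()
pub = ctx.socket(zmq.PUB)
pub.connect("tcp://127.0.0.1:3002")   # assumed publisher-side address of antproxy.py
time.sleep(0.5)                       # give the pub-sub connection a moment to establish

# Centre all eight servos (512 is the middle of the [224, 800] setpoint range).
setpoints = " ".join(f"s{i} 512" for i in range(1, 9))
pub.send_multipart([b"cmd", setpoints.encode()])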

Reinforcement Learning

For an example implementation of reinforcement learning that learns to stand, turn, and walk, see https://github.com/AaltoVision/realant-rl.

Clone that repository, and then run the scripts:

# python rollout_server.py            # random exploration & agent evaluation
# python train_client.py --task walk  # train for walking

Further Interface Implementation Details

The main components are

  • antproxy.py (the main pub-sub network proxy; it should work as-is, since it just receives and forwards messages as a hub; run it in the background),
  • showaruco_board.py (this sends orientation and location info), and
  • ant_server.py (this processes the s1...s8 servo commands and sends joint position values and (optional) foot sensor readings).

The pub-sub network protocol is very simple. Each message is a list of byte strings. The beginning of the first string is the "topic" in ZMQ terms (used with zmq.SUBSCRIBE to filter incoming messages; "" receives everything):

  • "cmd" is a command that the ant_server.py receives. This takes a multipart string argument that is forwarded to the firmware. Currently supported commands are "sN X sM Y ..." where N,M = servo number in [1,8] and X,Y = angle setpoint in [224...800], 512 = middle, and "reset", "attach_servos" and "detach_servos".
  • "{" which is json data, which is used both for all measurements (and recorded setpoints) from ant_server.py and also for showaruco_board.py measurement packets.

RealAnt Revision History

Version       Description
RealAnt v1    First version with FP04-2 hip joints and OpenCM9.04 microcontroller board
RealAnt v1.1  Hip joints replaced with 3D printed ones, OpenCM9.04 microcontroller board
RealAnt v1.2  Microcontroller board changed to Arduino Nano 33 IoT based one

Copyright and License

Unless otherwise noted, all source code, documentation, and data in this repository are Copyright (c) 2020 Ote Robotics Ltd and are licensed under the MIT license.

See LICENSE for details.
