
wenbowen123 / icra20-hand-object-pose

License: other
[ICRA 2020] Robust, Occlusion-aware Pose Estimation for Objects Grasped by Adaptive Hands

Programming Languages

  • C++
  • CMake
  • Objective-C
  • C
  • Python
  • Shell

Projects that are alternatives of or similar to icra20-hand-object-pose

Iros20 6d Pose Tracking
[IROS 2020] se(3)-TrackNet: Data-driven 6D Pose Tracking by Calibrating Image Residuals in Synthetic Domains
Stars: ✭ 113 (+169.05%)
Mutual labels:  robot, manipulation, pose-estimation
kPAM
kPAM: Generalizable Robotic Manipulation
Stars: ✭ 73 (+73.81%)
Mutual labels:  manipulation, pose-estimation
awesome-vacuum
A curated list of free and open source software and hardware to build and control a robot vacuum.
Stars: ✭ 187 (+345.24%)
Mutual labels:  robot, robots
sixi
Sixi Robot Arm
Stars: ✭ 23 (-45.24%)
Mutual labels:  robot, robots
good robot
"Good Robot! Now Watch This!": Repurposing Reinforcement Learning for Task-to-Task Transfer; and “Good Robot!”: Efficient Reinforcement Learning for Multi-Step Visual Tasks with Sim to Real Transfer
Stars: ✭ 84 (+100%)
Mutual labels:  manipulation, grasping
Robotics-Object-Pose-Estimation
A complete end-to-end demonstration in which we collect training data in Unity and use that data to train a deep neural network to predict the pose of a cube. This model is then deployed in a simulated robotic pick-and-place task.
Stars: ✭ 153 (+264.29%)
Mutual labels:  manipulation, pose-estimation
cvxpnpl
A Perspective-n-Points-and-Lines method.
Stars: ✭ 56 (+33.33%)
Mutual labels:  registration, pose-estimation
Unity Robotics Hub
Central repository for tools, tutorials, resources, and documentation for robotics simulation in Unity.
Stars: ✭ 439 (+945.24%)
Mutual labels:  robot, manipulation
penny
A 3-servo, 10-dollar hexapod
Stars: ✭ 26 (-38.1%)
Mutual labels:  robot, robots
robot hacking manual
Robot Hacking Manual (RHM). From robotics to cybersecurity. Papers, notes and writeups from a journey into robot cybersecurity.
Stars: ✭ 169 (+302.38%)
Mutual labels:  robot, robots
community-projects
Webots projects (PROTO files, controllers, simulation worlds, etc.) contributed by the community.
Stars: ✭ 20 (-52.38%)
Mutual labels:  robot, robots
Articulations Robot Demo
Stars: ✭ 145 (+245.24%)
Mutual labels:  robot, manipulation
realant
RealAnt robot platform for low-cost, real-world reinforcement learning
Stars: ✭ 40 (-4.76%)
Mutual labels:  robot, pose-estimation
movenet.pytorch
A PyTorch implementation of MoveNet from Google. Includes training code and a pre-trained model.
Stars: ✭ 273 (+550%)
Mutual labels:  pose-estimation
FuyaoBot
A QQ bot based on Mirai, Spring Boot, MySQL, and MyBatis Plus.
Stars: ✭ 30 (-28.57%)
Mutual labels:  robot
string-combinations
A simple, low-memory footprint function to generate all string combinations from a series of characters.
Stars: ✭ 25 (-40.48%)
Mutual labels:  manipulation
tf-cpn
Cascade Pyramid Network
Stars: ✭ 22 (-47.62%)
Mutual labels:  pose-estimation
aistplusplus api
API to support AIST++ Dataset: https://google.github.io/aistplusplus_dataset
Stars: ✭ 277 (+559.52%)
Mutual labels:  pose-estimation
maks
Motion Averaging
Stars: ✭ 52 (+23.81%)
Mutual labels:  registration
BipedalWalkingRobots
Linear Inverted Pendulum Model based bipedal walking
Stars: ✭ 67 (+59.52%)
Mutual labels:  robot

icra20_hand_object_pose

This is the official implementation of "Robust, Occlusion-aware Pose Estimation for Objects Grasped by Adaptive Hands" published in ICRA 2020. [PDF]

@article{wen2020robust,
  title={Robust, Occlusion-aware Pose Estimation for Objects Grasped by Adaptive Hands},
  author={Wen, Bowen and Mitash, Chaitanya and Soorian, Sruthi and Kimmel, Andrew and Sintov, Avishai and Bekris, Kostas E},
  journal={International Conference on Robotics and Automation (ICRA) 2020},
  year={2020}
}

About

Many manipulation tasks, such as placement or within-hand manipulation, require the object's pose relative to the robot hand. The task is difficult when the hand significantly occludes the object, and especially so for adaptive hands, whose finger configuration is not easy to detect. In addition, RGB-only approaches struggle with texture-less objects or when the hand and the object look similar. This paper presents a depth-based framework that aims for robust pose estimation and short response times. It can be integrated with tracking-based methods to provide initialization or recovery from lost tracking. The approach first detects the adaptive hand's state via an efficient parallel search that maximizes the overlap between the hand's model and the point cloud. The hand's points are then pruned from the cloud and robust global registration is performed to generate object pose hypotheses, which are clustered. False hypotheses are pruned via physical reasoning, and the quality of the remaining poses is evaluated by their agreement with the observed data.
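
For readers who want to map these steps onto code, the following is a minimal structural sketch of the pipeline. All type and function names (HandState, searchHandConfiguration, etc.) are illustrative placeholders, not the repository's actual API; the sketch only shows the data flow described above.

// Structural sketch of the pose-estimation pipeline described in the paper.
// Every stage below is a hypothetical placeholder; only declarations are given.
#include <Eigen/Dense>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <vector>

using Cloud    = pcl::PointCloud<pcl::PointXYZ>;
using CloudPtr = Cloud::Ptr;

struct HandState { std::vector<double> joint_angles; };                 // placeholder
struct PoseHypothesis { Eigen::Matrix4f object_pose; double score; };   // placeholder

// Placeholder stage declarations (illustrative only):
HandState searchHandConfiguration(const CloudPtr& scene);
CloudPtr pruneHandPoints(const CloudPtr& scene, const HandState& state);
std::vector<PoseHypothesis> globalRegistration(const CloudPtr& object_points);
std::vector<PoseHypothesis> clusterPoses(std::vector<PoseHypothesis> hypotheses);
std::vector<PoseHypothesis> pruneByPhysics(std::vector<PoseHypothesis> hypotheses, const HandState& state);
void scoreByAgreement(std::vector<PoseHypothesis>& hypotheses, const CloudPtr& scene);

std::vector<PoseHypothesis> estimateObjectPose(const CloudPtr& scene) {
  HandState state = searchHandConfiguration(scene);          // 1. hand configuration via parallel search
  CloudPtr object_points = pruneHandPoints(scene, state);    // 2. remove points explained by the hand
  auto hypotheses = globalRegistration(object_points);       // 3. global registration -> pose hypotheses
  hypotheses = clusterPoses(std::move(hypotheses));          // 4. cluster similar hypotheses
  hypotheses = pruneByPhysics(std::move(hypotheses), state); // 5. discard physically implausible poses
  scoreByAgreement(hypotheses, scene);                       // 6. rank by agreement with observed depth
  return hypotheses;
}

The key point is the ordering: the hand configuration is resolved first so that hand points can be removed before object registration, which is what makes the approach robust to heavy occlusion.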

Supplementary Video:

Click to watch

Dependencies

  • Linux (tested on Ubuntu 16.04)
  • PCL (tested on 1.9)
  • OpenCV
  • ROS (tested on Kinetic)
  • yaml-cpp

Datasets

Download the datasets used for evaluation in the paper.

Some example RGB images from real world data:

Some example RGB images from synthetic data:

Install

bash build.sh
source devel/setup.bash

Example run on real world data

  1. After you download the object and hand models, change these paths in the config_autodataset.yaml file (more explanations are in the YAML file itself; a small yaml-cpp sketch for reading these entries is shown after step 2).
out_dir: [your_out_dir]
rgb_path: [your_path]/example/rgb7.png
depth_path: [your_path]/example/depth7.png
palm_in_baselink: [your_path]/example/palm_in_base7.txt
leftarm_in_base: [your_path]/example/arm_left_link_7_t_7.txt
model_name: ellipse
object_model_path: [your_path]/ellipse.ply
object_mesh_path: [your_path]/raw/ellipse.obj
ppf_path: [your_path]/ppf_ellipse

urdf_path: [your_path]/meshes/hand_T42b.urdf

Hand:
  base_link:
    mesh: [your_path]/meshes/raw/base.obj
    convex_mesh: [your_path]/meshes/raw/base_convex.obj
    cloud: [your_path]/meshes/raw/base.ply
  swivel_1:
    mesh: [your_path]/meshes/raw/swivel_t42.obj
    convex_mesh: [your_path]/meshes/raw/swivel_t42_convex.obj
    cloud: [your_path]/meshes/raw/swivel_t42.ply
  swivel_2:
    mesh: [your_path]/meshes/raw/swivel_t42.obj
    convex_mesh: [your_path]/meshes/raw/swivel_t42_convex.obj
    cloud: [your_path]/meshes/raw/swivel_t42.ply
  finger_1_1:
    mesh: [your_path]/meshes/raw/proximal_t42_airtight.obj
    convex_mesh: [your_path]/meshes/raw/proximal_t42_convex.obj
    cloud: [your_path]/meshes/raw/proximal_t42_airtight.ply
  finger_1_2:
    mesh: [your_path]/meshes/raw/distal_round_t42_airtight.obj
    convex_mesh: [your_path]/meshes/raw/distal_round_t42_convex.obj
    cloud: [your_path]/meshes/raw/distal_round_t42_airtight.ply
  finger_2_1:
    mesh: [your_path]/meshes/raw/proximal_t42_airtight.obj
    convex_mesh: [your_path]/meshes/raw/proximal_t42_convex.obj
    cloud: [your_path]/meshes/raw/proximal_t42_airtight.ply
  finger_2_2:
    mesh: [your_path]/meshes/raw/distal_round_t42_airtight.obj
    convex_mesh: [your_path]/meshes/raw/distal_round_t42_convex.obj
    cloud: [your_path]/meshes/raw/distal_round_t42_airtight.ply
  2. Run the pose estimator, pointing it at your config file:
rosrun icra20_manipulation_pose main_realdata_auto [path_to_your_config_file]
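
As referenced in step 1, the config is plain YAML and can be inspected with yaml-cpp (a listed dependency). Below is a minimal, self-contained sketch that reads a few of the entries shown above, assuming the [your_path] placeholders have been replaced with real paths; the key names follow the example config, but the repository's own loading code may be organized differently.

// Minimal sketch: sanity-check a config_autodataset.yaml with yaml-cpp.
#include <yaml-cpp/yaml.h>
#include <iostream>
#include <string>

int main(int argc, char** argv) {
  if (argc < 2) {
    std::cerr << "usage: check_config <path_to_config.yaml>\n";
    return 1;
  }
  YAML::Node cfg = YAML::LoadFile(argv[1]);

  // Top-level paths from the example above.
  std::cout << "out_dir:    " << cfg["out_dir"].as<std::string>() << "\n";
  std::cout << "rgb_path:   " << cfg["rgb_path"].as<std::string>() << "\n";
  std::cout << "depth_path: " << cfg["depth_path"].as<std::string>() << "\n";
  std::cout << "object:     " << cfg["object_model_path"].as<std::string>() << "\n";

  // Nested per-link meshes under the "Hand" key.
  for (const auto& link : cfg["Hand"]) {
    std::cout << link.first.as<std::string>() << " -> "
              << link.second["mesh"].as<std::string>() << "\n";
  }
  return 0;
}

Compiling this against yaml-cpp and pointing it at your edited config is a quick way to confirm the paths before launching the full pipeline.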

When it finishes, the program saves the estimated object (best.obj), the hand (hand.ply), and the entire scene's point cloud (scene_normals.ply) in the out_dir you specified. Load them in a 3D visualizer (e.g., MeshLab) and you should see something like this:
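
If you prefer to inspect the outputs programmatically instead of in MeshLab, a small PCL viewer along the following lines works. It is not part of the repository; it only assumes the output file names listed above, treating best.obj as a mesh and hand.ply / scene_normals.ply as point clouds.

// Minimal sketch: load the saved outputs from out_dir and show them together.
#include <pcl/io/obj_io.h>
#include <pcl/io/ply_io.h>
#include <pcl/point_types.h>
#include <pcl/visualization/pcl_visualizer.h>
#include <string>

int main(int argc, char** argv) {
  const std::string out_dir = (argc > 1) ? argv[1] : ".";

  // Estimated object pose as a mesh.
  pcl::PolygonMesh object_mesh;
  pcl::io::loadOBJFile(out_dir + "/best.obj", object_mesh);

  // Hand and scene as point clouds (extra fields such as normals are ignored here).
  pcl::PointCloud<pcl::PointXYZ>::Ptr hand(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::PointCloud<pcl::PointXYZ>::Ptr scene(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::io::loadPLYFile(out_dir + "/hand.ply", *hand);
  pcl::io::loadPLYFile(out_dir + "/scene_normals.ply", *scene);

  pcl::visualization::PCLVisualizer viewer("icra20 hand-object pose result");
  viewer.addPolygonMesh(object_mesh, "object");
  viewer.addPointCloud<pcl::PointXYZ>(hand, "hand");
  viewer.addPointCloud<pcl::PointXYZ>(scene, "scene");
  viewer.spin();
  return 0;
}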

Notes

Currently only the Yale OpenHand T42 is supported. However, the framework can be extended to other hands with some adaptation.

Due to randomness and the parallel implementation, results may vary slightly between runs, but the overall evaluation numbers should be similar to those in the paper. It is sometimes even possible to get better results than reported (as we have observed for "cuboid"), because the current configuration permits more base sampling time. If a different accuracy-speed trade-off is desired, feel free to tune parameters such as "super4pcs_success_quadrilaterals" and "n_gen" in "config_autodataset.yaml".

Acknowledgement

We would like to acknowledge the support of NSF awards IIS-1734492, IIS-1723869, and CCF-1934924. Award #1734492 promoted the collaboration with Aaron Dollar's GRAB Lab at Yale and the use of soft adaptive hands. We thank the GRAB Lab for their guidance on building and working with the adaptive hands. Link to the hand model used from the Yale OpenHand Project.

License

License for Non-Commercial Use

If this software is redistributed, this license must be included.
The term software includes any source files, documentation, executables, models, and data.

This software is available for general use by academic or non-profit,
or government-sponsored researchers. This license does not grant the
right to use this software or any derivation of it for commercial activities. For commercial use, please contact us at Rutgers University by [email protected] and [email protected]

This software comes with no warranty or guarantee of any kind. By using this software, the user accepts full liability.