
lixiny / Handeye-Calibration-ROS

License: other
🤖 Elaborated hand-eye calibration tutorials (ROS-binding)

Programming Languages

C++
36643 projects - #6 most used programming language
CMake
9771 projects

Projects that are alternatives of or similar to Handeye-Calibration-ROS

magni robot
Starting point for Magni Robots. Includes launch files and configuration.
Stars: ✭ 33 (-58.75%)
Mutual labels:  ros
godel
ROS-Industrial Focused Technical Project: Robotic Blending
Stars: ✭ 64 (-20%)
Mutual labels:  ros
conde simulator
Autonomous Driving Simulator for the Portuguese Robotics Open
Stars: ✭ 31 (-61.25%)
Mutual labels:  ros
dockeROS
Simply running ros nodes in docker containers on remote robots.
Stars: ✭ 71 (-11.25%)
Mutual labels:  ros
haros
H(igh) A(ssurance) ROS - Static analysis of ROS application code.
Stars: ✭ 168 (+110%)
Mutual labels:  ros
urdf-rs
URDF parser using serde-xml-rs for rust
Stars: ✭ 21 (-73.75%)
Mutual labels:  ros
tr1 hardware interface
Hardware interface for ros_control for the TR1.
Stars: ✭ 29 (-63.75%)
Mutual labels:  ros
raspimouse book info
Information portal for the book 「Raspberry Piで学ぶ ROSロボット入門」 (Introduction to ROS Robots with Raspberry Pi)
Stars: ✭ 48 (-40%)
Mutual labels:  ros
point-cloud-clusters
A catkin workspace in ROS which uses DBSCAN to identify which points in a point cloud belong to the same object.
Stars: ✭ 43 (-46.25%)
Mutual labels:  ros
onine
ROS based service robot
Stars: ✭ 32 (-60%)
Mutual labels:  ros
my ROS mobile robot
Differential drive mobile robot using ROS.
Stars: ✭ 123 (+53.75%)
Mutual labels:  ros
FS19 modROS
(partial) ROS1 integration for FarmSim19
Stars: ✭ 45 (-43.75%)
Mutual labels:  ros
ros-windows-build
ROS on Windows build pipelines
Stars: ✭ 18 (-77.5%)
Mutual labels:  ros
roskinectic src
A ROS Kinetic workspace src folder created on Ubuntu 16.04, containing ROS1 projects such as 2D & 3D SLAM, motion planning, drone swarms, RL-based drones, a surveillance robot, etc.
Stars: ✭ 44 (-45%)
Mutual labels:  ros
wpr simulation
No description or website provided.
Stars: ✭ 24 (-70%)
Mutual labels:  ros
ackermann-drive-teleop
ROS keyboard and joystick teleoperation scripts for robots with ackermann steering
Stars: ✭ 33 (-58.75%)
Mutual labels:  ros
aruco localization
ROS localization node using an ArUco Marker Map.
Stars: ✭ 30 (-62.5%)
Mutual labels:  ros
ros tensorflow
This repo introduces how to integrate Tensorflow framework into ROS with object detection API.
Stars: ✭ 39 (-51.25%)
Mutual labels:  ros
spinnaker sdk camera driver
Point Grey (FLIR) Spinnaker based camera driver (Blackfly S etc.)
Stars: ✭ 106 (+32.5%)
Mutual labels:  ros
ur ws
Universal robot with robotiq hand workspace
Stars: ✭ 21 (-73.75%)
Mutual labels:  ros

Robotic Hand-eye Calibration Workspace

| Ubuntu 18.04 & ROS Melodic Morenia |

This repo contains an eye-in-hand calibration tool (C++ & ROS) used in the JD (京东) Grasping Robot Challenge (News),
and the implementation of my paper: Robotic hand-eye calibration with depth camera: A sphere model approach (PDF)

Inside /src there are 5 ROS packages:

  • rgbd_srv
    used by camera_driver.

  • camera_driver
    drives Intel® RealSense™ RGBD cameras in ROS.
    (converts the raw stream to ROS sensor_msgs::Image; see the sketch after this list)

  • camera_transform_publisher
    publishes the transformation matrix (extrinsics) between the camera and a marker (chessboard | ArUco tag).

  • handeye_calib_marker
    hand-eye calibration tools that use an RGB camera and a marker, modified from handeye_calib_camodocal.
    ----------------------------------------------
    With the packages above you can calibrate the eye-in-hand transformation on RGB images.

  • handeye_calib_sphere
fine-tunes the eye-in-hand result based on depth.
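
For orientation, the stream-to-message conversion mentioned under camera_driver is ordinary cv_bridge usage. Below is a minimal roscpp sketch, not the actual driver code; the constant black cv::Mat is a stand-in for a real capture:

// Sketch only: publish a cv::Mat as sensor_msgs::Image, the conversion
// a camera_driver-style node performs internally.
#include <ros/ros.h>
#include <std_msgs/Header.h>
#include <sensor_msgs/Image.h>
#include <cv_bridge/cv_bridge.h>
#include <opencv2/core.hpp>

int main(int argc, char** argv) {
  ros::init(argc, argv, "toy_camera_driver");
  ros::NodeHandle nh;
  ros::Publisher pub = nh.advertise<sensor_msgs::Image>("/realsense/rgb", 1);
  cv::Mat frame(480, 640, CV_8UC3, cv::Scalar(0, 0, 0));  // stand-in capture
  ros::Rate rate(30);
  while (ros::ok()) {
    std_msgs::Header header;
    header.stamp = ros::Time::now();
    header.frame_id = "camera_link";
    pub.publish(cv_bridge::CvImage(header, "bgr8", frame).toImageMsg());
    rate.sleep();
  }
  return 0;
}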

Prerequisites

  • Ubuntu 18.04
  • ROS Melodic Morenia (desktop-full install)
  • OpenCV & opencv_contrib installed in /usr/local (both 3.4.0)
  • librealsense, if you use Intel® RealSense™ RGBD cameras (D400 series and the SR300)
  • visp_hand2eye_calibration_calibrator
    $ sudo apt-get install ros-melodic-visp-hand2eye-calibration
  • glog, available here (0.4.0)
  • Ceres Solver (1.14.0), download here, installation instructions here
  • Sophus, available here (1.0.0)
  • ros-melodic-cv-bridge *
    $ sudo apt-get install ros-melodic-cv-bridge
  • Point Cloud Library (PCL)
    $ sudo apt-get install libpcl-dev ros-melodic-pcl-ros

Install opencv & opencv_contrib

Download opencv and opencv_contrib (both tested on 3.4.0).
Install the opencv prerequisites:

[compiler] $ sudo apt-get install build-essential
[required] $ sudo apt-get install cmake git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev
[optional] $ sudo apt-get install python-dev python-numpy libtbb2 libtbb-dev libjpeg-dev libpng-dev libtiff-dev libjasper-dev libdc1394-22-dev

Build opencv & opencv_contrib from source (we turned the CUDA options off):

# run from inside the opencv-3.4.0 source directory
$ mkdir build && cd build
$ cmake \
    -D CMAKE_BUILD_TYPE=Release \
    -D OPENCV_EXTRA_MODULES_PATH=<path/to/opencv_contrib-3.4.0>/modules/ \
    -D BUILD_opencv_cudacodec=OFF \
    -D WITH_CUDA=OFF \
    -D WITH_CUBLAS=OFF \
    -D WITH_CUFFT=OFF \
    -D ENABLE_PRECOMPILED_HEADERS=OFF \
    -D CMAKE_INSTALL_PREFIX=/usr/local ..

$ make -j5
$ sudo make install
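
To verify that the install in /usr/local picked up the contrib modules, a small illustrative check (compile it against the freshly installed 3.4.0 headers) prints the version and loads an aruco dictionary:

// Illustrative check: confirm OpenCV 3.4.0 and the contrib aruco module.
#include <iostream>
#include <opencv2/core.hpp>
#include <opencv2/aruco.hpp>

int main() {
  std::cout << "OpenCV version: " << CV_VERSION << std::endl;
  cv::Ptr<cv::aruco::Dictionary> dict =
      cv::aruco::getPredefinedDictionary(cv::aruco::DICT_6X6_250);
  std::cout << "aruco ok, marker bits: " << dict->markerSize << std::endl;
  return 0;
}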

Install librealsense (if you use an Intel® RealSense™ RGBD Camera)

We use SR300/D415 cameras and the camera_driver package to convert raw RGBD images into ROS topics of sensor_msgs::Image. librealsense is required only if you use cameras from the RealSense™ family; otherwise, write your own camera_driver.

Follow the librealsense installation guide at here.
We installed: librealsense2-dkms, librealsense2-utils, librealsense2-dev, librealsense2-dbg
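
For a feel of what a camera_driver does per frame, here is a bare librealsense2 sketch (stream settings are illustrative, not necessarily what our driver uses) that grabs one color frame and wraps it in a cv::Mat:

// Sketch: grab one color frame via librealsense2 and view it as a cv::Mat.
#include <iostream>
#include <librealsense2/rs.hpp>
#include <opencv2/core.hpp>

int main() {
  rs2::pipeline pipe;
  rs2::config cfg;
  cfg.enable_stream(RS2_STREAM_COLOR, 640, 480, RS2_FORMAT_BGR8, 30);
  pipe.start(cfg);
  rs2::frameset frames = pipe.wait_for_frames();
  rs2::video_frame color = frames.get_color_frame();
  cv::Mat img(cv::Size(color.get_width(), color.get_height()), CV_8UC3,
              (void*)color.get_data(), cv::Mat::AUTO_STEP);  // no copy
  std::cout << "got frame " << img.cols << "x" << img.rows << std::endl;
  return 0;
}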

Install cv_bridge from source.

If you are lucky, the pre-built ros-melodic-cv-bridge works well.
However, the default OpenCV in ros-melodic-cv-bridge is 3.2, while we use opencv & opencv_contrib 3.4.0, so we recommend building cv_bridge from source.

$ git clone https://github.com/ros-perception/vision_opencv.git
$ cd vision_opencv
$ git checkout melodic
$ cd cv_bridge

$ mkdir build && cd build
$ cmake ..
$ make -j5
$ sudo make install

Then remove the pre-built cv_bridge from the ROS Melodic path /opt/ros/melodic:

$ cd /opt/ros/melodic

# or you can archive them somewhere else instead
$ sudo rm -rf lib/libcv_bridge.so
$ sudo rm -rf include/cv_bridge
$ sudo rm -rf share/cv_bridge

For more information about cv_bridge, see here.
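
A quick way to confirm the source-built cv_bridge links against OpenCV 3.4.0 is a minimal subscriber; a sketch, assuming our camera_driver is publishing:

// Sketch: convert an incoming sensor_msgs::Image back to cv::Mat.
#include <ros/ros.h>
#include <sensor_msgs/Image.h>
#include <cv_bridge/cv_bridge.h>

void onImage(const sensor_msgs::ImageConstPtr& msg) {
  cv_bridge::CvImageConstPtr cv_ptr = cv_bridge::toCvShare(msg, "bgr8");
  ROS_INFO("frame %dx%d", cv_ptr->image.cols, cv_ptr->image.rows);
}

int main(int argc, char** argv) {
  ros::init(argc, argv, "cv_bridge_check");
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("/realsense/rgb", 1, onImage);
  ros::spin();
  return 0;
}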

Build

$ git clone https://github.com/lixiny/Handeye-Calibration-ROS.git
$ cd Handeye-Calibration-ROS  
$ catkin_make

If you only want to calibrate the hand-eye transformation with an RGB marker:

$ git clone https://github.com/lixiny/Handeye-Calibration-ROS.git  
$ cd Handeye-Calibration-ROS  
$ git checkout rgb_only
$ catkin_make

Ready to Use (Example on RealSense D415 & UR5)

0. Robot and Camera Setup

Prepare a camera support that mounts the camera on the robot. We provide the support files [.STL] for the UR5 and RealSense D415 & SR300.

Navigate to the Handeye-Calibration-ROS/ folder, and source devel/setup.bash in each new terminal.

1. Plug in Camera, Bring up RealSense

$1 roslaunch camera_driver realsense_driver.launch

The camera intrinsics (later called intr) will appear in the current terminal; record them.

The camera_driver publishes 4 topics to the rosmaster:

  • /realsense/rgb
  • /realsense/depth
  • /realsense/cloud
    the point cloud is under the tf frame /camera_link
  • /realsense/camera_info
    follows ROS sensor_msgs/CameraInfo.msg; we use only K and D from it

You can also visualize these topics in rviz.
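
As an illustration (not repo code), the K and D fields can be pulled out of the camera_info topic like this:

// Sketch: read intrinsics K and distortion D from sensor_msgs/CameraInfo.
#include <ros/ros.h>
#include <sensor_msgs/CameraInfo.h>
#include <opencv2/core.hpp>

void onInfo(const sensor_msgs::CameraInfoConstPtr& msg) {
  cv::Mat K = (cv::Mat_<double>(3, 3) <<
      msg->K[0], msg->K[1], msg->K[2],
      msg->K[3], msg->K[4], msg->K[5],
      msg->K[6], msg->K[7], msg->K[8]);
  cv::Mat D(msg->D, true);  // distortion coefficients, copied
  ROS_INFO("fx=%.1f fy=%.1f", K.at<double>(0, 0), K.at<double>(1, 1));
}

int main(int argc, char** argv) {
  ros::init(argc, argv, "intr_reader");
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("/realsense/camera_info", 1, onInfo);
  ros::spin();
  return 0;
}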

2. Prepare a Marker

You can use either a chessboard or the ArUco plane we designed, at doc/rawArucoPlane.jpg.

3. Publish the Transformation (tf) Between Camera and Marker

😄 Chessboard

After placing the chessboard inside the camera view, run:

$2 roslaunch camera_transform_publisher chessboard_publisher_realsense.launch

# or specify chessboard parameters:
$2 roslaunch camera_transform_publisher chessboard_publisher_realsense.launch \
        chessboardWidth:=10  chessboardHeight:=7  squareSize:=0.02

[IMPORTANT] There are 5 user-specified parameters in chessboard_publisher_realsense.launch:

  • chessboardWidth -- number of inner corners along the chessboard width
  • chessboardHeight -- number of inner corners along the chessboard height
  • squareSize -- side length of each chessboard square, in meters
  • cameraTopic -- the rgb topic name published by your camera_driver (ours: /realsense/rgb)
  • cameraInfoTopic -- the camera_info topic name published by your camera_driver (ours: /realsense/camera_info)

Make sure you have checked them against your chessboard.
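
For intuition, the pose estimation behind this launch file boils down to standard OpenCV calls: detect the inner corners, then solve PnP against a metric corner grid. A sketch (the node's actual code differs; values mirror the launch defaults above):

// Sketch: chessboard pose from one RGB frame, intrinsics K / distortion D.
#include <vector>
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>

bool chessboardPose(const cv::Mat& rgb, const cv::Mat& K, const cv::Mat& D,
                    cv::Mat& rvec, cv::Mat& tvec) {
  const int w = 10, h = 7;         // inner corners: chessboardWidth/Height
  const double squareSize = 0.02;  // meters
  std::vector<cv::Point2f> corners;
  if (!cv::findChessboardCorners(rgb, cv::Size(w, h), corners)) return false;
  std::vector<cv::Point3f> obj;    // metric corner grid in the marker frame
  for (int r = 0; r < h; ++r)
    for (int c = 0; c < w; ++c)
      obj.push_back(cv::Point3f(c * squareSize, r * squareSize, 0.f));
  return cv::solvePnP(obj, corners, K, D, rvec, tvec);  // camera -> marker
}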

😉 Aruco Plane

After placing our ArUco plane inside the camera view, run:

$2 roslaunch camera_transform_publisher aruco_publisher_realsense.launch

# or specify aruco plane parameters:
$2 roslaunch camera_transform_publisher aruco_publisher_realsense.launch \
        tagSideLen:=0.035 planeSideLen:=0.25 

[IMPORTANT] There are 4 user-specified parameters in aruco_publisher_realsense.launch:

  • tagSideLen -- side length of a single ArUco tag, in meters (see illustration)
  • planeSideLen -- side length of the whole ArUco plane, in meters (see illustration)
  • cameraTopic -- the rgb topic name published by your camera_driver (ours: /realsense/rgb)
  • cameraInfoTopic -- the camera_info topic name published by your camera_driver (ours: /realsense/camera_info)

Make sure you have checked them against the physical dimensions of your ArUco plane.

Now in the pop-up window, you will see an AR cube like this:

This camera_transform_publisher publishes a tf (/camera_link, /ar_marker) to the ROS master. If no marker is found, an identity SE(3) tf is used instead. You can also visualize the two frames in rviz.
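
The broadcasting behavior just described is a few lines of tf; a sketch with this repo's frame names:

// Sketch: broadcast (/camera_link, /ar_marker), identity when no marker found.
#include <ros/ros.h>
#include <tf/transform_broadcaster.h>

void publishMarkerTf(tf::TransformBroadcaster& br, bool found,
                     const tf::Transform& cam_T_marker) {
  tf::Transform T = found ? cam_T_marker : tf::Transform::getIdentity();
  br.sendTransform(tf::StampedTransform(T, ros::Time::now(),
                                        "/camera_link", "/ar_marker"));
}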

4. Bringup UR

Follow instruction #3. Bringup UR in doc/install_ur.md.

5. Launch Hand-eye Calibration on Marker

$3 roslaunch handeye_calib_marker handeye_online.launch

[IMPORTANT] There are 4 user-specified parameters in handeye_online.launch:

  • ARTagTF -- the name of marker frame (defined in camera_transform_publisher, /ar_marker)
  • cameraTF -- the name of camera frame (defined in camera_transform_publisher, /camera_link)
  • EETF -- the name of End Effector frame (defined by UR, /ee_link)
  • baseTF -- the name of robot base frame (defined by UR, /base_link)

Check these 4 tf names before you launch the calibration!

start calibration

Repeatedly move the UR end effector to different configurations, making sure that at each configuration a valid marker can be detected. In the current terminal, press s to record one AX = XB equation. After enough equations have been recorded (30+), press q to perform the calibration; the resulting translation (x y z) and rotation (qx qy qz qw) are then printed in the current terminal.
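
For intuition, each press of s stores one pose pair (base→ee from the robot, camera→marker from step 3); the relative motions between two such configurations form one AX = XB equation, with A the end-effector motion, B the camera motion, and X the unknown hand-eye transform. A sketch of the recording step (frame names as configured above; not the repo's exact code):

// Sketch: look up the two poses recorded per press of `s`.
#include <ros/ros.h>
#include <tf/transform_listener.h>

bool recordPair(tf::TransformListener& listener,
                tf::StampedTransform& base_T_ee,
                tf::StampedTransform& cam_T_marker) {
  try {
    listener.lookupTransform("/base_link", "/ee_link", ros::Time(0), base_T_ee);
    listener.lookupTransform("/camera_link", "/ar_marker", ros::Time(0),
                             cam_T_marker);
    return true;
  } catch (const tf::TransformException& ex) {
    ROS_WARN("%s", ex.what());
    return false;
  }
}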

publish the results

Modify the file handeye_calib_marker/launch/show_result.launch, replacing ${Translation: x y z} and ${Rotation: qx qy qz qw} with the values from your terminal output.

<launch>
  <node pkg="tf" 
        type="static_transform_publisher" 
        name="realsense_link" 
        args="${Translation: x y z}
              ${Rotation: qx qy qz qw}
              /ee_link 
              /camera_link 
              100"
  />
</launch>

and launch:

$4 roslaunch handeye_calib_marker show_result.launch 

or, in a new terminal:

# rosrun tf static_transform_publisher \
#           x y z qx qy qz qw \
#           frame_id  child_frame_id  period_in_ms

$4 rosrun tf static_transform_publisher \
        0.0263365 0.0389317 0.0792014 -0.495516 0.507599 -0.496076 0.500715 \
        /ee_link   /camera_link   100

visualize in rviz

There are several equivalent ways to express the HAND-EYE transformation:

  • HAND-EYE is the transformation from /ee_link to /camera_link
  • or the tf: (/ee_link, /camera_link)
  • or parent: /ee_link, child: /camera_link
  • or: P_ee_link = T_handeye * P_camera_link
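
For example, the last expression in tf types (a one-line sketch):

// Sketch: map a camera-frame point into the end-effector frame.
#include <tf/transform_datatypes.h>

tf::Vector3 toEeFrame(const tf::Transform& T_handeye,  // tf (/ee_link, /camera_link)
                      const tf::Vector3& p_camera_link) {
  return T_handeye * p_camera_link;  // P_ee_link = T_handeye * P_camera_link
}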

NOTE: you should now have a ready-to-use hand-eye transformation.
This result is optimized using RGB images only. You can still fine-tune it with depth images (if available) following step #6.

6. (Optional) Fine-tune on Sphere

// TODO

Related Publications:

Yang, Lixin, et al. "Robotic hand-eye calibration with depth camera: A sphere model approach." 2018 4th International Conference on Control, Automation and Robotics (ICCAR). IEEE, 2018. PDF

@inproceedings{yang2018robotic,
  title={Robotic hand-eye calibration with depth camera: A sphere model approach},
  author={Yang, Lixin and Cao, Qixin and Lin, Minjie and Zhang, Haoruo and Ma, Zhuoming},
  booktitle={2018 4th International Conference on Control, Automation and Robotics (ICCAR)},
  pages={104--110},
  year={2018},
  organization={IEEE}
}

License

This repo is freely available for non-commercial use, and may be redistributed under these conditions. Please see the license for further details.
