
JasonChu1313 / KinectUtil

Licence: other
This project solves the problem of mismatch between the RGB camera and the depth camera of the Kinect, and can produce a higher-quality point cloud model than the Kinect itself. We solve the problem by first using both DLT and Zhang Zhengyou's checkerboard to calibrate the camera, and then applying the calibrated parameters to project and re-project from…

Programming Languages

C++
36643 projects - #6 most used programming language

Projects that are alternatives of or similar to KinectUtil

Volumetriccapture
A multi-sensor capture system for free viewpoint video.
Stars: ✭ 243 (+767.86%)
Mutual labels:  kinect, calibration
Handeye calib camodocal
Easy to use and accurate hand eye calibration which has been working reliably for years (2016-present) with kinect, kinectv2, rgbd cameras, optical trackers, and several robots including the ur5 and kuka iiwa.
Stars: ✭ 364 (+1200%)
Mutual labels:  kinect, calibration
pygac
A python package to read and calibrate NOAA and Metop AVHRR GAC and LAC data
Stars: ✭ 14 (-50%)
Mutual labels:  calibration
ATtiny13-TinyTacho
Simple RPM-Meter
Stars: ✭ 36 (+28.57%)
Mutual labels:  project
BloodBank
A simple android project for blood management system.
Stars: ✭ 126 (+350%)
Mutual labels:  project
dctb-web-project
Informative repository with empirical guidelines for developing a web project.
Stars: ✭ 59 (+110.71%)
Mutual labels:  project
blockchain-carbon-accounting
This project implements blockchain applications for climate action and accounting, including emissions calculations, carbon trading, and validation of climate claims. It is part of the Linux Foundation's Hyperledger Climate Action and Accounting SIG.
Stars: ✭ 123 (+339.29%)
Mutual labels:  dlt
mimic
mimic calibration
Stars: ✭ 18 (-35.71%)
Mutual labels:  calibration
DPB
Dynamic Project Builder
Stars: ✭ 22 (-21.43%)
Mutual labels:  project
Example
Metarhia application example for Node.js
Stars: ✭ 147 (+425%)
Mutual labels:  project
Robotlib.jl
Robotics library written in the Julia programming language
Stars: ✭ 32 (+14.29%)
Mutual labels:  calibration
StructureNet
Markerless volumetric alignment for depth sensors. Contains the code of the work "Deep Soft Procrustes for Markerless Volumetric Sensor Alignment" (IEEE VR 2020).
Stars: ✭ 38 (+35.71%)
Mutual labels:  calibration
hashport-validator
Official repository containing the source code of the Hashport validators
Stars: ✭ 19 (-32.14%)
Mutual labels:  dlt
Topics-In-Modern-Statistical-Learning
Materials for STAT 991: Topics In Modern Statistical Learning (UPenn, 2022 Spring) - uncertainty quantification, conformal prediction, calibration, etc
Stars: ✭ 74 (+164.29%)
Mutual labels:  calibration
KinectToVR
KinectToVR EX (Official)
Stars: ✭ 163 (+482.14%)
Mutual labels:  kinect
Vision
Computer Vision And Neural Network with Xamarin
Stars: ✭ 54 (+92.86%)
Mutual labels:  calibration
university
Open Source app to view Free resources available online.
Stars: ✭ 23 (-17.86%)
Mutual labels:  project
ideas-for-project-names-starting-with-re
No description or website provided.
Stars: ✭ 27 (-3.57%)
Mutual labels:  project
essex
Essex - Boilerplate for Docker Based Projects
Stars: ✭ 32 (+14.29%)
Mutual labels:  project
new-project-template
A template for web developers.
Stars: ✭ 12 (-57.14%)
Mutual labels:  project

KinectUtil

This project solves the problem of mismatch between the RGB camera and the depth camera of the Kinect, producing a higher-quality point cloud model than the Kinect itself. We solve the problem by first using both DLT and Zhang Zhengyou's checkerboard method to calibrate the cameras, and then applying the calibrated parameters to project and reproject between the images captured by the RGB and depth cameras.

This project serves both as a tutorial on calibrating the Kinect cameras and as a tool for projecting and reprojecting between different coordinate systems.

Some Basic Terminology

Point cloud:

A point cloud is a kind of 3D representation composed of a set of unordered points with 3D coordinate values. We can use the RGB and depth cameras to produce a point cloud: the image captured by the RGB camera provides the color information, and the depth camera provides the depth information.

[figure: an example point cloud]

Parameters of camera model

The camera model can be depicted as follows; the process of projecting the 3D environment onto the 2D image has two steps:

[figure: camera model]

    1. Translate and rotate the 3D points from world space to camera space by multiplying by the extrinsic parameters [R T].
    2. Project the 3D points in camera space to 2D by multiplying by the intrinsic matrix K.

[figure: intrinsic parameters]

If we have the intrinsic and extrinsic parameters of the camera, we can also back-project a 2D coordinate (given its depth) to a 3D coordinate.
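To make the two steps concrete, here is a small C++ sketch (my own illustration, not code from this repository) of projecting a world point through the extrinsics [R T] and the intrinsics, plus the back-projection. It assumes a row-major 3×3 rotation matrix and pinhole parameters f_x, f_y, c_x, c_y, which are derived in the next section:

```cpp
#include <array>

struct Vec3 { float x, y, z; };

// Step 1: world space -> camera space, P_c = R * P_w + T.
Vec3 worldToCamera(const std::array<float, 9>& R, const Vec3& T, const Vec3& Pw) {
    return { R[0]*Pw.x + R[1]*Pw.y + R[2]*Pw.z + T.x,
             R[3]*Pw.x + R[4]*Pw.y + R[5]*Pw.z + T.y,
             R[6]*Pw.x + R[7]*Pw.y + R[8]*Pw.z + T.z };
}

// Step 2: camera space -> pixel, u = fx*X/Z + cx, v = fy*Y/Z + cy.
void cameraToPixel(float fx, float fy, float cx, float cy,
                   const Vec3& Pc, float& u, float& v) {
    u = fx * Pc.x / Pc.z + cx;
    v = fy * Pc.y / Pc.z + cy;
}

// Inverse of step 2: a pixel plus its known depth Z -> camera space.
// Without the depth, a pixel only determines a ray, not a point.
Vec3 pixelToCamera(float fx, float fy, float cx, float cy,
                   float u, float v, float Z) {
    return { (u - cx) * Z / fx, (v - cy) * Z / fy, Z };
}
```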

Description of the Pinhole Model & Intrinsics

The camera model discussed above is the pinhole model; the scene is projected onto a 2D image plane as below.

[figure: pinhole projection]

According to the triangle similarity relation:

$$x = f\,\frac{X}{Z}, \qquad y = f\,\frac{Y}{Z}$$

The difference between the pixel coordinate system and the imaging plane is a scale and a shift:

$$u = \alpha x + c_x, \qquad v = \beta y + c_y$$

where, combining the two equations,

$$f_x = \alpha f, \qquad f_y = \beta f$$

So finally we get:

$$u = f_x\,\frac{X}{Z} + c_x, \qquad v = f_y\,\frac{Y}{Z} + c_y$$

which can be written in matrix form:

$$Z \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = K \begin{bmatrix} X \\ Y \\ Z \end{bmatrix}$$

  • f is the focal length of the lens.

  • α and β scale from imaging-plane units to pixels; their units are pixels per unit length.

  • f_x and f_y are the focal lengths in the x and y directions, expressed in pixels.

  • (c_x, c_y) is the principal point, the center of the image, in pixels.

  • f_normal_x and f_normal_y are the normalized focal lengths in the x and y directions.
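As a quick numerical check of the projection equation (using the RGB-camera intrinsics calibrated later in this document, and an assumed camera-space point at $(X, Y, Z) = (0.1\,\mathrm{m},\ 0.05\,\mathrm{m},\ 1.0\,\mathrm{m})$):

$$u = f_x\frac{X}{Z} + c_x \approx 589.32 \cdot 0.1 + 321.14 \approx 380.1, \qquad v = f_y\frac{Y}{Z} + c_y \approx 589.85 \cdot 0.05 + 235.56 \approx 265.1$$

so the point lands near pixel (380, 265).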

Why We Need to Calibrate the RGB and Depth Cameras

Circumstance 1:

Because the rotation and translation of the two cameras relative to each other are not perfectly known, generating the point cloud causes a color mismatch: the color image can be projected to the wrong place, which degrades the quality of the point cloud. The color mismatch is shown below:

[figure: color mismatch]

The black color is not supposed to be part of the background.

[figure: black artifacts in the background]

==The Kinect API encapsulates these parameters and the point cloud generation process in an interface that you can only invoke as a black box: the Kinect API is not open source, so you can't see its implementation or know what happens inside the function; it simply returns the point cloud after you call the interface. We therefore can't access the camera parameters provided by the manufacturer, and if you are not satisfied with the quality of the point cloud, you need to calibrate by yourself.==

Circumstance 2:

If the resolution and quality of the RGB image captured by the Kinect's RGB camera do not meet your expectations, you can block the original RGB camera and attach your own high-resolution RGB camera to the Kinect. In this case, you must calibrate the parameters and generate the point cloud yourself.

How Can We Do That:

The process of calibration and point cloud generation can be simplified as follows (see the sketch after this list):

    1. Use calibration tools to calibrate the RGB and depth cameras, obtaining the intrinsic parameters of the RGB camera, the intrinsic parameters of the depth camera, and the relative translation and rotation of the RGB camera with respect to the depth camera.
    2. Back-project the RGB image to RGB camera space using the intrinsic parameters of the RGB camera.
    3. Back-project the depth image to depth camera space using the intrinsic parameters of the depth camera.
    4. Transform RGB camera space to depth camera space using the rotation and translation of the RGB camera relative to the depth camera.
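Below is a minimal C++ sketch of steps 2–4 fused into one loop. It is my own illustration, not the repository's ProjectUtil interface, and the function name and buffer layouts are assumptions. It follows the common variant that walks the depth image and moves each point into the RGB frame (the inverse of the direction written in step 4; flip R and T accordingly): each depth pixel is back-projected into depth camera space, transformed into the RGB camera frame, projected onto the RGB image, and colored from the pixel it lands on.

```cpp
#include <cstdint>
#include <vector>

struct Intrinsics { float fx, fy, cx, cy; };
struct ColoredPoint { float x, y, z; std::uint8_t r, g, b; };

std::vector<ColoredPoint> fusePointCloud(
        const std::vector<float>& depth, int dw, int dh,           // depth image
        const std::vector<std::uint8_t>& rgb, int cw, int ch,      // RGB image, 3 bytes/pixel
        const Intrinsics& Kd, const Intrinsics& Kc,
        const float R[9], const float T[3]) {                      // depth frame -> RGB frame
    std::vector<ColoredPoint> cloud;
    for (int v = 0; v < dh; ++v) {
        for (int u = 0; u < dw; ++u) {
            float z = depth[v * dw + u];
            if (z <= 0.0f) continue;                               // invalid depth reading
            // Back-project the depth pixel into depth camera space.
            float X = (u - Kd.cx) * z / Kd.fx;
            float Y = (v - Kd.cy) * z / Kd.fy;
            // Transform into the RGB camera frame. Note: the direction of
            // [R T] depends on the calibration convention; invert if needed.
            float Xc = R[0]*X + R[1]*Y + R[2]*z + T[0];
            float Yc = R[3]*X + R[4]*Y + R[5]*z + T[1];
            float Zc = R[6]*X + R[7]*Y + R[8]*z + T[2];
            if (Zc <= 0.0f) continue;
            // Project onto the RGB image and sample the color.
            int uc = static_cast<int>(Kc.fx * Xc / Zc + Kc.cx + 0.5f);
            int vc = static_cast<int>(Kc.fy * Yc / Zc + Kc.cy + 0.5f);
            if (uc < 0 || uc >= cw || vc < 0 || vc >= ch) continue;
            const std::uint8_t* px = &rgb[(vc * cw + uc) * 3];
            cloud.push_back({X, Y, z, px[0], px[1], px[2]});
        }
    }
    return cloud;
}
```

One caveat: the calibrated translation T is in the units of the checkerboard squares (millimeters with the toolbox's defaults), so the depth values must be converted to the same units before applying it.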

Detailed Calibration Steps:

  1. Get the MATLAB calibration toolbox here: http://www.vision.caltech.edu/bouguetj/calib_doc/. Download the TOOLBOX_calib zip folder, store it somewhere, and add it to your MATLAB path so MATLAB can find it.

  2. Get the intrinsics by following this example: http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/example.html. The checkerboard detections (which are done in a semi-manual fashion) have to be saved in a file named "Calib_Results.mat", which stores the detections and other information needed for the extrinsic calibration. Do this for the color and depth (IR) images separately, and store the two sets of images separately too.

NOTE: don't forget to change the default square size (dX = dY = 30 mm by default) to the actual size of your checkerboard squares.

  3. The infrared emitted by the Kinect v1 has a lot of noise, which will affect corner detection.

[figure: noisy Kinect v1 infrared image]

So we used the infrared emitted by the Kinect v2 and blocked the IR emitter of the v1, whose infrared speckle pattern would otherwise interfere with corner detection. Alternatively, you can use the infrared emitted by some other light source. The experimental setup is shown below:

[figure: experimental setup]

  4. Then follow this example to get the extrinsics: http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/example5.html. Set recompute_intrinsic_left = 0 and recompute_intrinsic_right = 0 to make sure the intrinsics are not changed while computing the extrinsics: the extrinsic optimization can otherwise adjust the intrinsic parameters to lower the error, which we don't want. The toolbox can also render the recovered camera model; here is the two-camera rig in my case.

[figure: recovered stereo camera rig]

Calibration Results:

NOTE: The intrinsic and extrinsic parameters vary from one Kinect to another; the data calibrated by me can only serve as a reference.

Intrinsic parameters of RGB camera

%-- Focal length: fc = [ 589.322232303107740 ; 589.849429472609130 ];

%-- Principal point: cc = [ 321.140896612950880 ; 235.563195335248370 ];

%-- Skew coefficient: alpha_c = 0.000000000000000;

%-- Distortion coefficients: kc = [ 0.108983708314500 ; -0.239830795000911 ; -0.001984065259398 ; -0.002884597433015 ; 0.000000000000000 ];

%-- Focal length uncertainty: fc_error = [ 9.099860081548242 ; 8.823165349428608 ];

%-- Principal point uncertainty: cc_error = [ 5.021414252987091 ; 4.452250648666922 ];

%-- Skew coefficient uncertainty: alpha_c_error = 0.000000000000000;

%-- Distortion coefficients uncertainty: kc_error = [ 0.016941790847544 ; 0.054649912671684 ; 0.002535217040902 ; 0.003239597678439 ; 0.000000000000000 ];

Intrinsic parameters of Depth camera

%-- Focal length: fc = [ 458.455478616934780 ; 458.199272745572390 ];

%-- Principal point: cc = [ 343.645038678435410 ; 229.805975111304460 ];

%-- Skew coefficient: alpha_c = 0.000000000000000;

%-- Distortion coefficients: kc = [ -0.127613248346941 ; 0.470606007302241 ; 0.000048478690145 ; 0.017448057052172 ; 0.000000000000000 ];

%-- Focal length uncertainty: fc_error = [ 27.559874091123589 ; 26.899212523852725 ];

%-- Principal point uncertainty: cc_error = [ 12.766795489113061 ; 9.541088535770328 ];

%-- Skew coefficient uncertainty: alpha_c_error = 0.000000000000000;

%-- Distortion coefficients uncertainty: kc_error = [ 0.065356450579064 ; 0.291607962509204 ; 0.006925403021099 ; 0.008487031884390 ; 0.000000000000000 ];
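The kc vectors above follow the Bouguet toolbox's distortion convention, kc = [k1; k2; p1; p2; k3], where k1, k2, k3 are radial and p1, p2 are tangential coefficients applied to the normalized coordinates (x, y) = (X/Z, Y/Z) before the intrinsic scale and shift. Here is a minimal C++ sketch of applying them (illustrative only; accurate back-projection needs the inverse, an iterative undistortion):

```cpp
// Apply the Bouguet/Brown-Conrady distortion model to normalized camera
// coordinates (x, y); (xd, yd) are the distorted coordinates that the
// intrinsics then map to pixels.
void distort(const double kc[5], double x, double y, double& xd, double& yd) {
    const double r2 = x * x + y * y;
    const double radial = 1.0 + kc[0] * r2 + kc[1] * r2 * r2
                              + kc[4] * r2 * r2 * r2;
    xd = x * radial + 2.0 * kc[2] * x * y + kc[3] * (r2 + 2.0 * x * x);
    yd = y * radial + kc[2] * (r2 + 2.0 * y * y) + 2.0 * kc[3] * x * y;
}
```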

Extrinsic parameters of Depth camera

Rotation vector:

om = [ 0.00321 -0.05390 0.00084 ] +- [ 0.00262 0.00276 0.00238 ]

Translation vector:

T = [ -19.94800 -0.74458 -10.84871 ] +- [ 1.79424 1.68961 1.38079 ]
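The rotation above is reported as a Rodrigues (axis-angle) vector, so it must be expanded into a 3×3 matrix before it can be used in the projection code. Below is a minimal C++ sketch of the standard Rodrigues formula (not code from this repository), with the calibrated values plugged in as a usage example; note that T is in millimeters, the checkerboard units:

```cpp
#include <cmath>

// Expand a Rodrigues rotation vector om into a row-major 3x3 matrix R:
// R = cos(theta)*I + (1 - cos(theta))*k*k^T + sin(theta)*[k]_x,
// where theta = |om| and k = om / theta.
void rodrigues(const double om[3], double R[9]) {
    const double theta = std::sqrt(om[0]*om[0] + om[1]*om[1] + om[2]*om[2]);
    if (theta < 1e-12) {                        // near-zero rotation: identity
        for (int i = 0; i < 9; ++i) R[i] = (i % 4 == 0) ? 1.0 : 0.0;
        return;
    }
    const double kx = om[0]/theta, ky = om[1]/theta, kz = om[2]/theta;
    const double c = std::cos(theta), s = std::sin(theta), v = 1.0 - c;
    R[0] = c + kx*kx*v;     R[1] = kx*ky*v - kz*s;  R[2] = kx*kz*v + ky*s;
    R[3] = ky*kx*v + kz*s;  R[4] = c + ky*ky*v;     R[5] = ky*kz*v - kx*s;
    R[6] = kz*kx*v - ky*s;  R[7] = kz*ky*v + kx*s;  R[8] = c + kz*kz*v;
}

// Usage with the calibrated values:
// double om[3] = {0.00321, -0.05390, 0.00084};
// double R[9];
// rodrigues(om, R);
```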

Generate Point Cloud:

After getting the camera parameters, I wrote an interface for generating the point cloud; ProjectUtil.cpp and ProjectUtil.hpp contain the point cloud generation process.

The Result:

We can see that the color mismatch is greatly reduced with our method, and we get a better result than the Kinect API.

By Kinect:

[figures: point cloud generated by the Kinect API]

By Our Method:

[figures: point cloud generated by our method]

Thanks for reading. If you have any further problems, please feel free to contact me. This work was done while I was a research assistant in the Hong Kong University Computer Vision & Graphics Lab.

Copyright by HKU, Siyu Zhu

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].