Kinect: From Color Space to world coordinates

Submitted by 穿精又带淫゛_ on 2019-12-03 08:55:17

Answered by Clones1201

Check the CameraIntrinsics.

typedef struct _CameraIntrinsics
{
    float FocalLengthX;
    float FocalLengthY;
    float PrincipalPointX;
    float PrincipalPointY;
    float RadialDistortionSecondOrder;
    float RadialDistortionFourthOrder;
    float RadialDistortionSixthOrder;
}   CameraIntrinsics;

You can retrieve it via ICoordinateMapper::GetDepthCameraIntrinsics.

Then, for every depth-space pixel (u, v) with depth value d, you can compute the corresponding camera-space (world) coordinates like this:

x = (u - principalPointX) / focalLengthX * d;
y = (v - principalPointY) / focalLengthY * d;
z = d;

For a color-space pixel, you first need to find its associated depth-space pixel, using ICoordinateMapper::MapColorFrameToDepthSpace. Since not every color pixel has an associated depth pixel (1920x1080 color vs. 512x424 depth), you can't get a full-HD color point cloud.
