How to convert points in depth space to color space in Kinect without using Kinect SDK functions?
Question

I am building an augmented reality application with 3D objects overlaid on top of the color video of the user. Kinect SDK version 1.7 is used and rendering of the virtual objects is done in OpenGL. I have managed to overlay 3D objects on the depth video successfully simply by using the intrinsic constants for the depth camera from the NuiSensor.h header and computing a projection matrix based on the formula I found at http://ksimek.github.io/2013/06/03/calibrated_cameras_in_opengl/. The 3D objects rendered with this