Kinect Depth and Image Frames Alignment

Submitted by 人走茶凉 on 2019-11-29 04:17:10

Some misalignment will always happen, because the two sensors are mounted in slightly different places.

Try it:

Look at some object with your two eyes, then try using only your left eye, then only your right eye. Things look slightly different because your two eyes are not in exactly the same place.

However, it is possible to correct most of the misalignment with a few calls to the SDK's mapping APIs.

I'm using the Kinect for Windows SDK 1.5, so the APIs are slightly different from those in 1.0:

short[] depth = new short[320 * 240];
// fill depth with the kinect data, e.g. DepthImageFrame.CopyPixelDataTo(depth)
ColorImagePoint[] colorPoints = new ColorImagePoint[320 * 240];

// ask the SDK for the mapping: for each depth pixel, where it lands in the 640x480 color frame
kinect.MapDepthFrameToColorFrame(DepthImageFormat.Resolution320x240Fps30,
        depth, ColorImageFormat.RgbResolution640x480Fps30, colorPoints);

// now do something with it
for (int i = 0; i < 320 * 240; i++)
{
    if (we_want_to_display(depth[i]))                       // placeholder for your own test
    {
        draw_on_image_at(colorPoints[i].X, colorPoints[i].Y); // placeholder for your own drawing
    }
}

That's the basic idea. The green-screen sample in the Kinect for Windows Developer Toolkit 1.5 shows a good use of this mapping.
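For reference, a minimal green-screen-style sketch built on the snippet above. It reuses depth and colorPoints from that snippet and assumes a colorPixels buffer filled from a 640x480 RGB ColorImageFrame via CopyPixelDataTo (and using Microsoft.Kinect;); the 800-1500 mm band and the maskedPixels name are just illustrative choices, not anything taken from the Toolkit sample.

byte[] colorPixels = new byte[640 * 480 * 4];   // filled via ColorImageFrame.CopyPixelDataTo(colorPixels)
byte[] maskedPixels = new byte[640 * 480 * 4];  // output image, starts out black/transparent

for (int i = 0; i < 320 * 240; i++)
{
    // strip the 3 player-index bits to get the distance in millimeters
    int mm = depth[i] >> DepthImageFrame.PlayerIndexBitmaskWidth;
    if (mm < 800 || mm > 1500)
        continue;                                // keep only things roughly 0.8-1.5 m away

    int cx = colorPoints[i].X;
    int cy = colorPoints[i].Y;
    if (cx < 0 || cx >= 640 || cy < 0 || cy >= 480)
        continue;                                // mapped points can fall outside the color frame

    int offset = (cy * 640 + cx) * 4;            // BGRA byte offset in the color buffer
    maskedPixels[offset]     = colorPixels[offset];      // B
    maskedPixels[offset + 1] = colorPixels[offset + 1];  // G
    maskedPixels[offset + 2] = colorPixels[offset + 2];  // R
    maskedPixels[offset + 3] = 255;                      // opaque
}

Because the depth map is only 320x240, this covers only one color pixel in four; filling the resulting holes is one of the things the Toolkit's green-screen sample demonstrates.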

This is a very common problem, inherent in the geometry of syncing the two images, since the two cameras sit in two different places. It is a bit like taking the two video feeds produced by a 3D camera and trying to sync them: they will always be slightly off.

Two ideas for correction:

  1. Manually shift the depth image pixels as you calculate it (see the rough sketch after this list).
  2. Add a shaker motor to the Kinect to reduce noise: http://www.youtube.com/watch?v=CSBDY0RuhS4 (I have found a simple pager motor to be effective.)
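To illustrate idea 1: this is not an SDK call, just a hand-tuned fixed offset applied while walking the 320x240 depth map against the 640x480 color image. The offsetX/offsetY values are hypothetical and would need calibrating for your own unit and working distance.

int offsetX = -8, offsetY = 4;              // hypothetical values, found by trial and error
for (int y = 0; y < 240; y++)
{
    for (int x = 0; x < 320; x++)
    {
        short d = depth[y * 320 + x];       // raw depth pixel at (x, y)
        int cx = (x + offsetX) * 2;         // scale 320x240 depth coords up to 640x480 color coords
        int cy = (y + offsetY) * 2;
        if (cx < 0 || cx >= 640 || cy < 0 || cy >= 480)
            continue;
        // ... treat (cx, cy) as the approximate color pixel for depth sample d
    }
}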

MapDepthToColorImagePoint is meant to be used alongside the skeletal API: you map a skeletal point to a depth point and then to a color-image point so that you can draw joints on top of the RGB image.
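For example, here is a sketch of that path for one joint, using the 1.5-era KinectSensor mapping methods (they moved to CoordinateMapper in SDK 1.6). The exact overloads and the shift back into a raw depth-pixel value are from memory, so check them against the docs.

// Map a tracked joint to color-image coordinates so it can be drawn over the RGB feed.
Joint head = skeleton.Joints[JointType.Head];                 // 'skeleton' is a tracked Skeleton
DepthImagePoint dp = kinect.MapSkeletonPointToDepth(
    head.Position, DepthImageFormat.Resolution320x240Fps30);
ColorImagePoint cp = kinect.MapDepthToColorImagePoint(
    DepthImageFormat.Resolution320x240Fps30, dp.X, dp.Y,
    (short)(dp.Depth << DepthImageFrame.PlayerIndexBitmaskWidth), // rebuild a raw depth pixel value (assumption)
    ColorImageFormat.RgbResolution640x480Fps30);
// cp.X / cp.Y are 640x480 color coordinates for the head joint.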

Hope this helps.
