How can I convert a color stream (1920x1080) into a depth stream (512x424) in Kinect V2 using MATLAB or C#?

Submitted by 蓝咒 on 2019-12-12 03:27:53

Question


The Kinect V2 color stream's supported format is 1920x1080, but the depth stream format is 512x424. When I start a live stream from both sensors, the frames have different sizes because of the different resolutions. I can't simply resize them, because I need the coordinates: when I resize using imresize(), the coordinates no longer match. I have already read the MATLAB documentation, which says the hardware only supports these two formats. How can I make both streams the same resolution in code? I have tried for two days without success. Alternatively, I would like any process that takes the depth image first and then captures the RGB (color) image at that depth resolution.

My project takes a line from the depth image and maps it onto the RGB image of the Kinect V2. But their resolutions are not the same, so the [x, y] coordinates change, and when I map the line onto the RGB image it does not match the coordinates of the depth image. How can I solve this? I thought I would change the resolution, but on the Kinect V2 the resolution cannot be changed. How can I do this in code?

Here is a link to someone who did something similar. I want to do it in MATLAB or C#.


Answer 1:


In C# you can use the CoordinateMapper to map points from one space to another. To map from depth space to color space, subscribe to the MultiSourceFrameArrived event for the color and depth sources and create a handler like this:

    private void MultiFrameReader_MultiSourceFrameArrived(object sender, MultiSourceFrameArrivedEventArgs e)
    {
        MultiSourceFrame multiSourceFrame = e.FrameReference.AcquireFrame();
        if (multiSourceFrame == null)
        {
            return;
        }

        using (ColorFrame colorFrame = multiSourceFrame.ColorFrameReference.AcquireFrame())
        {
            if (colorFrame == null) return;

            using (DepthFrame depthFrame = multiSourceFrame.DepthFrameReference.AcquireFrame())
            {
                if (depthFrame == null) return;

                using (KinectBuffer buffer = depthFrame.LockImageBuffer())
                {
                    // One ColorSpacePoint per depth pixel
                    ColorSpacePoint[] colorspacePoints = new ColorSpacePoint[depthFrame.FrameDescription.Width * depthFrame.FrameDescription.Height];
                    kinectSensor.CoordinateMapper.MapDepthFrameToColorSpaceUsingIntPtr(buffer.UnderlyingBuffer, buffer.Size, colorspacePoints);

                    // A depth point for which we want the corresponding color point
                    DepthSpacePoint depthPoint = new DepthSpacePoint() { X = 250, Y = 250 };

                    // The corresponding color point (row-major index: y * width + x)
                    ColorSpacePoint targetPoint = colorspacePoints[(int)(depthPoint.Y * depthFrame.FrameDescription.Width + depthPoint.X)];
                }
            }
        }
    }

The colorspacePoints array contains, for each pixel in the depthFrame, the corresponding point in the colorFrame. You should also check whether targetPoint has X or Y set to infinity, which means there is no corresponding pixel in the target space.
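The row-major lookup used above is language-agnostic, so here is a minimal Python sketch of the same indexing (512x424 is the Kinect V2 depth resolution from the question; the helper name and the sample point are just illustrative):

```python
# Sketch of the flat row-major lookup: for a depth frame of
# width x height pixels, the mapped point for pixel (x, y) lives
# at index y * width + x in the flat colorspacePoints array.
DEPTH_WIDTH, DEPTH_HEIGHT = 512, 424

def depth_index(x, y, width=DEPTH_WIDTH):
    """Flat row-major index of depth pixel (x, y)."""
    return int(y * width + x)

# e.g. the point (250, 250) used in the C# snippet above:
idx = depth_index(250, 250)  # 250 * 512 + 250
```

Getting this index wrong (e.g. multiplying by the height instead of the width) silently returns the mapping of a different pixel, which is a common source of "coordinates don't match" bugs.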




Answer 2:


For a working example, you can check VRInteraction. I map the depth image onto the RGB image to build up a 3D point cloud.

What you want to achieve is called registration.

  1. Calibrate the depth camera to find the depth camera projection matrix (using OpenCV)
  2. Calibrate the RGB camera to find the RGB camera projection matrix (using OpenCV)

    - You can register the depth image to the RGB image:

This means finding the corresponding RGB pixel for each pixel of the depth image. You end up with a 1920x1080 RGB-D image. Not all RGB pixels will have a depth value, since there are fewer depth pixels. For this you need to

  • calculate the real-world coordinates (X, Y, Z) of each depth pixel using the depth camera projection matrix
  • project those real-world coordinates into the RGB camera using the RGB camera projection matrix
  • find the matching pixel in the RGB image using the resulting RGB pixel coordinates
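The depth-to-RGB steps above can be sketched with a simple pinhole camera model. This is a minimal illustration under made-up assumptions, not the Kinect's actual calibration: the intrinsic matrices, the identity extrinsics, and the sample depth value are all placeholder numbers; real values come from calibration (e.g. OpenCV's calibrateCamera/stereoCalibrate):

```python
import numpy as np

# Placeholder pinhole intrinsics for both cameras (fx, fy, cx, cy).
K_depth = np.array([[365.0,   0.0, 256.0],
                    [  0.0, 365.0, 212.0],
                    [  0.0,   0.0,   1.0]])
K_rgb   = np.array([[1050.0,    0.0, 960.0],
                    [   0.0, 1050.0, 540.0],
                    [   0.0,    0.0,   1.0]])

# Rigid transform from the depth camera frame to the RGB camera frame
# (identity here; in reality a small baseline offset from calibration).
R = np.eye(3)
t = np.zeros(3)

def depth_pixel_to_rgb_pixel(u, v, z):
    """Back-project depth pixel (u, v) at depth z (metres) to 3D,
    transform into the RGB camera frame, and project to an RGB pixel."""
    # 1. real-world coordinates of the depth pixel
    xyz = z * (np.linalg.inv(K_depth) @ np.array([u, v, 1.0]))
    # 2. move the 3D point into the RGB camera frame
    xyz_rgb = R @ xyz + t
    # 3. project with the RGB camera matrix
    uvw = K_rgb @ xyz_rgb
    return uvw[:2] / uvw[2]
```

With identity extrinsics the depth camera's principal point (256, 212) lands on the RGB principal point (960, 540) at any depth, which is a handy sanity check for the projection math.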

    - You can register the RGB image to the depth image:

This means finding the corresponding depth pixel for each pixel of the RGB image. You end up with a 512x424 RGB-D image. For this you need to

  • calculate the real-world coordinates (X, Y, Z) of each depth pixel using the depth camera projection matrix
  • project those real-world coordinates into the RGB camera using the RGB camera projection matrix
  • assign the color of the matching RGB pixel to that depth pixel in the 512x424 grid

If you want to achieve this in real time, you will need to consider GPU acceleration, especially if your depth image contains more than 30,000 depth points.

I wrote my master's thesis on this topic. If you have more questions, I'm more than happy to help.




Answer 3:


You will need to resample (imresize in MATLAB) if you want to overlay both arrays (e.g. to create an RGB-D image). Note that the field of view differs between depth and color: the far right and left of the color image are not part of the depth image, and the top and bottom of the depth image are not part of the color image.

Consequently, you should

  1. crop the color image in width to match the depth image
  2. crop the depth image in height to match the color image
  3. resample either the color or the depth image using imresize
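The crop-and-resample approach above can be sketched in plain Python (MATLAB's imresize plays the same role for the pixel data; here only the coordinate mapping is shown). The crop offsets below are made-up placeholders; the real overlap must be measured, or taken from calibration or the SDK's coordinate mapper:

```python
# Placeholder crop bounds (hypothetical numbers): assume the depth FOV
# covers color columns 240..1680, and the color FOV covers depth rows
# 20..404. The true overlap comes from calibration.
COLOR_X0, COLOR_X1 = 240, 1680   # crop color in width
DEPTH_Y0, DEPTH_Y1 = 20, 404     # crop depth in height

COLOR_H = 1080
DEPTH_W = 512

def color_to_depth_coords(cx, cy):
    """Map a pixel of the cropped color image onto depth-image
    coordinates by pure scaling (valid only after the crops above
    have aligned the two fields of view)."""
    crop_w = COLOR_X1 - COLOR_X0          # color columns kept
    crop_h = DEPTH_Y1 - DEPTH_Y0          # depth rows kept
    dx = (cx - COLOR_X0) * DEPTH_W / crop_w
    dy = cy * crop_h / COLOR_H + DEPTH_Y0
    return dx, dy
```

The key point for the asker's coordinate problem: after cropping, a coordinate transforms as (value - crop_offset) * scale, so lines drawn on one image can be mapped to the other without re-detecting them. This linear approximation ignores lens distortion and parallax, which is why the CoordinateMapper/registration answers above are more accurate.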


Source: https://stackoverflow.com/questions/44863359/how-can-i-convert-color-stream-1920x1080-into-depth-stream512x424-in-kinect
