Kinect 1.8 colorframe and depthframe not coordinated

Submitted by 大憨熊 on 2020-01-24 10:30:48

Question


My program has a problem with poor coordination between the depth and color images.

The player mask is not in the same place as the person (see the picture below).

void _AllFramesReady(object sender, AllFramesReadyEventArgs e)
{
    using (ColorImageFrame _colorFrame = e.OpenColorImageFrame())
    {
        if (_colorFrame == null)  // if the frame is empty, do nothing
        {
            return;
        }
        byte[] _pixels = new byte[_colorFrame.PixelDataLength]; // pixel array sized to one captured frame from the color stream
        _colorFrame.CopyPixelDataTo(_pixels);                   // copy the pixels into the array
        int _stride = _colorFrame.Width * 4;                    // each pixel takes 4 bytes: Blue, Green, Red and one unused
        image1.Source =
            BitmapSource.Create(_colorFrame.Width, _colorFrame.Height,
            96, 96, PixelFormats.Bgr32, null, _pixels, _stride);

        if (_closing)
        {
            return;
        }

        using (DepthImageFrame _depthFrame = e.OpenDepthImageFrame())
        {
            if (_depthFrame == null)
            {
                return;
            }

            byte[] _pixelsdepth = _GenerateColoredBytes(_depthFrame,_pixels);
            int _dstride = _depthFrame.Width * 4;
            image3.Source =
                BitmapSource.Create(_depthFrame.Width, _depthFrame.Height,
                96, 96, PixelFormats.Bgr32, null, _pixelsdepth, _dstride);
        }
    }               
}

private byte[] _GenerateColoredBytes(DepthImageFrame _depthFrame, byte[] _pixels)
{
    short[] _rawDepthData = new short[_depthFrame.PixelDataLength];
    _depthFrame.CopyPixelDataTo(_rawDepthData);
    byte[] _dpixels = new byte[_depthFrame.Height * _depthFrame.Width * 4];
    const int _blueindex = 0;
    const int _greenindex = 1;
    const int _redindex = 2;

    for (int _depthindex = 0, _colorindex = 0;
        _depthindex < _rawDepthData.Length && _colorindex < _dpixels.Length;
        _depthindex++, _colorindex += 4)
    {
        int _player = _rawDepthData[_depthindex] & DepthImageFrame.PlayerIndexBitmask; // extract the player index bits (PlayerIndexBitmaskWidth is the bit count, not the mask)

        if (_player > 0)
        {
            _dpixels[_colorindex + _redindex] = _pixels[_colorindex + _redindex]; 
            _dpixels[_colorindex + _greenindex] = _pixels[_colorindex + _greenindex];
            _dpixels[_colorindex + _blueindex] = _pixels[_colorindex + _blueindex];

        }
    }

    return _dpixels;
}


Answer 1:


RGB and depth data are not aligned out of the box. The depth sensor and the RGB camera sit at different positions in the Kinect case, so you cannot expect aligned images from two different points of view.

However, your problem is quite common: it used to be solved by KinectSensor.MapDepthFrameToColorFrame, which was deprecated as of SDK 1.6. Now what you need is the CoordinateMapper.MapDepthFrameToColorFrame method.

The Coordinate Mapping Basics-WPF C# Sample shows how to use this method. The most significant parts of its code follow:

// Intermediate storage for the depth data received from the sensor
private DepthImagePixel[] depthPixels;
// Intermediate storage for the color data received from the camera
private byte[] colorPixels;
// Intermediate storage for the depth to color mapping
private ColorImagePoint[] colorCoordinates;
// Inverse scaling factor between color and depth
private int colorToDepthDivisor;
// Format we will use for the depth stream
private const DepthImageFormat DepthFormat = DepthImageFormat.Resolution320x240Fps30;
// Format we will use for the color stream
private const ColorImageFormat ColorFormat = ColorImageFormat.RgbResolution640x480Fps30;

//...

// Initialization
this.depthPixels = new DepthImagePixel[this.sensor.DepthStream.FramePixelDataLength];
this.colorPixels = new byte[this.sensor.ColorStream.FramePixelDataLength];
this.colorCoordinates = new ColorImagePoint[this.sensor.DepthStream.FramePixelDataLength];
this.depthWidth = this.sensor.DepthStream.FrameWidth;
this.depthHeight = this.sensor.DepthStream.FrameHeight;
int colorWidth = this.sensor.ColorStream.FrameWidth;
int colorHeight = this.sensor.ColorStream.FrameHeight;
this.colorToDepthDivisor = colorWidth / this.depthWidth;
this.sensor.AllFramesReady += this.SensorAllFramesReady;

//...

private void SensorAllFramesReady(object sender, AllFramesReadyEventArgs e)
{
    // in the middle of shutting down, so nothing to do
    if (null == this.sensor)
    {
        return;
    }

    bool depthReceived = false;
    bool colorReceived = false;

    using (DepthImageFrame depthFrame = e.OpenDepthImageFrame())
    {
        if (null != depthFrame)
        {
            // Copy the pixel data from the image to a temporary array
            depthFrame.CopyDepthImagePixelDataTo(this.depthPixels);

            depthReceived = true;
        }
    }

    using (ColorImageFrame colorFrame = e.OpenColorImageFrame())
    {
        if (null != colorFrame)
        {
            // Copy the pixel data from the image to a temporary array
            colorFrame.CopyPixelDataTo(this.colorPixels);

            colorReceived = true;
        }
    }

    if (true == depthReceived)
    {
        this.sensor.CoordinateMapper.MapDepthFrameToColorFrame(
            DepthFormat,
            this.depthPixels,
            ColorFormat,
            this.colorCoordinates);

        // ...

        int depthIndex = x + (y * this.depthWidth);
        DepthImagePixel depthPixel = this.depthPixels[depthIndex];

        // color coordinates that correspond to this depth pixel
        ColorImagePoint colorImagePoint = this.colorCoordinates[depthIndex];

        // scale color coordinates to depth resolution
        int X = colorImagePoint.X / this.colorToDepthDivisor;
        int Y = colorImagePoint.Y / this.colorToDepthDivisor;

        // depthPixel is the depth for the (X,Y) pixel in the color frame
    }
}
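
Applied to the question's code, the mapping output replaces the like-for-like index lookup in _GenerateColoredBytes: instead of reading the color pixel at the depth index, you read the color pixel at the coordinates MapDepthFrameToColorFrame computed for that depth pixel. The following is only a minimal sketch, assuming the same fields as the sample above (depthPixels, colorPixels, colorCoordinates, depthWidth, depthHeight); the helper name GenerateAlignedPlayerMask is invented for illustration and is not part of the sample:

// Sketch: build a Bgr32 player mask at depth resolution, sampling each
// color pixel at the location that MapDepthFrameToColorFrame mapped it to.
private byte[] GenerateAlignedPlayerMask(int colorWidth, int colorHeight)
{
    byte[] output = new byte[this.depthWidth * this.depthHeight * 4];

    for (int depthIndex = 0; depthIndex < this.depthPixels.Length; depthIndex++)
    {
        // CopyDepthImagePixelDataTo already unpacked the player index
        if (this.depthPixels[depthIndex].PlayerIndex <= 0)
        {
            continue; // not part of a tracked player
        }

        // where this depth pixel lands in the color image
        ColorImagePoint colorPoint = this.colorCoordinates[depthIndex];

        // mapped points can fall outside the color frame; skip those
        if (colorPoint.X < 0 || colorPoint.X >= colorWidth ||
            colorPoint.Y < 0 || colorPoint.Y >= colorHeight)
        {
            continue;
        }

        int colorIndex = (colorPoint.X + colorPoint.Y * colorWidth) * 4;
        int outIndex = depthIndex * 4;

        output[outIndex + 0] = this.colorPixels[colorIndex + 0]; // blue
        output[outIndex + 1] = this.colorPixels[colorIndex + 1]; // green
        output[outIndex + 2] = this.colorPixels[colorIndex + 2]; // red
    }

    return output;
}

The result can be displayed exactly as in the question (a depthWidth x depthHeight Bgr32 BitmapSource), but the copied colors now come from the correct positions in the color frame.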



Answer 2:


I am working on this problem myself. I agree with VitoShadow that one solution is in the coordinate mapping, but he did not post the section where the ratio between the mismatched depth and color resolutions is used (this.colorToDepthDivisor = colorWidth / this.depthWidth;). This is combined with a shift of the data (this.playerPixelData[playerPixelIndex - 1] = opaquePixelValue;) to account for the mismatch.

Unfortunately, this can create a border around the masked image wherever the depth frame isn't stretched to the edge of the color frame. I am trying to avoid skeleton mapping and am optimizing my code by tracking depth data with Emgu CV to pass a point as the center of the ROI of the color frame. I am still working on it.
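
For reference, here is a rough sketch of that divisor-plus-shift approach, modeled on the SDK's Green Screen-WPF sample; playerPixelData (an int[] opacity mask at depth resolution) and opaquePixelValue are assumed to be declared as in that sample:

// Sketch: mark player pixels opaque in a depth-resolution mask, scaling
// the mapped color coordinates back down with colorToDepthDivisor.
for (int depthIndex = 0; depthIndex < this.depthPixels.Length; depthIndex++)
{
    if (this.depthPixels[depthIndex].PlayerIndex > 0)
    {
        ColorImagePoint colorImagePoint = this.colorCoordinates[depthIndex];

        // scale color coordinates to depth resolution
        int colorInDepthX = colorImagePoint.X / this.colorToDepthDivisor;
        int colorInDepthY = colorImagePoint.Y / this.colorToDepthDivisor;

        if (colorInDepthX > 0 && colorInDepthX < this.depthWidth &&
            colorInDepthY >= 0 && colorInDepthY < this.depthHeight)
        {
            int playerPixelIndex = colorInDepthX + (colorInDepthY * this.depthWidth);

            // mark this pixel and its left neighbour opaque; the one-pixel
            // shift compensates for residual misalignment, but it is also
            // what leaves the border mentioned above
            this.playerPixelData[playerPixelIndex] = opaquePixelValue;
            this.playerPixelData[playerPixelIndex - 1] = opaquePixelValue;
        }
    }
}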



Source: https://stackoverflow.com/questions/29682863/kinect-1-8-colorframe-and-depthframe-not-coordinated
