Background subtraction and Optical flow for tracking object in OpenCV C++

Submitted by 假装没事ソ on 2019-12-07 08:01:14

Question


I am working on a project to detect objects of interest using background subtraction and track them using optical flow in OpenCV C++. I was able to detect the object of interest using background subtraction, and I was able to implement OpenCV's Lucas-Kanade optical flow in a separate program. However, I am stuck on how to combine these two programs into one. frame1 holds the actual frame from the video, and contours2 are the selected contours of the foreground object.

To summarize, how do I feed the foreground object obtained from the background subtraction method to calcOpticalFlowPyrLK? Or, correct me if my approach is wrong. Thank you in advance.

    // mask: binary image of the foreground objects (CV_8UC1), built from the
    // contours selected after background subtraction
    Mat mask = Mat::zeros(fore.rows, fore.cols, CV_8UC1);
    drawContours(mask, contours2, -1, Scalar(255), 4, CV_FILLED);

    if (first_frame)
    {
        // detect corners directly on the binary mask and remember it
        goodFeaturesToTrack(mask, features_next, 1000, 0.01, 10, noArray(), 3, false, 0.04);
        fm0 = mask.clone();
        features_prev = features_next;
        first_frame = false;
    }
    else
    {
        features_next.clear();
        if (!features_prev.empty())
        {
            // track the previous mask's corners into the current mask
            calcOpticalFlowPyrLK(fm0, mask, features_prev, features_next, featuresFound, err, winSize, 3, termcrit, 0, 0.001);
            for (size_t i = 0; i < features_prev.size(); i++)
                line(frame1, features_prev[i], features_next[i], CV_RGB(0, 0, 255), 1, 8);
            imshow("final optical", frame1);
            waitKey(1);
        }
        // re-detect corners on the current mask for the next iteration
        goodFeaturesToTrack(mask, features_next, 1000, 0.01, 10, noArray(), 3, false, 0.04);
        features_prev = features_next;
        fm0 = mask.clone();
    }

Answer 1:


Your approach of using optical flow for tracking is wrong. The idea behind the optical flow approach is that a moving point has the same pixel intensity at its start point and end point in two consecutive images. That means the motion of a feature is estimated by observing its appearance in the start image and searching for that structure in the end image (very simplified).
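Written out, this is the standard brightness constancy assumption (standard notation, not taken from the answer itself):

    I(x, y, t) = I(x + \Delta x, y + \Delta y, t + \Delta t)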

calcOpticalFlowPyrLK is a point tracker, which means points in the previous image are tracked into the current one. Therefore the method needs the original gray-value images of your system, because it can only estimate motion in structured/textured regions (you need x and y gradients in your image).
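For illustration, here is a minimal sketch of that idea, assuming you keep the original grayscale frames around; the function trackForeground and all its parameter names are mine, not from the question. Corners are detected on the textured gray image but restricted to the foreground mask from background subtraction, and then tracked between the gray frames:

    #include <opencv2/opencv.hpp>
    #include <vector>

    // prevGray/currGray: consecutive grayscale video frames (CV_8UC1)
    // foregroundMask:    binary mask of the blobs from background subtraction (CV_8UC1)
    // display:           color image to draw the motion vectors on
    void trackForeground(const cv::Mat& prevGray, const cv::Mat& currGray,
                         const cv::Mat& foregroundMask, cv::Mat& display)
    {
        std::vector<cv::Point2f> prevPts, nextPts;

        // The mask only limits *where* corners are searched; the corners themselves
        // come from the gray image, which has the gradients LK needs.
        cv::goodFeaturesToTrack(prevGray, prevPts, 1000, 0.01, 10, foregroundMask);
        if (prevPts.empty()) return;

        std::vector<uchar> status;
        std::vector<float> err;
        // Track the points from the previous gray frame into the current gray frame.
        cv::calcOpticalFlowPyrLK(prevGray, currGray, prevPts, nextPts, status, err,
                                 cv::Size(21, 21), 3);

        for (size_t i = 0; i < prevPts.size(); ++i)
            if (status[i])
                cv::line(display, prevPts[i], nextPts[i], cv::Scalar(255, 0, 0), 1, cv::LINE_8);
    }

In your loop you would call something like this with the previous and current grayscale frames instead of passing fm0 and mask to the tracker.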

I think your code should do something like:

  1. Extract objects by background subtraction (by contour); in the literature this is called a blob.
  2. Extract the objects in the next image and apply a blob association (decide which contour belongs to which); this is also called blob tracking. It is possible to do blob tracking with calcOpticalFlowPyrLK, e.g. in a very simple way:
  3. Track points taken from the contour, or points inside the blob.
  4. Association: a previous contour corresponds to a current one if the tracked points that belong to the previous contour end up on the current contour (see the sketch after this list).
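Below is a rough sketch of this association idea. The function name associateBlob, the majority-vote heuristic, and all variable names are my own illustrations, not something prescribed by the answer; it assumes you keep the grayscale frames and the contour lists from the background-subtraction step.

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Return the index of the current contour that best matches prevContour,
    // or -1 if no tracked point lands inside any current contour.
    int associateBlob(const cv::Mat& prevGray, const cv::Mat& currGray,
                      const std::vector<cv::Point>& prevContour,
                      const std::vector<std::vector<cv::Point>>& currContours)
    {
        // Use the contour points themselves as the points to track.
        std::vector<cv::Point2f> prevPts;
        for (const cv::Point& p : prevContour)
            prevPts.push_back(cv::Point2f((float)p.x, (float)p.y));
        if (prevPts.empty()) return -1;

        std::vector<cv::Point2f> nextPts;
        std::vector<uchar> status;
        std::vector<float> err;
        cv::calcOpticalFlowPyrLK(prevGray, currGray, prevPts, nextPts, status, err);

        // Vote: which current contour contains the most successfully tracked points?
        std::vector<int> votes(currContours.size(), 0);
        for (size_t i = 0; i < nextPts.size(); ++i)
        {
            if (!status[i]) continue;
            for (size_t c = 0; c < currContours.size(); ++c)
                if (cv::pointPolygonTest(currContours[c], nextPts[i], false) >= 0)
                    ++votes[c];
        }

        int best = -1, bestVotes = 0;
        for (size_t c = 0; c < votes.size(); ++c)
            if (votes[c] > bestVotes) { bestVotes = votes[c]; best = (int)c; }
        return best;
    }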



Answer 2:


I think the output of background subtraction in OpenCV is not a grayscale image, and as input to optical flow we need grayscale images.
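As a small sketch of that point (gray1, gray2, and frame2 are names I introduce here; the other variables are the ones from the question's code), convert the color frames to single-channel grayscale and pass those to the tracker instead of the mask:

    // frame1/frame2: consecutive BGR frames from the video (frame2 is assumed here)
    cv::Mat gray1, gray2;
    cv::cvtColor(frame1, gray1, cv::COLOR_BGR2GRAY);
    cv::cvtColor(frame2, gray2, cv::COLOR_BGR2GRAY);
    cv::calcOpticalFlowPyrLK(gray1, gray2, features_prev, features_next,
                             featuresFound, err, winSize, 3, termcrit, 0, 0.001);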



Source: https://stackoverflow.com/questions/33118585/background-subtraction-and-optical-flow-for-tracking-object-in-opencv-c
