Detecting object regions in an image with OpenCV

Here's what I tried.

In the images, the background is mostly greenish and the area of the background is considerably larger than that of the foreground. So, if you take a color histogram of the image, the greenish bins will have higher values. Threshold this histogram so that bins having smaller values are set to zero. This way we'll most probably retain the greenish (higher value) bins and discard other colors. Then backproject this histogram. The backprojection will highlight these greenish regions in the image.

Backprojection:

  • Then threshold this backprojection. This gives us the background.

Background (after some morphological filtering):

  • Invert the background to get foreground.

Foreground (after some morphological filtering):

  • Then find the contours of the foreground.

I think this gives a reasonable segmentation, and using this as a mask you may be able to run a segmentation such as GrabCut to refine the boundaries (I haven't tried this yet).

EDIT: I tried the GrabCut approach and it indeed refines the boundaries. I've added the code for GrabCut segmentation.

Contours:

GrabCut segmentation using the foreground as mask:

I'm using the OpenCV C API for the histogram processing part.

// load the color image
IplImage* im = cvLoadImage("bFly6.jpg");

// get the color histogram
IplImage* im32f = cvCreateImage(cvGetSize(im), IPL_DEPTH_32F, 3);
cvConvertScale(im, im32f);

int channels[] = {0, 1, 2};
int histSize[] = {32, 32, 32};
float rgbRange[] = {0, 256};
float* ranges[] = {rgbRange, rgbRange, rgbRange};

CvHistogram* hist = cvCreateHist(3, histSize, CV_HIST_ARRAY, ranges);
IplImage* b = cvCreateImage(cvGetSize(im32f), IPL_DEPTH_32F, 1);
IplImage* g = cvCreateImage(cvGetSize(im32f), IPL_DEPTH_32F, 1);
IplImage* r = cvCreateImage(cvGetSize(im32f), IPL_DEPTH_32F, 1);
IplImage* backproject32f = cvCreateImage(cvGetSize(im), IPL_DEPTH_32F, 1);
IplImage* backproject8u = cvCreateImage(cvGetSize(im), IPL_DEPTH_8U, 1);
IplImage* bw = cvCreateImage(cvGetSize(im), IPL_DEPTH_8U, 1);
IplConvKernel* kernel = cvCreateStructuringElementEx(3, 3, 1, 1, CV_SHAPE_ELLIPSE);

cvSplit(im32f, b, g, r, NULL);
IplImage* planes[] = {b, g, r};
cvCalcHist(planes, hist);

// find min and max values of histogram bins
float minval, maxval;
cvGetMinMaxHistValue(hist, &minval, &maxval);

// threshold the histogram. this sets the bin values that are below the threshold to zero
cvThreshHist(hist, maxval/32);

// backproject the thresholded histogram. backprojection should contain higher values for the
// background and lower values for the foreground
cvCalcBackProject(planes, backproject32f, hist);

// convert to 8u type
double min, max;
cvMinMaxLoc(backproject32f, &min, &max);
cvConvertScale(backproject32f, backproject8u, 255.0 / max);

// threshold backprojected image. this gives us the background
cvThreshold(backproject8u, bw, 10, 255, CV_THRESH_BINARY);

// some morphology on background
cvDilate(bw, bw, kernel, 1);
cvMorphologyEx(bw, bw, NULL, kernel, CV_MOP_CLOSE, 2);

// get the foreground
cvSubRS(bw, cvScalar(255, 255, 255), bw);
cvMorphologyEx(bw, bw, NULL, kernel, CV_MOP_OPEN, 2);
cvErode(bw, bw, kernel, 1);

// find contours of the foreground
//CvMemStorage* storage = cvCreateMemStorage(0);
//CvSeq* contours = 0;
//cvFindContours(bw, storage, &contours, sizeof(CvContour), CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
//cvDrawContours(im, contours, CV_RGB(255, 0, 0), CV_RGB(0, 0, 255), 1, 2);

// grabcut (C++ interface; these Mat headers share data with the IplImages, no copy)
Mat color(im);
Mat fg(bw);
Mat mask(bw->height, bw->width, CV_8U);

mask.setTo(GC_PR_BGD);
mask.setTo(GC_PR_FGD, fg);

Mat bgdModel, fgdModel;
grabCut(color, mask, Rect(), bgdModel, fgdModel, 1, GC_INIT_WITH_MASK); // 1 iteration, initialized from the mask

Mat gcfg = mask == GC_PR_FGD;

vector<vector<cv::Point>> contours;
vector<Vec4i> hierarchy;
findContours(gcfg, contours, hierarchy, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE, cv::Point(0, 0));
for(int idx = 0; idx < contours.size(); idx++)
{
    drawContours(color, contours, idx, Scalar(0, 0, 255), 2);
}

// cleanup ...

UPDATE: We can do the above using the C++ interface as shown below.

const int channels[] = {0, 1, 2};
const int histSize[] = {32, 32, 32};
const float rgbRange[] = {0, 256};
const float* ranges[] = {rgbRange, rgbRange, rgbRange};

Mat hist;
Mat im32fc3, backpr32f, backpr8u, backprBw, kernel;

Mat im = imread("bFly6.jpg");

// compute a 3D color histogram and backproject it; the backprojection
// has high values on the dominant (background) colors
im.convertTo(im32fc3, CV_32FC3);
calcHist(&im32fc3, 1, channels, Mat(), hist, 3, histSize, ranges, true, false);
calcBackProject(&im32fc3, 1, channels, hist, backpr32f, ranges);

// here we threshold the backprojection directly (instead of thresholding
// the histogram as in the C version), then binarize to get the background
double minval, maxval;
minMaxIdx(backpr32f, &minval, &maxval);
threshold(backpr32f, backpr32f, maxval/32, 255, THRESH_TOZERO);
backpr32f.convertTo(backpr8u, CV_8U, 255.0/maxval);
threshold(backpr8u, backprBw, 10, 255, THRESH_BINARY);

kernel = getStructuringElement(MORPH_ELLIPSE, Size(3, 3));

// morphological cleanup of the background mask
dilate(backprBw, backprBw, kernel);
morphologyEx(backprBw, backprBw, MORPH_CLOSE, kernel, Point(-1, -1), 2);

// invert to get the foreground, then clean it up as well
backprBw = 255 - backprBw;

morphologyEx(backprBw, backprBw, MORPH_OPEN, kernel, Point(-1, -1), 2);
erode(backprBw, backprBw, kernel);

// build the GrabCut mask: probable background everywhere, probable
// foreground where the cleaned-up foreground mask is set
Mat mask(backpr8u.rows, backpr8u.cols, CV_8U);

mask.setTo(GC_PR_BGD);
mask.setTo(GC_PR_FGD, backprBw);

// refine the segmentation with GrabCut, initializing from the mask
// (note: grabCut requires an iteration count before the mode flag)
Mat bgdModel, fgdModel;
grabCut(im, mask, Rect(), bgdModel, fgdModel, 1, GC_INIT_WITH_MASK);

Mat fg = mask == GC_PR_FGD;
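
To inspect the result, one can copy just the pixels that GrabCut labeled as probable foreground onto a blank canvas; this is a minimal sketch continuing from the im and fg variables above.

// show only the GrabCut foreground ('im' and 'fg' as defined above)
Mat result = Mat::zeros(im.size(), im.type());
im.copyTo(result, fg);
imshow("GrabCut foreground", result);
waitKey(0);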

I would consider a few options. My assumption is that the camera does not move. I haven't used the images or written any code, so this is mostly from experience.

  • Rather than just looking for edges, try separating the background using a segmentation algorithm. Mixture-of-Gaussians (MOG) background subtraction can help with this: given a set of images over the same region (i.e. video), you can cancel out regions which are persistent, and new items such as instruments will pop out. Connected components can then be used on the blobs (see the MOG2 sketch after this list).

    • I would look at segmentation algorithms to see if you can optimize the conditions to make this work for you. One major item is to make sure your camera is stable, or that you stabilize the images yourself as a pre-processing step.
  • I would consider using interest points to identify regions in the image with a lot of new material. Given that the background is relatively plain, small objects such as needles will create a bunch of interest points, while the towel should be much more sparse. Perhaps overlaying the detected interest points over the connected-component footprint will give you a "density" metric which you can then threshold: if a connected component has a large ratio of interest points for the area of the item, then it is an interesting object (see the keypoint-density sketch after this list).

    • On this note, you can even clean up the connected-component footprint by using a convex hull to prune the objects you have detected. This may help situations such as a medical instrument casting a shadow on the towel, which stretches the component region. This is a guess, but interest points can definitely give you more information than just edges.
  • Finally, given that you have a stable background with clear objects in view, I would take a look at Bag-of-Features to see if you can just detect each individual object in the image. This may be useful since there seems to be a consistent pattern to the objects in these images. You can build a big database of images of needles, gauze, scissors, etc. Then BoF, which is available in OpenCV, will find those candidates for you (see the BoF sketch after this list). You can also mix it in with other operations you are doing to compare results.
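
For the Mixture-of-Gaussians suggestion above, here is a minimal sketch using OpenCV's createBackgroundSubtractorMOG2. The video filename, the shadow threshold, and the morphology settings are illustrative assumptions, not details from the original answer.

#include <opencv2/opencv.hpp>

using namespace cv;

int main()
{
    // hypothetical input: a video of the same scene ("scene.avi" is a placeholder)
    VideoCapture cap("scene.avi");

    // MOG2 models each pixel as a mixture of Gaussians: persistent regions
    // become background, new items (e.g. instruments) show up as foreground
    Ptr<BackgroundSubtractorMOG2> mog2 = createBackgroundSubtractorMOG2();

    Mat frame, fgMask;
    while (cap.read(frame))
    {
        mog2->apply(frame, fgMask);

        // drop shadow pixels (marked 127 by default) and binarize
        threshold(fgMask, fgMask, 200, 255, THRESH_BINARY);
        morphologyEx(fgMask, fgMask, MORPH_OPEN,
                     getStructuringElement(MORPH_ELLIPSE, Size(3, 3)));

        // connected components on the remaining blobs
        Mat labels, stats, centroids;
        int n = connectedComponentsWithStats(fgMask, labels, stats, centroids);
        for (int i = 1; i < n; i++) // label 0 is the background
        {
            Rect box(stats.at<int>(i, CC_STAT_LEFT), stats.at<int>(i, CC_STAT_TOP),
                     stats.at<int>(i, CC_STAT_WIDTH), stats.at<int>(i, CC_STAT_HEIGHT));
            rectangle(frame, box, Scalar(0, 0, 255), 2);
        }

        imshow("detections", frame);
        if (waitKey(30) >= 0) break;
    }
    return 0;
}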
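
For the interest-point density idea, a hedged sketch: detect ORB keypoints over the whole image and score each connected region by keypoints per unit of area. The mask filename and the density threshold are guesses.

#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

int main()
{
    Mat im = imread("bFly6.jpg");
    // hypothetical input: a binary foreground mask, e.g. produced as in the answer above
    Mat fgMask = imread("foreground_mask.png", IMREAD_GRAYSCALE);

    // detect interest points over the whole image; small cluttered objects
    // (needles, scissors) should yield dense clusters of keypoints
    vector<KeyPoint> keypoints;
    Ptr<ORB> orb = ORB::create(2000);
    orb->detect(im, keypoints);

    vector<vector<Point>> contours;
    findContours(fgMask, contours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);

    for (size_t i = 0; i < contours.size(); i++)
    {
        Rect box = boundingRect(contours[i]);

        // count keypoints falling inside this component's bounding box
        int hits = 0;
        for (size_t k = 0; k < keypoints.size(); k++)
            if (box.contains(Point(keypoints[k].pt)))
                hits++;

        // density metric: keypoints per pixel of area (the threshold is a guess)
        double density = (double)hits / box.area();
        if (density > 1e-3)
            rectangle(im, box, Scalar(0, 0, 255), 2);
    }

    imshow("interesting objects", im);
    waitKey(0);
    return 0;
}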
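
And for the Bag-of-Features suggestion, a sketch of the basic pipeline with BOWKMeansTrainer and BOWImgDescriptorExtractor. The training filenames, the vocabulary size, and the use of SIFT (in the main module since OpenCV 4.4) are assumptions.

#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

int main()
{
    Ptr<SIFT> sift = SIFT::create();
    Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("FlannBased");

    // 1. collect SIFT descriptors from a (hypothetical) training set
    BOWKMeansTrainer bowTrainer(100); // 100 visual words is a guess
    vector<string> trainFiles = {"needle1.jpg", "gauze1.jpg", "scissors1.jpg"};
    for (size_t i = 0; i < trainFiles.size(); i++)
    {
        Mat img = imread(trainFiles[i], IMREAD_GRAYSCALE);
        vector<KeyPoint> kp;
        Mat desc;
        sift->detectAndCompute(img, noArray(), kp, desc);
        bowTrainer.add(desc);
    }

    // 2. cluster the descriptors into a visual vocabulary (k-means)
    Mat vocabulary = bowTrainer.cluster();

    // 3. describe a query image as a normalized histogram of visual words;
    //    these histograms can then feed a per-object classifier (e.g. an SVM)
    BOWImgDescriptorExtractor bowDE(sift, matcher);
    bowDE.setVocabulary(vocabulary);

    Mat query = imread("bFly6.jpg", IMREAD_GRAYSCALE);
    vector<KeyPoint> kp;
    sift->detect(query, kp);
    Mat bowHist;
    bowDE.compute(query, kp, bowHist);

    cout << "BoW descriptor size: " << bowHist.cols << endl;
    return 0;
}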

Tejash L

I would also suggest an addition to your initial version: you can skip contours whose bounding rectangles have width and height greater than half the image width and height.

// take the bounding rect of the contour
Rect rect = Imgproc.boundingRect(contours.get(i));

// keep only contours smaller than half the image in both dimensions
if (rect.width < inputImageWidth / 2 && rect.height < inputImageHeight / 2) {
    // then continue to draw or use for the next steps
}