sharpen image to detect edges/lines in a stamped “X” object on paper

Submitted by 喜夏-厌秋 on 2019-12-05 00:16:39


I'm using python & opencv. My goal is to detect "X" shaped pieces in an image taken with a raspberry pi camera. The project is that we have pre-printed tic-tac-toe boards, and must image the board every time a new piece is laid onto the board (with ink stamps). Then the output says what type of piece, if any, is in what section of the tic-tac-toe board.

Here, I have the lines I have detected in the image in green:

As you can see, the "X" shaped pieces seem to not be easily detected. Only one line on some of the stamps gets "seen."

Here's what the edge detection looks like after the filters:

My method for detecting the "X" shaped piece is to check in each section for any lines with a non-horizontal/vertical slope. My problem is that the "X" shaped stamps are not perfect lines; thus, my code hardly picks up on the lines.
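The slope check described above can be sketched roughly as follows; `lines` is assumed to be the array returned by `cv2.HoughLinesP`, and the 20-degree tolerance is an arbitrary value to tune:

```python
import numpy as np

def diagonal_lines(lines, tol_deg=20):
    """Return the subset of Hough line segments whose angle is far
    from both horizontal and vertical (candidate 'X' strokes).
    `lines` is the (N, 1, 4) array returned by cv2.HoughLinesP."""
    diagonals = []
    for x1, y1, x2, y2 in lines.reshape(-1, 4):
        angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1))) % 180
        # keep segments at least tol_deg away from 0, 90, and 180 degrees
        if tol_deg < angle < 90 - tol_deg or 90 + tol_deg < angle < 180 - tol_deg:
            diagonals.append((x1, y1, x2, y2))
    return diagonals
```

A section would then count as holding an "X" if it yields at least one (ideally two) such diagonal segments.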

I have tried applying an unsharp filter, using histogram equalization, and feeding plain grayscale straight into edge detection. None of these detected more than one line in any "X" shaped piece.

Roughly what I am doing:

import cv2
import numpy as np

#sharpen image using blur and unsharp masking
gaussian_1 = cv2.GaussianBlur(image, (9,9), 10.0)
unsharp_image = cv2.addWeighted(image, 1.5, gaussian_1, -0.5, 0)
#histogram equalization requires a single-channel (grayscale) image
gray = cv2.cvtColor(unsharp_image, cv2.COLOR_BGR2GRAY)
hist_eq = cv2.equalizeHist(gray)
#edge detection (input, threshold1, threshold2, aperture_size_for_sobel_operator)
edges = cv2.Canny(hist_eq, 50, 150, apertureSize=3)
#find lines (edges, rho_resolution, theta_resolution, min_intersections, minLineLength, maxLineGap)
minLineLength = 10
maxLineGap = 5
lines = cv2.HoughLinesP(edges, 1, np.pi/180, 50, minLineLength=minLineLength, maxLineGap=maxLineGap)

Note that I'm actually applying this to each of the 9 sections of the board individually, but that's not really important.

TLDR: How can I make my image so that my lines are "crisp" and sharp? I would like to know what I can use to make a stamped "X" look like a few lines.


You can try the Canny edge detector, using Otsu's method to robustly determine the two threshold values.

import cv2

im = cv2.imread('9WJTNaZ.jpg', 0)  # read as grayscale
th, bw = cv2.threshold(im, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
edges = cv2.Canny(im, th / 2, th)  # Otsu threshold as high, half of it as low

Then you can use

  • convexity defects of the contours

  • the ratio of the filled contour's area to the area of the contour's bounding box

to differentiate the cross marks from the circles.

This is what I get when I apply Canny to your image.


Since you're using ink stamps, edge detection followed by some kind of character recognition is a hard way to go.

Have you tried using a simple connected components algorithm? Even with the lighting variation seen in your image, a bit of tinkering with a few standard binarization techniques should yield reasonable results.

Once you have your components, you'll have data about moments, perimeter lengths, and so on that should lead you quickly to a calculation to distinguish the two kinds of marks.

Whatever technique you use, consider reducing the image size first so that you have fewer pixels to process. You may notice some other benefits to creating a smaller image.

And if you can, add a small diffuse light to your camera. This should make your programming task easier and detection more robust.