How to delete or clear contours from image?


Question


I'm working with license plates. What I do is apply a series of filters to them, such as (a rough sketch of this pipeline follows the list):

  1. Grayscale
  2. Blur
  3. Threshold
  4. Binary
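
Roughly, the pipeline looks like this (a simplified sketch; the file name, blur kernel, and Otsu thresholding are just placeholders, not necessarily the exact settings I use):

import cv2

# placeholder input path, for illustration only
image = cv2.imread("plate.png")

# 1. grayscale
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# 2. blur (the kernel size may need tuning)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)

# 3./4. threshold to a binary image (Otsu chooses the threshold automatically)
_, image_binary = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)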

The problem is that when I do this, some contours like the ones in this image remain at the borders. How can I clear them, or just mask them out with black? I used this code, but sometimes it fails.

import cv2
# cv2_imshow is assumed to be the Google Colab helper, given how it is called below
from google.colab.patches import cv2_imshow

# invert image and detect contours
inverted = cv2.bitwise_not(image_binary_and_dilated)
contours, hierarchy = cv2.findContours(inverted,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)

# get the biggest contour
biggest_index = -1
biggest_area = -1
i = 0
for c in contours:
    area = cv2.contourArea(c)
    if area > biggest_area:
        biggest_area = area
        biggest_index = i
    i = i+1

print("biggest area: " + str(biggest_area) + " index: " + str(biggest_index))

cv2.drawContours(image_binary_and_dilated, contours, biggest_index, [0,0,255])
center, size, angle = cv2.minAreaRect(contours[biggest_index])

rot_mat = cv2.getRotationMatrix2D(center, angle, 1.)

#cv2.warpPerspective()
print(size)
dst = cv2.warpAffine(inverted, rot_mat, (int(size[0]), int(size[1])))

mask = dst * 0
x1 = max([int(center[0] - size[0] / 2)+1, 0])
y1 = max([int(center[1] - size[1] / 2)+1, 0])
x2 = int(center[0] + size[0] / 2)-1
y2 = int(center[1] + size[1] / 2)-1

point1 = (x1, y1)
point2 = (x2, y2)
print(point1)
print(point2)

cv2.rectangle(dst, point1, point2, [0,0,0])
cv2.rectangle(mask, point1, point2, [255,255,255], cv2.FILLED)

masked = cv2.bitwise_and(dst, mask)

#cv2_imshow(imgg)
cv2_imshow(dst)
cv2_imshow(masked)
#cv2_imshow(mask)

Some results:

The original plates were:

  1. Good result 1
  2. Good result 2
  3. Good result 3
  4. Good result 4
  5. Bad result 1
  6. Bad result 2

Binary plates are:

  1. Image 1
  2. Image 2
  3. Image 3
  4. Image 4
  5. Image 5 - Bad result 1
  6. Image 6 - Bad result 2

How can I fix this code? I just want to avoid those bad results, or at least improve them.


Answer 1:


INTRODUCTION

What you are asking is starting to become complicated, and I believe there is no longer a single right or wrong answer, just different ways to do this. Almost all of them will yield both positive and negative results, most likely in different ratios. Getting a 100% positive result is quite a challenging task, and I do believe my answer does not reach it. Yet it can be the basis for more sophisticated work towards that goal.

MY PROPOSAL

So, I want to make a different proposal here. I am not 100% sure why you are doing all of those steps, and I believe some of them may be unnecessary. Let's start from the problem: you want to remove the white parts at the borders (which are not numbers). So we need a way to distinguish them from the letters, in order to tackle them correctly. If we just try to find contours and warp, it is likely to work on some images and not on others, because they do not all look the same. This is the hardest part of finding a general solution that works for many images.

What is the difference between the characteristics of the numbers and the characteristics of the borders (and other small specks)? After thinking about it, I would say: the shapes! Meaning that if you imagine a bounding box around a letter or number, it looks like a rectangle whose size is related to the image size, while the border regions are usually very large and narrow, or too small to be considered a letter or number (random points).

Therefore, my bet would be on segmentation, separating the features by their shape. So we take the binary image, we remove some parts using the projections on the axes (as you correctly asked about in the previous question, and which I believe we should use), and we get an image where each letter is separated from the white borders. Then we can segment the image and check the shape of each segmented object: if we think it is a letter, we keep it; otherwise we discard it.

THE CODE

I wrote the code below as an example on your data. Some of the parameters are tuned for this set of images, so they may have to be relaxed for a larger dataset.

import cv2
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
import scipy.ndimage as ndimage

# do this for all the images
num_images = 6
plt.figure(figsize=(16,16))
for k in range(num_images):

    # read the image
    binary_image = cv2.imread("binary_image/img{}.png".format(k), cv2.IMREAD_GRAYSCALE)
    # just for visualization purposes, I create another image with the same shape, to show what I am doing
    new_intermediate_image = np.zeros((binary_image.shape), np.uint8)
    new_intermediate_image += binary_image
    # here we will copy only the cleaned parts
    new_cleaned_image = np.zeros((binary_image.shape), np.uint8)

    ### THIS CODE COMES FROM THE PREVIOUS ANSWER: 
    # https://stackoverflow.com/questions/62127537/how-to-clean-binary-image-using-horizontal-projection?noredirect=1&lq=1
    (rows,cols)=binary_image.shape
    h_projection = np.array([ x/rows for x in binary_image.sum(axis=0)])
    threshold_h = (np.max(h_projection) - np.min(h_projection)) / 10
    print("we will use threshold {} for horizontal".format(threshold))
    # select the black areas
    black_areas_horizontal = np.where(h_projection < threshold_h)
    for j in black_areas_horizontal:
        new_intermediate_image[:, j] = 0

    v_projection = np.array([ x/cols for x in binary_image.sum(axis=1)])
    threshold_v = (np.max(v_projection) - np.min(v_projection)) / 10
    print("we will use threshold {} for vertical".format(threshold_v))
    black_areas_vertical = np.where(v_projection < threshold_v)
    for j in black_areas_vertical:
        new_intermediate_image[j, :] = 0
    ### UNTIL HERE

    # define the features we are looking for
    # these parameters can also be tuned
    min_width = binary_image.shape[1] / 14
    max_width = binary_image.shape[1] / 2
    min_height = binary_image.shape[0] / 5
    max_height = binary_image.shape[0]
    print("we look for feature with width in [{},{}] and height in [{},{}]".format(min_width, max_width, min_height, max_height))
    # segment the image
    labeled_array, num_features = ndimage.label(new_intermediate_image)

    # loop over all features found
    for i in range(1, num_features + 1):  # labels start at 1 (label 0 is the background)
        # get a bounding box around them
        slice_x, slice_y = ndimage.find_objects(labeled_array==i)[0]
        roi = labeled_array[slice_x, slice_y]
        # check the shape, if the bounding box is what we expect, copy it to the new image
        if roi.shape[0] > min_height and \
            roi.shape[0] < max_height and \
            roi.shape[1] > min_width and \
            roi.shape[1] < max_width:
            new_cleaned_image += (labeled_array == i)

    # print all images on a grid
    plt.subplot(num_images,3,1+(k*3))
    plt.imshow(binary_image)
    plt.subplot(num_images,3,2+(k*3))
    plt.imshow(new_intermediate_image)
    plt.subplot(num_images,3,3+(k*3))
    plt.imshow(new_cleaned_image)

This produces the output below (in the grid, the left column shows the input images, the central column shows the images after masking based on the histogram projections, and the right column shows the cleaned images):

CONCLUSIONS:

As said above, this method does not yield 100% positive results. The last picture has lower quality, and some parts are disconnected and get lost in the process. I personally believe this is a price worth paying to get cleaner images; if you have a lot of images, it won't be a problem, and you can discard those kinds of images. Overall, I think this method returns quite clean images, where everything that is not a letter or a number is correctly removed.

ADVANTAGES

  • the image is clean; nothing other than letters and numbers is kept

  • the parameters can be tuned, and should be consistent across images

  • in case of problems, adding some prints or debugging to the loop that chooses which features to keep should make it easier to understand where the problems are and correct them (see the sketch after this list)
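
For example, a minimal debugging sketch for that loop, assuming the same variable names as in the code above (the print format is just an illustration):

for i in range(1, num_features + 1):
    slice_x, slice_y = ndimage.find_objects(labeled_array == i)[0]
    roi = labeled_array[slice_x, slice_y]
    # report each candidate's bounding-box size and whether it passes the shape check
    keep = min_height < roi.shape[0] < max_height and min_width < roi.shape[1] < max_width
    print("feature {}: height={}, width={}, kept={}".format(i, roi.shape[0], roi.shape[1], keep))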

LIMITATIONS

  • it may fail in cases where letters or numbers touch the white borders, which seems quite possible. This is handled by the black_areas created using the projections, but I am not so confident it will work 100% of the time.

  • some small parts of the numbers can be lost during the process, as in the last picture.



Source: https://stackoverflow.com/questions/62190121/how-to-delete-or-clear-contours-from-image
