Measuring the diameter of holes in metal parts from pictures taken with a telecentric, monochrome camera, using OpenCV

Submitted by 末鹿安然 on 2020-01-02 03:47:45

Question


Setup:

  • Camera: Blackfly S Mono 20.0 MP
  • Lens: Opto telecentric lens TC23080
  • Lights: 16 green LEDS
  • Python: 3.7.3
  • openCV: 4.0+

Sorry for the image links, but one image is around 20 MB, and I did not want to lose any quality.

Image samples:

https://drive.google.com/file/d/11PU-5fzvSJt1lKlmP-lQXhdsuCJPGKbN/view?usp=sharing https://drive.google.com/file/d/1B3lSFx8YvTYv3hzuuuYtphoHBuyEdc4o/view

Case: There will be metal parts of different shapes, sized from 5x5 to 10x10 cm. Inside these metal parts there are anywhere from 2 to roughly 10 circular holes that have to be detected very accurately. The actual sizes of the holes are unknown, as there is a huge variety of possible parts. The goal is to write a generic algorithm with OpenCV that can work with any metal part and detect its circular holes.

What we have tried: We have tried to detect the holes with the HoughCircles algorithm, with little to no success: the algorithm is either too sensitive or does not detect the holes at all. We have experimented with different param1 and param2 values with no success. We have also tried blurring the image and passing it through Canny before using HoughCircles, but this approach did not produce better results. The very same algorithm works significantly better on lower-resolution pictures; however, resolution cannot be sacrificed, as accuracy is extremely important in this project.

https://drive.google.com/file/d/1TRdDbperi37bha0uJVALS4C2dBuaNz6u/view?usp=sharing

The above circles were detected with the following parameters:

minradius=0
maxradius=0
dp=1
param1=100
param2=21
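
For reference, a minimal sketch of how these parameters map onto a cv2.HoughCircles call (a sketch only; the file name, the median blur, and the minDist value are assumptions, not necessarily what we actually used):

import cv2
import numpy as np

gray = cv2.imread('part.bmp', cv2.IMREAD_GRAYSCALE)   # hypothetical file name
blur = cv2.medianBlur(gray, 5)                        # one common preprocessing choice

# dp, param1, param2, minRadius and maxRadius as listed above;
# minDist is the minimum allowed distance between detected centres.
circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                           param1=100, param2=21, minRadius=0, maxRadius=0)

if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(gray, (x, y), r, 255, 2)           # draw detections for inspection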

By playing around with the above parameters, we can get results that are close to what we want. The problem arises when we use the same parameters on different pictures.

The end result we want is the diameter of a given circle, measured with great accuracy, and we want the same algorithm to be usable on pictures of different parts.

What makes this problem different from the other ones posted is that we do not know the approximate radius of a given circle (so we cannot manipulate minradius, maxradius, param1, param2 or any other values).


Answer 1:


We know two things about these images:

  1. The objects are dark, on a bright background.
  2. The holes are all circles, and we want to measure all holes.

So all we need to do is detect holes. This is actually quite trivial:

  1. threshold (background becomes the object, since it's bright)
  2. remove edge objects

What is left are the holes. Any hole touching the image edge will not be included. We can now easily measure these holes. Since we assume they're circular, we can do three things:

  1. Count object pixels; this is an unbiased estimate of the area. From the area we determine the hole diameter.
  2. Detect contours, find the centroid, then use e.g. the mean distance of the contour points to the centroid as the radius.
  3. Normalize the image intensities so the background illumination has an intensity of 1 and the object with the holes in it has an intensity of 0. The integral of the intensities over each hole is a sub-pixel-precision estimate of its area (see the bottom of this answer for a quick explanation of this method).

This Python code, using DIPlib (I'm an author), shows how to do these three approaches:

import PyDIP as dip
import numpy as np

img = dip.ImageRead('geriausias.bmp')
img.SetPixelSize(dip.PixelSize(dip.PhysicalQuantity(1,'um'))) # Usually this info is in the image file
bin, thresh = dip.Threshold(img)
bin = dip.EdgeObjectsRemove(bin)
bin = dip.Label(bin)
msr = dip.MeasurementTool.Measure(bin, features=['Size','Radius'])
print(msr)
d1 = np.sqrt(np.array(msr['Size'])[:,0] * 4 / np.pi)
print("method 1:", d1)
d2 = np.array(msr['Radius'])[:,1] * 2
print("method 2:", d2)

bin = dip.Dilation(bin, 10) # we need larger regions to average over so we take all of the light
                            # coming through the hole into account.
img = (dip.ErfClip(img, thresh, thresh/4, "range") - (thresh*7/8)) / (thresh/4)
msr = dip.MeasurementTool.Measure(bin, img, features=['Mass'])
d3 = np.sqrt(np.array(msr['Mass'])[:,0] * 4 / np.pi)
print("method 3:", d3)

This gives the output:

  |       Size |                                            Radius | 
- | ---------- | ------------------------------------------------- | 
  |            |        Max |       Mean |        Min |     StdDev | 
  |      (µm²) |       (µm) |       (µm) |       (µm) |       (µm) | 
- | ---------- | ---------- | ---------- | ---------- | ---------- | 
1 |  6.282e+04 |      143.9 |      141.4 |      134.4 |      1.628 | 
2 |  9.110e+04 |      171.5 |      170.3 |      168.3 |     0.5643 | 
3 |  6.303e+04 |      143.5 |      141.6 |      133.9 |      1.212 | 
4 |  9.103e+04 |      171.6 |      170.2 |      167.3 |     0.6292 | 
5 |  6.306e+04 |      143.9 |      141.6 |      126.5 |      2.320 | 
6 |  2.495e+05 |      283.5 |      281.8 |      274.4 |     0.9805 | 
7 |  1.176e+05 |      194.4 |      193.5 |      187.1 |     0.6303 | 
8 |  1.595e+05 |      226.7 |      225.3 |      219.8 |     0.8629 | 
9 |  9.063e+04 |      171.0 |      169.8 |      167.6 |     0.5457 | 

method 1: [282.8250363  340.57242408 283.28834869 340.45277017 283.36249824
 563.64770132 386.9715443  450.65294139 339.70023023]
method 2: [282.74577033 340.58808144 283.24878097 340.43862835 283.1641869
 563.59706479 386.95245928 450.65392268 339.68617582]
method 3: [282.74836803 340.56787463 283.24627163 340.39568372 283.31396961
 563.601641   386.89884807 450.62167913 339.68954136]

The image bin, after calling dip.Label, is an integer image in which the pixels for hole 1 all have value 1, those for hole 2 have value 2, etc. So we still keep the relationship between the measured sizes and the holes they belong to. I have not bothered to make a markup image showing the sizes on the image, but that can easily be done, as you've seen in the other answers.

Because there is no pixel-size information in the image files, I've imposed 1 micron per pixel. This is likely not correct; you will have to do a calibration to obtain the actual pixel size.
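
A minimal sketch of such a calibration (the reference size and measured pixel width below are made-up, purely illustrative numbers): image a feature of known physical size, measure its width in pixels, and derive the pixel pitch before setting it on the image.

known_width_um = 5000.0   # e.g. a 5.000 mm reference feature in the field of view (hypothetical)
px_width = 4093.0         # its measured width in pixels (hypothetical)
um_per_px = known_width_um / px_width
img.SetPixelSize(dip.PixelSize(dip.PhysicalQuantity(um_per_px, 'um')))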

A problem here is that the background illumination is too bright, giving saturated pixels. This causes the holes to appear larger than they actually are. It is important to calibrate the system so that the background illumination is close to the maximum that can be recorded by the camera, but not at that maximum nor above. For example, try to get the background intensity to be 245 or 250. The 3rd method is most affected by bad illumination.
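
A quick way to check for clipping is to inspect the raw pixel values directly, for example with OpenCV and NumPy (a sketch assuming an 8-bit capture of the first image):

import cv2
import numpy as np

raw = cv2.imread('geriausias.bmp', cv2.IMREAD_GRAYSCALE)
saturated = np.count_nonzero(raw >= 255) / raw.size
print("fraction of saturated pixels:", saturated)                 # ideally ~0
print("typical background intensity:", np.percentile(raw, 99))    # aim for roughly 245-250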

For the second image, the brightness is very low, giving a noisier image than necessary. I needed to modify the line bin = dip.Label(bin) into:

bin = dip.Label(bin, 2, 500) # Imposing minimum object size rather than filtering

It may be easier to do some noise filtering instead (a sketch follows the results below). The output was:

  |       Size |                                            Radius | 
- | ---------- | ------------------------------------------------- | 
  |            |        Max |       Mean |        Min |     StdDev | 
  |      (µm²) |       (µm) |       (µm) |       (µm) |       (µm) | 
- | ---------- | ---------- | ---------- | ---------- | ---------- | 
1 |  4.023e+06 |      1133. |      1132. |      1125. |     0.4989 | 

method 1: [2263.24621554]
method 2: [2263.22724164]
method 3: [2262.90068056]
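
As an alternative to imposing a minimum object size in dip.Label, a light smoothing before thresholding would suppress the noise. A minimal sketch, assuming dip.Gauss with its usual (image, sigmas) signature and a sigma of 2 pixels (both are assumptions, not part of the original answer):

smoothed = dip.Gauss(img, [2])     # small Gaussian blur to suppress noise
bin, thresh = dip.Threshold(smoothed)
bin = dip.EdgeObjectsRemove(bin)
bin = dip.Label(bin)               # no minimum object size needed any more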

Quick explanation of method #3

The method is described in the PhD thesis of Lucas van Vliet (Delft University of Technology, 1993), chapter 6.

Think of it this way: the amount of light that comes through the hole is proportional to the area of the hole (actually it is given by 'area' x 'light intensity'). By adding up all the light that comes through the hole, we know the area of the hole. The code adds up all pixel intensities for the object as well as some pixels just outside the object (I'm using 10 pixels there; how far out to go depends on the blurring).

The erfclip function is a "soft clip" function: it ensures that the intensity inside the hole is uniformly 1 and the intensity outside the hole is uniformly 0, leaving intermediate gray values only around the edges. In this particular case, the soft clip avoids some issues with offsets in the imaging system and with poor estimates of the light intensity. In other cases it is more important, avoiding issues with uneven color of the objects being measured. It also reduces the influence of noise.
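
To see why this integral gives a sub-pixel area estimate, here is a small synthetic illustration (not part of the answer; all sizes and the use of OpenCV for the blur are arbitrary choices). A bright disk of known, off-grid radius is rendered with supersampling, blurred to mimic the optics, and block-averaged to simulate camera sampling; summing the normalized intensities recovers the area, and hence the diameter, to well under a pixel.

import cv2
import numpy as np

ss = 8                                     # supersampling factor
size = 200                                 # coarse image size (pixels)
radius = 30.3                              # true hole radius (pixels), deliberately off-grid
cx, cy = 100.37, 99.81                     # off-grid centre
y, x = np.mgrid[:size*ss, :size*ss]
fine = (((x + 0.5)/ss - cx)**2 + ((y + 0.5)/ss - cy)**2 <= radius**2).astype(np.float64)
fine = cv2.GaussianBlur(fine, (0, 0), sigmaX=2.0*ss)           # optical blur
coarse = fine.reshape(size, ss, size, ss).mean(axis=(1, 3))    # camera sampling

# Background normalized to 0 and hole interior to 1: the sum of the
# intensities estimates the hole area, from which the diameter follows.
area = coarse.sum()
print("estimated diameter:", 2*np.sqrt(area/np.pi))   # very close to 60.6
print("true diameter:     ", 2*radius)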




Answer 2:



Here's an approach

  • Convert image to grayscale and Gaussian blur
  • Adaptive threshold
  • Perform morphological transformations to smooth/filter image
  • Find contours
  • Find perimeter of contour and perform contour approximation
  • Obtain bounding rectangle and centroid to get diameter

After finding contours, we perform contour approximation. The idea is that if the approximated contour has three vertices, it must be a triangle. Similarly, if it has four, it must be a square or a rectangle. Therefore, we can assume that if it has more than some number of vertices, the shape is a circle.

There are several ways to get the diameter: one is to find the bounding rectangle of the contour and use its width; another is to calculate it from the centroid coordinates (a sketch of this appears after the code below).

import cv2

image = cv2.imread('1.bmp')

# Gray, blur, adaptive threshold
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (3,3), 0)
thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]

# Morphological transformations
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5,5))
opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel)

# Find contours
cnts = cv2.findContours(opening, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]

for c in cnts:
    # Find perimeter of contour
    perimeter = cv2.arcLength(c, True)
    # Perform contour approximation
    approx = cv2.approxPolyDP(c, 0.04 * perimeter, True)

    # We assume that if the contour has more than a certain
    # number of vertices, the contour shape is a circle
    if len(approx) > 6:

        # Obtain bounding rectangle to get measurements
        x,y,w,h = cv2.boundingRect(c)

        # Find measurements
        diameter = w
        radius = w/2

        # Find centroid
        M = cv2.moments(c)
        cX = int(M["m10"] / M["m00"])
        cY = int(M["m01"] / M["m00"])

        # Draw the contour and center of the shape on the image
        cv2.rectangle(image,(x,y),(x+w,y+h),(0,255,0),4)
        cv2.drawContours(image,[c], 0, (36,255,12), 4)
        cv2.circle(image, (cX, cY), 15, (320, 159, 22), -1) 

        # Draw line and diameter information 
        cv2.line(image, (x, y + int(h/2)), (x + w, y + int(h/2)), (156, 188, 24), 3)
        cv2.putText(image, "Diameter: {}".format(diameter), (cX - 50, cY - 50), cv2.FONT_HERSHEY_SIMPLEX, 3, (156, 188, 24), 3)

cv2.imwrite('image.png', image)
cv2.imwrite('thresh.png', thresh)
cv2.imwrite('opening.png', opening)
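
To illustrate the second option mentioned above (getting the diameter from the centroid instead of the bounding rectangle), one possibility is to average the distances from the contour points to the centroid. This is a sketch, not part of the original answer; it would go inside the loop after cX and cY are computed (with NumPy imported as np):

        # Diameter from the centroid: twice the mean distance of the
        # contour points to (cX, cY)
        pts = c.reshape(-1, 2).astype(np.float64)
        distances = np.sqrt((pts[:, 0] - cX)**2 + (pts[:, 1] - cY)**2)
        diameter_from_centroid = 2 * distances.mean()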



Answer 3:


You can threshold the image and use findContours to find the contours of the holes and then fit circles to them with minEnclosingCircle. The fitted circles can be sanity checked by comparing them with the area of the contour.

import cv2 as cv
import math
import numpy as np

gray = cv.imread('geriausias.bmp', cv.IMREAD_GRAYSCALE)
_,mask = cv.threshold(gray, 127, 255, cv.THRESH_BINARY)
contours,_ = cv.findContours(mask, cv.RETR_LIST, cv.CHAIN_APPROX_NONE)
contours = [contour for contour in contours if len(contour) > 15]
circles = [cv.minEnclosingCircle(contour) for contour in contours]
areas = [cv.contourArea(contour) for contour in contours]
radiuses = [math.sqrt(area / math.pi) for area in areas]

# Render contours blue and circles green.
canvas = cv.cvtColor(mask, cv.COLOR_GRAY2BGR)
cv.drawContours(canvas, contours, -1, (255, 0, 0), 10)
for circle, radius_from_area in zip(circles, radiuses):
    if 0.9 <= circle[1] / radius_from_area <= 1.1:  # Only allow 10% error in radius.
        p = (round(circle[0][0]), round(circle[0][1]))
        r = round(circle[1])
        cv.circle(canvas, p, r, (0, 255, 0), 10)
cv.imwrite('geriausias_circles.png', canvas)

canvas_small = cv.resize(canvas, None, None, 0.25, 0.25, cv.INTER_AREA)
cv.imwrite('geriausias_circles_small.png', canvas_small)

Circles that pass the sanity check are shown in green on top of all contours which are shown in blue.



Source: https://stackoverflow.com/questions/57297612/measuring-the-diameter-pictures-of-holes-in-metal-parts-photographed-with-telec
