OpenCV homography does not produce the required transformation

Submitted by 旧巷老猫 on 2020-01-03 05:41:06

Question


I am trying to transform an image along the edge of an object (here the object is a book). Using Canny edge detection, I detect the edges, and from the score matrix, based on pixel value, I choose 4 coordinates lying on the edge for the transformation. But the transformation is not what I thought it would be. What is the problem / where am I going wrong?

First I sliced out a portion of the image, then applied Canny edge detection and selected 4 edge coordinate points based on my own condition. My original image is:

For the experiment, I sliced out the region I need:

The size of this image is (61, 160).

Now I need to transform the above image to make the edge of the book parallel to the horizontal axis.

import cv2
from matplotlib import pyplot as plt

img = cv2.imread('download1.jpg', 0)   # read the slice in grayscale
edges = cv2.Canny(img, 100, 200)       # Canny edge detection
print(img.shape)
plt.imshow(img, cmap='gray')
plt.show()

l = []        # selected edge points as [x, y]
y_list = []   # x-coordinates already used, to avoid duplicate columns
k = 1
for i in range(0, img.shape[0]):
    for j in range(0, img.shape[1]):
        # take the first edge pixel in each row below row 31,
        # up to 4 points, each with a distinct x-coordinate
        if edges[i][j] == 255 and k <= 4 and i > 31 and j not in y_list:
            l.append([j, i])
            y_list.append(j)
            k += 1
            break

The edge detection image is obtained as:

The contents of the list l are:

[[49 32]
 [44 33]
 [40 34]
 [36 35]]

Then I set the destination points, given by the list lt, as:

[[49 61]
 [44 60]
 [40 61]
 [36 60]]

Then I found the homography matrix and used it to warp the perspective:

h, status = cv2.findHomography(l,lt)
im_out = cv2.warpPerspective(img, h, (img.shape[1],img.shape[0]))
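
For reference, cv2.findHomography is typically called with the point sets as NumPy float arrays; a minimal sketch of the same call with an explicit conversion (the point values are the ones listed above, and the src/dst names are just illustrative):

import numpy as np

src = np.float32([[49, 32], [44, 33], [40, 34], [36, 35]])   # points from l
dst = np.float32([[49, 61], [44, 60], [40, 61], [36, 60]])   # points from lt
h, status = cv2.findHomography(src, dst)
im_out = cv2.warpPerspective(img, h, (img.shape[1], img.shape[0]))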

But the warp does not produce the required result! The resultant output image is:


Answer 1:


I faced a similar issue, and this is how I solved it (quite similar to your method actually), except that I used getRotationMatrix2D instead of a homography:

  1. Read the image.
  2. Run an edge detector.
  3. Use a Hough transform to get all the lines (with an inclination inside a specific interval):

    lines = cv.HoughLinesP(img, 1, np.pi/180, 100, minLineLength=100, maxLineGap=10)
    
  4. Get the lines' average inclination; in my case I had a lot of parallel lines to use as references, and in this way I was able to get a better result:

     tot_angle = 0
     cnt = 0
     for line in lines:
         x1, y1, x2, y2 = line[0]
         if (x2 - x1) != 0:
             angle = math.atan(float(y2 - y1) / float(x2 - x1)) * 180 / math.pi
         else:
             angle = 90
         # you can skip this test if you have no info about the lines you're looking for;
         # in that case offset_angle is 0
         if min_angle_threshold <= angle <= max_angle_threshold:
             tot_angle = tot_angle + angle
             cnt = cnt + 1
     average_angle = (tot_angle / cnt) - offset_angle
    
  5. Apply the counter-rotation (and rotate back afterwards if needed):

      # your rotation center - probably the center of the image
      height, width = img.shape
      center = (width / 2, height / 2)
      rotation_matrix = cv.getRotationMatrix2D(center, average_angle, 1.0)
      rotated_image = cv.warpAffine(img, rotation_matrix, (width, height))

      # do whatever you want, then rotate the image back
      counter_rotation_matrix = cv.getRotationMatrix2D(center, -average_angle, 1.0)
      original_image = cv.warpAffine(rotated_image, counter_rotation_matrix, (width, height))
    

Edit: see the full example here:

    import math
    import numpy as np
    import cv2 as cv

    img = cv.imread('C:\\temp\\test_3.jpg',0)
    edges = cv.Canny(img,100,200)
    lines = cv.HoughLinesP(edges[0:50,:], 1, np.pi/180, 50, minLineLength=10, maxLineGap=10)
    tot_angle = 0
    cnt = 0
    for line in lines:
        x1,y1,x2,y2 = line[0]
        if (x2-x1) != 0:
            angle = math.atan((float(y2-y1))/float((x2-x1))) * 180 / math.pi
        else:
            angle = 90

        if -30 <= angle <= 30:
            tot_angle = tot_angle + angle
            cnt = cnt + 1
    average_angle = (tot_angle / cnt)
    h,w = img.shape[:2]
    center = w/2, h/2
    rotation_matrix = cv.getRotationMatrix2D(center, average_angle, 1.0)
    height, width = img.shape
    rotated_image = cv.warpAffine(img, rotation_matrix, (width, height))
    cv.imshow("roto", rotated_image)
    #do all your stuff here, add text and whatever
    #...
    #...
    counter_rotation_matrix = cv.getRotationMatrix2D(center, -average_angle, 1.0)
    original_image = cv.warpAffine( rotated_image, counter_rotation_matrix, (width, height))
    cv.imshow("orig", original_image)

(image: rotated)

(image: counter_rotated)

EDIT:

In case you want to apply a homography (different from just a simple rotation, because it also applies a perspective transformation), below is the code to make it work:

# very basic example, similar to your code, with hard-coded points
l  = np.array([(11,32),(43,215),(142,1),(205,174)])
lt = np.array([(43,32),(43,215),(205,32),(205,215)])
h, status = cv.findHomography(l, lt)
im_out = cv.warpPerspective(img, h, (img.shape[1], img.shape[0]))
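
If, as in the rotation example, the warped image also needs to be mapped back afterwards, a minimal sketch using the inverse homography (assuming the h, im_out, and img variables from the snippet above):

# invert the homography to map the warped image back onto the original frame
h_inv = np.linalg.inv(h)
im_back = cv.warpPerspective(im_out, h_inv, (img.shape[1], img.shape[0]))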

To do it programmatically:

  • for "l": just use HoughLinesP as well, find the 4 corners, then add them
  • for "lt": find a "destination" for all 4 points, for instance using the bottom corners as reference

    lines = cv.HoughLinesP(edges, 1, np.pi/180, 100, minLineLength=150, maxLineGap=5)
    l = []
    x_values = []   # x-coordinates of the (almost) vertical edges found
    y_values = []   # y-coordinates of the first vertical edge, used as reference
    for line in lines:
        x1, y1, x2, y2 = line[0]

        if (x2 - x1) != 0:
            angle = math.atan(float(y2 - y1) / float(x2 - x1)) * 180 / math.pi
        else:
            angle = 90
        # consider only (almost) vertical edges
        if 60 <= angle:
            l.append((x1, y1))
            l.append((x2, y2))
            x_values.append(max(x1, x2))
            if len(y_values) == 0:
                y_values.append(y1)
                y_values.append(y2)
    l  = np.array(l)
    lt = np.array([(x_values[0], y_values[0]), (x_values[0], y_values[1]), (x_values[1], y_values[0]), (x_values[1], y_values[1])])
    

Then call findHomography as done above. Hope it's clear enough!
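
For completeness, a minimal sketch of that last step, reusing the l and lt arrays built in the snippet above:

    h, status = cv.findHomography(l, lt)
    im_out = cv.warpPerspective(img, h, (img.shape[1], img.shape[0]))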


Source: https://stackoverflow.com/questions/56578126/opencv-homography-does-not-produce-the-required-tranformation
