OpenCV: warpPerspective on whole image


Question


I'm detecting markers on images captured with my iPad. Because I want to calculate translations and rotations between them, I want to change the perspective of these images so it looks like I'm capturing them from directly above the markers.

Right now I'm using

points2D.push_back(cv::Point2f(0, 0));
points2D.push_back(cv::Point2f(50, 0));
points2D.push_back(cv::Point2f(50, 50));
points2D.push_back(cv::Point2f(0, 50));

Mat perspectiveMat = cv::getPerspectiveTransform(points2D, imagePoints);
cv::warpPerspective(*_image, *_undistortedImage, perspectiveMat, cv::Size(_image->cols, _image->rows));

Which gives me these results (look at the bottom-right corner for the result of warpPerspective):

[photo 1] [photo 2] [photo 3]

As you can see, the result image contains the recognized marker in its top-left corner. My problem is that I want to capture the whole image (without cropping) so I can detect other markers on that image later.

How can I do that? Maybe I should use the rotation/translation vectors from the solvePnP function?

EDIT:

Unfortunately, changing the size of the warped image doesn't help much, because the image is still translated so that the top-left corner of the marker ends up in the top-left corner of the image.

For example, when I doubled the size using:

cv::warpPerspective(*_image, *_undistortedImage, M, cv::Size(2*_image->cols, 2*_image->rows));

I received these images:

[photo 4] [photo 5]


Answer 1:


Your code doesn't seem to be complete, so it is difficult to say what the problem is.

In any case, the warped image might have completely different dimensions than the input image, so you will have to adjust the size parameter you are using for warpPerspective.

For example, try doubling the size:

cv::warpPerspective(*_image, *_undistortedImage, M, cv::Size(2*_image->cols, 2*_image->rows));

Edit:

To make sure the whole image ends up inside the resulting image, all corners of your original image must be warped to positions inside the result. So simply calculate the warped destination of each corner point and adjust the destination points accordingly.

To make it clearer, here is some sample code:

// calculate the transformation; note that M maps the destination
// points (points2D) to the source points (imagePoints)
cv::Matx33f M = cv::getPerspectiveTransform(points2D, imagePoints);

// calculate the warped position of all corners of the source image
cv::Matx33f Minv = M.inv();

cv::Point3f a = Minv * cv::Point3f(0, 0, 1);
a = a * (1.0f / a.z);

cv::Point3f b = Minv * cv::Point3f(0, _image->rows, 1);
b = b * (1.0f / b.z);

cv::Point3f c = Minv * cv::Point3f(_image->cols, _image->rows, 1);
c = c * (1.0f / c.z);

cv::Point3f d = Minv * cv::Point3f(_image->cols, 0, 1);
d = d * (1.0f / d.z);

// to make sure all corners are in the image, every warped position
// must be >= (0, 0), so shift by the most negative coordinate (if any)
float x = std::ceil(std::max(0.0f, -std::min(std::min(a.x, b.x), std::min(c.x, d.x))));
float y = std::ceil(std::max(0.0f, -std::min(std::min(a.y, b.y), std::min(c.y, d.y))));

// ...and the canvas must reach the (shifted) maximum coordinate
float width  = std::ceil(std::max(std::max(a.x, b.x), std::max(c.x, d.x))) + x;
float height = std::ceil(std::max(std::max(a.y, b.y), std::max(c.y, d.y))) + y;

// adjust the target points accordingly
for (int i = 0; i < 4; i++) {
    points2D[i] += cv::Point2f(x, y);
}

// recalculate the transformation
M = cv::getPerspectiveTransform(points2D, imagePoints);

// get the result; WARP_INVERSE_MAP makes warpPerspective use M as a
// destination-to-source mapping, matching how M was computed above
cv::Mat result;
cv::warpPerspective(*_image, result, M, cv::Size((int)width, (int)height), cv::WARP_INVERSE_MAP);



Answer 2:


There are two things you need to do:

  1. Increase the size of the output of cv2.warpPerspective
  2. Translate the warped source image so that its center matches the center of the cv2.warpPerspective output image

Here is how the code will look:

import cv2
import numpy as np

# center of the source image in homogeneous (x, y, 1) coordinates;
# note that shape is (rows, cols), so x = cols and y = rows
si_c = [image.shape[1] // 2, image.shape[0] // 2, 1]
# find where the center of the source image lands after warping,
# without compensating for any offset
wsi_c = np.dot(H, si_c)
wsi_c = [x / wsi_c[2] for x in wsi_c]
# warping output image size, as (width, height)
stitched_frame_size = (2 * image.shape[1], 2 * image.shape[0])
# center of the warping output image
wf_c = (image.shape[1], image.shape[0])
# calculate the offset for translating the warped image
x_offset = wf_c[0] - wsi_c[0]
y_offset = wf_c[1] - wsi_c[1]
# translation matrix
T = np.array([[1, 0, x_offset], [0, 1, y_offset], [0, 0, 1]])
# translate the homography matrix
translated_H = np.dot(T, H)
# warp
stitched = cv2.warpPerspective(image, translated_H, stitched_frame_size)
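
For context, here is a minimal sketch (not part of the original answer) of how the image and H used above might be produced; the file name and point coordinates are placeholder values:

import cv2
import numpy as np

# hypothetical input image (placeholder file name)
image = cv2.imread("frame.jpg")

# four detected marker corners in the source image (assumed values)...
src_pts = np.float32([[120, 340], [480, 310], [500, 650], [140, 680]])
# ...and where they should end up in the output
dst_pts = np.float32([[0, 0], [400, 0], [400, 400], [0, 400]])

# homography mapping source points to destination points
H = cv2.getPerspectiveTransform(src_pts, dst_pts)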



Answer 3:


I implemented littleimp's answer in Python in case anyone needs it. It should be noted that this will not work properly if a vanishing point of the polygon falls within the image, because a corner then maps through infinity and the computed bounding box becomes meaningless.
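
As a side note (not part of the original answer), one way to detect that failure case is to check the sign of the homogeneous w coordinate of each corner before dividing, using the same mat and corners variables as inside the function below:

    # corners as homogeneous (x, y, 1) rows; mat is the 3x3 homography
    hom = np.hstack([corners, np.ones((4, 1))]).astype("float32")
    w = (mat @ hom.T)[2]
    if np.any(w <= 0):
        print("a corner crosses the vanishing line; the bounding box is invalid")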

    import cv2
    import numpy as np
    from PIL import Image, ImageDraw
    import math
    
    
    def get_transformed_image(src, dst, img):
        # calculate the transformation
        mat = cv2.getPerspectiveTransform(src.astype("float32"), dst.astype("float32"))
        
            
        # new source: the image corners, as (x, y); PIL's img.size is (width, height)
        corners = np.array([
                        [0, img.size[1]],
                        [0, 0],
                        [img.size[0], 0],
                        [img.size[0], img.size[1]]
                    ])
    
        # transform the corners of the image
        corners_transformed = cv2.perspectiveTransform(
                                      np.array([corners.astype("float32")]), mat)
        
        x_mn = math.floor(min(corners_transformed[0].T[0]))
        y_mn = math.floor(min(corners_transformed[0].T[1]))

        x_mx = math.ceil(max(corners_transformed[0].T[0]))
        y_mx = math.ceil(max(corners_transformed[0].T[1]))
    
        width = x_mx - x_mn
        height = y_mx - y_mn
    
        # scale so that the output height is 1000 pixels
        analogy = height / 1000
        n_height = height / analogy
        n_width = width / analogy
    
    
        dst2 = corners_transformed
        dst2 -= np.array([x_mn, y_mn])
        dst2 = dst2 / analogy
    
        mat2 = cv2.getPerspectiveTransform(corners.astype("float32"),
                                           dst2.astype("float32"))
    
    
        img_warp = Image.fromarray(
            cv2.warpPerspective(np.array(img),
                                mat2,
                                (int(n_width),
                                 int(n_height))))
        return img_warp
    
    
    # image coordinates
    src = np.array([[ 789.72, 1187.35],
                    [ 789.72,  752.75],
                    [1277.35,  730.66],
                    [1277.35, 1200.65]])
    
    
    # known coordinates
    dst = np.array([[0, 1000],
                    [0, 0],
                    [1092, 0],
                    [1092, 1000]])
    
    # create a test image (canvas size is arbitrary, chosen large enough to contain src)
    img_width, img_height = 1600, 1400
    image = Image.new('RGB', (img_width, img_height))
    image.paste((200, 200, 200), [0, 0, image.size[0], image.size[1]])
    draw = ImageDraw.Draw(image)
    draw.line(((src[0][0],src[0][1]),(src[1][0],src[1][1]), (src[2][0],src[2][1]),(src[3][0],src[3][1]), (src[0][0],src[0][1])), width=4, fill="blue")
    #image.show()
    
    warped = get_transformed_image(src, dst, image)
    warped.show()


Source: https://stackoverflow.com/questions/19695702/opencv-wrapperspective-on-whole-image
