OpenCV warpPerspective

孤城傲影 2020-12-01 05:45

For some reason, whenever I use OpenCV's warpPerspective() function, the final warped image does not contain everything in the original image. The left part of the image seems to be cut off…

9 Answers
  • 2020-12-01 06:25

    An easy way to fix the issue of the warped image being projected outside the warping output is to translate the warped image to the right position. The main challenge lies in finding the correct offset for translation.

    The concept of translation is already discussed in the other answers given here, so I will explain how to get the right offset. The idea is that matching features in the two images should have the same coordinates in the final stitched image.

    Let's say we refer to the images as follows:

    • 'source image' (si): the image which needs to be warped
    • 'destination image' (di): the image to whose perspective the 'source image' will be warped
    • 'warped source image' (wsi): the source image after warping it to the destination image's perspective

    Following is what you need to do in order to calculate offset for translation:

    1. After you have sampled the good matches and found the mask from the homography, store the best match's keypoint (the one with minimum distance that is also an inlier, i.e. gets a value of 1 in the mask obtained from the homography calculation) in si and di. Let's say the best match's keypoints in si and di are bm_si and bm_di respectively.

      bm_si = [x1, y1, 1]

      bm_di = [x2, y2, 1]

    2. Find the position of bm_si in wsi by multiplying it with the homography matrix (H): bm_wsi = np.dot(H, bm_si). Then normalize by the homogeneous coordinate:

      bm_wsi = [x / bm_wsi[2] for x in bm_wsi]

    3. Depending on where you will place di on the output of warping si (= wsi), adjust bm_di.

      For example, if you are warping from the left image to the right image (so that the left image is si and the right image is di), you will be placing di to the right of wsi, and hence bm_di[0] += si.shape[1] (shape[1] is the image width, which is what the x coordinate needs).

    4. Now, after the above steps, compute the translation offsets. The matched feature must land at the same coordinates, so measure against the warped keypoint bm_wsi:

      x_offset = bm_di[0] - bm_wsi[0]

      y_offset = bm_di[1] - bm_wsi[1]

    5. Using the calculated offsets, build a translation matrix, compose it with the homography, and warp si:

      T = np.array([[1, 0, x_offset], [0, 1, y_offset], [0, 0, 1]])

      translated_H = np.dot(T, H)

      wsi_frame_size = (2 * si.shape[1], 2 * si.shape[0])  # (width, height), as cv2.warpPerspective expects

      stitched = cv2.warpPerspective(si, translated_H, wsi_frame_size)

      stitched[0:di.shape[0], si.shape[1]:si.shape[1] + di.shape[1]] = di
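    The five steps above can be sketched end-to-end with plain NumPy. The homography H and the keypoint pair bm_si/bm_di below are synthetic placeholders (real code would obtain them from cv2.findHomography and a feature matcher); the sketch only checks the offset arithmetic, namely that the translated homography maps bm_si exactly onto bm_di:

```python
import numpy as np

# Hypothetical homography mapping si into di's perspective.
H = np.array([[1.0, 0.1, -30.0],
              [0.0, 1.0,  10.0],
              [0.0, 0.0,   1.0]])

bm_si = np.array([100.0, 50.0, 1.0])  # best-match keypoint in the source image
bm_di = np.array([80.0, 60.0, 1.0])   # corresponding keypoint in the destination image

# Step 2: project the source keypoint through H and dehomogenise.
bm_wsi = H @ bm_si
bm_wsi = bm_wsi / bm_wsi[2]

# Step 4: translation offsets that align the matched features.
x_offset = bm_di[0] - bm_wsi[0]
y_offset = bm_di[1] - bm_wsi[1]

# Step 5: fold the translation into the homography.
T = np.array([[1, 0, x_offset],
              [0, 1, y_offset],
              [0, 0, 1]], dtype=float)
translated_H = T @ H

# The translated homography now maps bm_si exactly onto bm_di.
check = translated_H @ bm_si
check = check / check[2]
print(np.allclose(check[:2], bm_di[:2]))  # prints True
```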

  • 2020-12-01 06:25

    Here is an opencv-python solution for your problem; I put it on GitHub: https://github.com/Sanster/notes/blob/master/opencv/warpPerspective.md

    The key point is, as user3094631 said, to get two translation matrices (T1, T2) and apply them to the rotation matrix (M): T2*M*T1.

    In the code I give, T1 translates from the center point of the origin image, and T2 translates from the top-left point of the transformed bounding box. The transformed bounding box comes from the origin image's corner points:

    height = img.shape[0]
    width = img.shape[1]
    #..get T1
    #..get M
    pnts = np.asarray([
        [0, 0],
        [width, 0],
        [width, height],
        [0, height]
        ], dtype=np.float32)
    pnts = np.array([pnts])
    dst_pnts = cv2.perspectiveTransform(pnts, M * T1)[0]
    dst_pnts = np.asarray(dst_pnts, dtype=np.float32)
    bbox = cv2.boundingRect(dst_pnts)
    T2 = np.matrix([[1., 0., 0 - bbox[0]],
                    [0., 1., 0 - bbox[1]],
                    [0., 0., 1.]])
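    The `#..get T1` and `#..get M` placeholders above are left to the reader. One possible way to fill them, sketched here with plain NumPy under the assumption that M is a pure rotation about the origin by a hypothetical angle (so T1 must first move the image centre to the origin), with the corner projection and bounding box done by hand instead of cv2.perspectiveTransform/cv2.boundingRect:

```python
import numpy as np

height, width = 480, 640   # hypothetical image size
theta = np.deg2rad(30)     # hypothetical rotation angle

# T1: translate so the image centre moves to the origin.
T1 = np.matrix([[1., 0., -width / 2.],
                [0., 1., -height / 2.],
                [0., 0., 1.]])

# M: rotate about the origin (a full 3x3 homography, not the 2x3
# affine matrix that cv2.getRotationMatrix2D returns).
c, s = np.cos(theta), np.sin(theta)
M = np.matrix([[c, -s, 0.],
               [s,  c, 0.],
               [0., 0., 1.]])

# Project the corners with M * T1, then take the bounding box.
pnts = np.array([[0, 0], [width, 0], [width, height], [0, height]], float)
hom = np.hstack([pnts, np.ones((4, 1))])       # homogeneous corners
dst = np.asarray((M * T1) @ hom.T).T
dst = dst[:, :2] / dst[:, 2:]

# T2: shift the bounding box's top-left corner to (0, 0).
x0, y0 = dst.min(axis=0)
T2 = np.matrix([[1., 0., -x0],
                [0., 1., -y0],
                [0., 0., 1.]])

full = T2 * M * T1   # final matrix to pass to cv2.warpPerspective
```

    With `full`, every projected corner lands at non-negative coordinates, so nothing is clipped on the left or top.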
    
  • 2020-12-01 06:28

    Try the below homography_warp.

    void homography_warp(const cv::Mat& src, const cv::Mat& H, cv::Mat& dst);
    

    src is the source image.

    H is your homography.

    dst is the warped image.

    homography_warp adjusts your homography as described by Matt Freeman (https://stackoverflow.com/users/1060066/matt-freeman) in his answer https://stackoverflow.com/a/8229116/15485.

    // Convert a vector of non-homogeneous 2D points to a vector of homogeneous 2D points.
    void to_homogeneous(const std::vector< cv::Point2f >& non_homogeneous, std::vector< cv::Point3f >& homogeneous)
    {
        homogeneous.resize(non_homogeneous.size());
        for (size_t i = 0; i < non_homogeneous.size(); i++) {
            homogeneous[i].x = non_homogeneous[i].x;
            homogeneous[i].y = non_homogeneous[i].y;
            homogeneous[i].z = 1.0;
        }
    }
    
    // Convert a vector of homogeneous 2D points to a vector of non-homogeneous 2D points.
    void from_homogeneous(const std::vector< cv::Point3f >& homogeneous, std::vector< cv::Point2f >& non_homogeneous)
    {
        non_homogeneous.resize(homogeneous.size());
        for (size_t i = 0; i < non_homogeneous.size(); i++) {
            non_homogeneous[i].x = homogeneous[i].x / homogeneous[i].z;
            non_homogeneous[i].y = homogeneous[i].y / homogeneous[i].z;
        }
    }
    
    // Transform a vector of 2D non-homogeneous points via a homography.
    std::vector<cv::Point2f> transform_via_homography(const std::vector<cv::Point2f>& points, const cv::Matx33f& homography)
    {
        std::vector<cv::Point3f> ph;
        to_homogeneous(points, ph);
        for (size_t i = 0; i < ph.size(); i++) {
            ph[i] = homography*ph[i];
        }
        std::vector<cv::Point2f> r;
        from_homogeneous(ph, r);
        return r;
    }
    
    // Find the bounding box of a vector of 2D non-homogeneous points.
    cv::Rect_<float> bounding_box(const std::vector<cv::Point2f>& p)
    {
        cv::Rect_<float> r;
        float x_min = std::min_element(p.begin(), p.end(), [](const cv::Point2f& lhs, const cv::Point2f& rhs) {return lhs.x < rhs.x; })->x;
        float x_max = std::max_element(p.begin(), p.end(), [](const cv::Point2f& lhs, const cv::Point2f& rhs) {return lhs.x < rhs.x; })->x;
        float y_min = std::min_element(p.begin(), p.end(), [](const cv::Point2f& lhs, const cv::Point2f& rhs) {return lhs.y < rhs.y; })->y;
        float y_max = std::max_element(p.begin(), p.end(), [](const cv::Point2f& lhs, const cv::Point2f& rhs) {return lhs.y < rhs.y; })->y;
        return cv::Rect_<float>(x_min, y_min, x_max - x_min, y_max - y_min);
    }
    
    // Warp the image src into the image dst through the homography H.
    // The resulting dst image contains the entire warped image, this
    // behaviour is the same of Octave's imperspectivewarp (in the 'image'
    // package) behaviour when the argument bbox is equal to 'loose'.
    // See http://octave.sourceforge.net/image/function/imperspectivewarp.html
    void homography_warp(const cv::Mat& src, const cv::Mat& H, cv::Mat& dst)
    {
        std::vector< cv::Point2f > corners;
        corners.push_back(cv::Point2f(0, 0));
        corners.push_back(cv::Point2f(src.cols, 0));
        corners.push_back(cv::Point2f(0, src.rows));
        corners.push_back(cv::Point2f(src.cols, src.rows));
    
        std::vector< cv::Point2f > projected = transform_via_homography(corners, H);
        cv::Rect_<float> bb = bounding_box(projected);
    
        cv::Mat_<double> translation = (cv::Mat_<double>(3, 3) << 1, 0, -bb.tl().x, 0, 1, -bb.tl().y, 0, 0, 1);
    
        cv::warpPerspective(src, dst, translation*H, bb.size());
    }
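    The same logic can be checked numerically in Python. The sketch below mirrors transform_via_homography and bounding_box with a hypothetical homography: before the adjustment, some corners project to negative coordinates (which is exactly what crops the left part of a plain warpPerspective output), and after premultiplying by the translation, every corner lands at non-negative coordinates:

```python
import numpy as np

# Hypothetical homography that pushes part of the image to negative coords.
H = np.array([[1.0, 0.2, -100.0],
              [0.1, 1.0,  -50.0],
              [0.0, 0.0,    1.0]])
rows, cols = 480, 640

# transform_via_homography: project the four corners through H.
corners = np.array([[0, 0], [cols, 0], [0, rows], [cols, rows]], float)
hom = np.hstack([corners, np.ones((4, 1))])
proj = (H @ hom.T).T
proj = proj[:, :2] / proj[:, 2:]
print(proj.min() < 0)  # prints True: part of the warp falls outside the frame

# bounding_box: top-left of the projected quadrilateral.
x_min, y_min = proj.min(axis=0)

# translation * H, as in homography_warp.
translation = np.array([[1, 0, -x_min],
                        [0, 1, -y_min],
                        [0, 0, 1.0]])
adj = (translation @ H @ hom.T).T
adj = adj[:, :2] / adj[:, 2:]
print(np.allclose(adj.min(axis=0), [0, 0]))  # prints True: nothing clipped
```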
    