For some reason, whenever I use OpenCV's warpPerspective() function, the final warped image does not contain everything from the original image: the left part seems to get cut off. I think this is happening because the warped image is created at the leftmost position of the canvas for warpPerspective(). Is there some way to fix this? Thanks
Question:
Answer 1:
The problem occurs because the homography maps part of the image to negative x, y values, which are outside the image area and so cannot be plotted. What we wish to do is offset the warped output by some number of pixels to 'shunt' the entire warped image into positive coordinates (and hence inside the image area).
Homographies can be combined using matrix multiplication (which is why they are so powerful). If A and B are homographies, then AB represents the homography which applies B first, and then A.
Because of this, all we need to do to offset the output is create the homography matrix for a translation by some offset, and then pre-multiply it by our original homography matrix.
A 2D homography matrix looks like this:

[R11, R12, T1]
[R21, R22, T2]
[ P ,  P ,  1]
where R represents a rotation matrix, T represents a translation, and P represents a perspective warp. And so a purely translational homography looks like this:
[1, 0, x_offset]
[0, 1, y_offset]
[0, 0,    1    ]
So just premultiply your homography by a matrix similar to the above, and your output image will be offset.
(Make sure you use matrix multiplication, not element wise multiplication!)
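A minimal sketch of the idea (the offset values and output size are illustrative, and H stands for your existing homography, assumed to be CV_64F):

#include <opencv2/opencv.hpp>

// Sketch: shunt the warp into positive coordinates by pre-multiplying
// the homography with a pure translation. Offsets are placeholder values.
cv::Mat offset_warp(const cv::Mat& src, const cv::Mat& H)
{
    double x_offset = 100, y_offset = 200;   // illustrative offsets
    cv::Mat T = (cv::Mat_<double>(3, 3) <<
        1, 0, x_offset,
        0, 1, y_offset,
        0, 0, 1);
    cv::Mat dst;
    // T * H applies H first, then the translation (a matrix product, not element-wise).
    cv::warpPerspective(src, dst, T * H,
                        cv::Size(src.cols + (int)x_offset, src.rows + (int)y_offset));
    return dst;
}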
Answer 2:
The secret comes in two parts: the transform matrix (homography), and the resulting image size.
Calculate a correct transform using getPerspectiveTransform(). Take 4 points from the original image, calculate their correct positions in the destination, put them into two vectors in the same order, and use them to compute the perspective transform matrix.
Make sure the destination image size (the third parameter to warpPerspective()) is exactly what you want. Define it as Size(myWidth, myHeight).
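A minimal sketch of that recipe (the four destination points and the output size are placeholder values):

#include <opencv2/opencv.hpp>

// Sketch: compute the transform from four point pairs and warp into a
// destination of exactly the size you choose.
cv::Mat warp_with_chosen_size(const cv::Mat& src)
{
    std::vector<cv::Point2f> src_pts = {
        {0, 0}, {(float)src.cols, 0},
        {(float)src.cols, (float)src.rows}, {0, (float)src.rows}};
    std::vector<cv::Point2f> dst_pts = {     // same order as src_pts
        {50, 50}, {500, 80}, {480, 400}, {60, 380}};
    cv::Mat M = cv::getPerspectiveTransform(src_pts, dst_pts);
    cv::Mat dst;
    cv::warpPerspective(src, dst, M, cv::Size(600, 450));  // Size(myWidth, myHeight)
    return dst;
}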
Answer 3:
Here is a method I have used; it works.
perspectiveTransform(obj_corners, scene_corners, H);
int maxCols(0), maxRows(0);
for (int i = 0; i < scene_corners.size(); i++) {
    if (maxRows < scene_corners.at(i).y)
        maxRows = scene_corners.at(i).y;
    if (maxCols < scene_corners.at(i).x)
        maxCols = scene_corners.at(i).x;
}
I just find the maximum of the x and y coordinates respectively and use them in:
warpPerspective( tmp, transformedImage, homography, Size( maxCols, maxRows ) );
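For context, here is a fuller sketch of where that fragment would live (the obj_corners setup is my assumption; it presumes H maps the image into non-negative coordinates, since only the maxima are used):

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cmath>

cv::Mat warp_to_fit(const cv::Mat& tmp, const cv::Mat& H)
{
    // Project the four source corners through the homography.
    std::vector<cv::Point2f> obj_corners = {
        {0, 0}, {(float)tmp.cols, 0},
        {(float)tmp.cols, (float)tmp.rows}, {0, (float)tmp.rows}};
    std::vector<cv::Point2f> scene_corners;
    cv::perspectiveTransform(obj_corners, scene_corners, H);

    // Size the output canvas to the largest projected x and y.
    int maxCols = 0, maxRows = 0;
    for (size_t i = 0; i < scene_corners.size(); i++) {
        maxCols = std::max(maxCols, (int)std::ceil(scene_corners[i].x));
        maxRows = std::max(maxRows, (int)std::ceil(scene_corners[i].y));
    }

    cv::Mat transformedImage;
    cv::warpPerspective(tmp, transformedImage, H, cv::Size(maxCols, maxRows));
    return transformedImage;
}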
Answer 4:
Try the homography_warp function below:
void homography_warp(const cv::Mat& src, const cv::Mat& H, cv::Mat& dst);
src is the source image.
H is your homography.
dst is the warped image.
homography_warp adjusts your homography as described by Matt Freeman (https://stackoverflow.com/users/1060066/matt-freeman) in his answer https://stackoverflow.com/a/8229116/15485:
// Convert a vector of non-homogeneous 2D points to a vector of homogeneous 2D points.
void to_homogeneous(const std::vector<cv::Point2f>& non_homogeneous, std::vector<cv::Point3f>& homogeneous)
{
    homogeneous.resize(non_homogeneous.size());
    for (size_t i = 0; i < non_homogeneous.size(); i++) {
        homogeneous[i].x = non_homogeneous[i].x;
        homogeneous[i].y = non_homogeneous[i].y;
        homogeneous[i].z = 1.0;
    }
}

// Convert a vector of homogeneous 2D points to a vector of non-homogeneous 2D points.
void from_homogeneous(const std::vector<cv::Point3f>& homogeneous, std::vector<cv::Point2f>& non_homogeneous)
{
    non_homogeneous.resize(homogeneous.size());
    for (size_t i = 0; i < homogeneous.size(); i++) {
        non_homogeneous[i].x = homogeneous[i].x / homogeneous[i].z;
        non_homogeneous[i].y = homogeneous[i].y / homogeneous[i].z;
    }
}

// Transform a vector of 2D non-homogeneous points via a homography.
std::vector<cv::Point2f> transform_via_homography(const std::vector<cv::Point2f>& points, const cv::Matx33f& homography)
{
    std::vector<cv::Point3f> ph;
    to_homogeneous(points, ph);
    for (size_t i = 0; i < ph.size(); i++) {
        ph[i] = homography * ph[i];
    }
    std::vector<cv::Point2f> r;
    from_homogeneous(ph, r);
    return r;
}

// Find the bounding box of a vector of 2D non-homogeneous points.
cv::Rect_<float> bounding_box(const std::vector<cv::Point2f>& p)
{
    float x_min = std::min_element(p.begin(), p.end(), [](const cv::Point2f& lhs, const cv::Point2f& rhs) { return lhs.x < rhs.x; })->x;
    float x_max = std::max_element(p.begin(), p.end(), [](const cv::Point2f& lhs, const cv::Point2f& rhs) { return lhs.x < rhs.x; })->x;
    float y_min = std::min_element(p.begin(), p.end(), [](const cv::Point2f& lhs, const cv::Point2f& rhs) { return lhs.y < rhs.y; })->y;
    float y_max = std::max_element(p.begin(), p.end(), [](const cv::Point2f& lhs, const cv::Point2f& rhs) { return lhs.y < rhs.y; })->y;
    return cv::Rect_<float>(x_min, y_min, x_max - x_min, y_max - y_min);
}

// Warp the image src into the image dst through the homography H.
// The resulting dst image contains the entire warped image; this
// behaviour is the same as Octave's imperspectivewarp (in the 'image'
// package) when the argument bbox is equal to 'loose'.
// See http://octave.sourceforge.net/image/function/imperspectivewarp.html
void homography_warp(const cv::Mat& src, const cv::Mat& H, cv::Mat& dst)
{
    std::vector<cv::Point2f> corners;
    corners.push_back(cv::Point2f(0, 0));
    corners.push_back(cv::Point2f(src.cols, 0));
    corners.push_back(cv::Point2f(0, src.rows));
    corners.push_back(cv::Point2f(src.cols, src.rows));

    std::vector<cv::Point2f> projected = transform_via_homography(corners, H);
    cv::Rect_<float> bb = bounding_box(projected);

    // Pre-multiply H by a translation that moves the bounding box to the origin.
    cv::Mat_<double> translation = (cv::Mat_<double>(3, 3) <<
        1, 0, -bb.tl().x,
        0, 1, -bb.tl().y,
        0, 0, 1);

    cv::warpPerspective(src, dst, translation * H, bb.size());
}
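Usage might look like the following sketch (the file name and the homography values are placeholders; H is built as CV_64F so that the translation * H product above type-checks):

int main()
{
    cv::Mat src = cv::imread("input.png");   // placeholder file name
    // Placeholder homography: identity plus a small perspective term.
    cv::Mat H = (cv::Mat_<double>(3, 3) <<
        1, 0, 0,
        0, 1, 0,
        0.0005, 0, 1);
    cv::Mat dst;
    homography_warp(src, H, dst);            // dst contains the whole warped image
    cv::imwrite("warped.png", dst);
    return 0;
}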
Answer 5:
warpPerspective() works fine. No need to rewrite it. You are probably using it incorrectly.
Remember the following tips:
- The (0,0) pixel is not in the center but in the upper-left corner. So if you magnify the image 2x you will lose the lower and right parts, not the borders (as in MATLAB).
- If you warp the image twice, it is better to multiply the two transformations and apply the function once.
- I think it works only on char/int matrices and not on float/double.
- When you have a transformation, zoom/skew/rotation/perspective are applied first and the translation last. So if part of the image is missing, just change the translation (the two upper rows of the last column) in the matrix; a short sketch follows this list.
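A minimal sketch of the second and fourth tips, assuming CV_64F matrices A and B (all values are illustrative):

#include <opencv2/opencv.hpp>

// Sketch: combine two transforms into one warp, then nudge the result
// back onto the canvas by editing the translation entries (last column,
// first two rows). The shift is exact when the bottom row is [0, 0, 1].
cv::Mat combined_warp(const cv::Mat& src, const cv::Mat& A, const cv::Mat& B)
{
    cv::Mat M = A * B;         // applies B first, then A -- one warp instead of two
    M.at<double>(0, 2) += 50;  // illustrative: shift the result 50 px right
    M.at<double>(1, 2) += 30;  // illustrative: shift the result 30 px down
    cv::Mat dst;
    cv::warpPerspective(src, dst, M, src.size());
    return dst;
}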
Answer 6:
This is my solution. Since the third parameter of warpPerspective() is a transformation matrix, we can make a transformation matrix which first moves the image backward, then rotates it, and finally moves it forward.
In my case, I have an image with a height of 160 px and a width of 160 px, and I want to rotate it around [80,80] instead of around [0,0]:
- first, move the image backward (that is T1);
- then rotate the image (that is R);
- finally, move the image forward (that is T2).
void rotateImage(Mat &src_img, int degree)
{
    float radian = (degree / 180.0) * M_PI;

    // R: rotation around the origin
    Mat R(3, 3, CV_32FC1, Scalar(0));
    R.at<float>(0, 0) = cos(radian); R.at<float>(0, 1) = -sin(radian);
    R.at<float>(1, 0) = sin(radian); R.at<float>(1, 1) = cos(radian);
    R.at<float>(2, 2) = 1;

    // T1: move the image backward (center to origin)
    Mat T1(3, 3, CV_32FC1, Scalar(0));
    T1.at<float>(0, 2) = -80;
    T1.at<float>(1, 2) = -80;
    T1.at<float>(0, 0) = 1;
    T1.at<float>(1, 1) = 1;
    T1.at<float>(2, 2) = 1;

    // T2: move the image forward (origin back to center)
    Mat T2(3, 3, CV_32FC1, Scalar(0));
    T2.at<float>(0, 2) = 80;
    T2.at<float>(1, 2) = 80;
    T2.at<float>(0, 0) = 1;
    T2.at<float>(1, 1) = 1;
    T2.at<float>(2, 2) = 1;

    // Compose: backward (T1), rotate (R), forward (T2)
    Mat H = T2 * R * T1;
    std::cerr << H << std::endl;   // debug print of the combined transform

    Mat dst;
    warpPerspective(src_img, dst, H, src_img.size());
    src_img = dst;
}
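A quick usage sketch (the file name is a placeholder; assumes `using namespace cv` as in the snippet above):

int main()
{
    Mat img = imread("input.png");   // placeholder: e.g. a 160x160 image
    rotateImage(img, 45);            // rotate 45 degrees around (80, 80)
    imwrite("rotated.png", img);
    return 0;
}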
Answer 7:
Here is an opencv-python solution to your problem; I put it on GitHub: https://github.com/Sanster/notes/blob/master/opencv/warpPerspective.md
The key point is, as user3094631 said, to get two translation matrices (T1, T2) and apply them to the rotation matrix M: T2*M*T1.
In the code I give, T1 comes from the center point of the original image, and T2 comes from the top-left point of the transformed bounding box. The transformed bounding box is computed from the original corner points:
height = img.shape[0]
width = img.shape[1]

#..get T1
#..get M

pnts = np.asarray([
    [0, 0], [width, 0], [width, height], [0, height]
], dtype=np.float32)
pnts = np.array([pnts])
dst_pnts = cv2.perspectiveTransform(pnts, M * T1)[0]
dst_pnts = np.asarray(dst_pnts, dtype=np.float32)
bbox = cv2.boundingRect(dst_pnts)
T2 = np.matrix([[1., 0., 0 - bbox[0]],
                [0., 1., 0 - bbox[1]],
                [0., 0., 1.]])