For some reason, whenever I use OpenCV's warpPerspective() function, the final warped image does not contain everything in the original image: the left part of the image seems to be cut off.
This is my solution.

Since the third parameter of warpPerspective() is a transformation matrix, we can build a matrix that first moves the image backward, then rotates it, and finally moves it forward again.

In my case, I have an image 160 px high and 160 px wide, and I want to rotate it around [80,80] instead of around [0,0]:

first, move the image backward (that is T1);
then rotate the image (that is R);
finally, move the image forward (that is T2).
#include <opencv2/imgproc.hpp>
#include <cmath>
#include <iostream>

void rotateImage(cv::Mat &src_img, int degree)
{
    float radian = (degree / 180.0f) * (float)M_PI;

    // R: rotation about the origin
    cv::Mat R(3, 3, CV_32FC1, cv::Scalar(0));
    R.at<float>(0,0) = cos(radian);  R.at<float>(0,1) = -sin(radian);
    R.at<float>(1,0) = sin(radian);  R.at<float>(1,1) =  cos(radian);
    R.at<float>(2,2) = 1;

    // T1: move the rotation centre (80,80) to the origin
    cv::Mat T1(3, 3, CV_32FC1, cv::Scalar(0));
    T1.at<float>(0,2) = -80;
    T1.at<float>(1,2) = -80;
    T1.at<float>(0,0) = T1.at<float>(1,1) = T1.at<float>(2,2) = 1;

    // T2: move the image forward again after rotating
    cv::Mat T2(3, 3, CV_32FC1, cv::Scalar(0));
    T2.at<float>(0,2) = 80;
    T2.at<float>(1,2) = 80;
    T2.at<float>(0,0) = T2.at<float>(1,1) = T2.at<float>(2,2) = 1;

    std::cerr << T1 << std::endl;
    std::cerr << R  << std::endl;
    std::cerr << T2 << std::endl;
    std::cerr << T2 * R * T1 << "\n" << std::endl;

    // Matrices compose right to left: T1 applies first, then R, then T2.
    cv::warpPerspective(src_img, src_img, T2 * R * T1, src_img.size(), cv::INTER_LINEAR);
}
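A minimal usage sketch (the file name is a placeholder, and cv::imread needs <opencv2/imgcodecs.hpp>):

cv::Mat img = cv::imread("input.png");  // assumed to be a 160x160 image
rotateImage(img, 30);                   // rotate 30 degrees about (80,80)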
warpPerspective() works fine; there is no need to rewrite it. You are probably using it incorrectly.
Remember the following tips:
Matt's answer is a good start, and he is correct in saying you need to multiply your homography by
[ 1 , 0 , x_offset]
[ 0 , 1 , y_offset]
[ 0 , 0 , 1 ]
But he does not specify what x_offset and y_offset are. Other answers have said just take the perspective transform, but that is not correct. You want to take the INVERSE perspective transform.
Just because the point (0,0) transforms into, say, (-10,-10) does not mean that shifting the image by (10,10) will result in a non-cropped image. This is because the point (10,10) does not necessarily map into (0,0).
What you want to do is find out what point would map into (0,0), and shift the image by that much. To do that, take the inverse of the homography (cv2.invert) and apply perspectiveTransform. Knowing where (0,0) lands under the forward transform does not imply knowing which point lands on (0,0): you need to apply the reverse transform to find the correct points.
This will get the correct x_offset and y_offset to align your top-left point. From there, to find the correct bounding box and fit the entire image perfectly, you need to figure out the skew (how much the image slants left or up after your normal, non-inverse transformation) and add that amount to your x_offset and y_offset as well.
EDIT: This is all theory. Images are a few pixels off in my tests; I'm not sure why.
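A minimal C++ sketch of that recipe (the answer uses Python's cv2.invert; the function name offsetForOrigin and the assumption that H maps source to destination coordinates are mine, not the answer's):

#include <opencv2/core.hpp>
#include <vector>

// Find the point that the forward homography H maps onto (0,0)
// by pushing (0,0) through the inverse transform.
cv::Point2f offsetForOrigin(const cv::Mat &H)
{
    cv::Mat Hinv;
    cv::invert(H, Hinv);  // C++ equivalent of cv2.invert

    std::vector<cv::Point2f> origin{cv::Point2f(0.f, 0.f)}, mapped;
    cv::perspectiveTransform(origin, mapped, Hinv);
    return mapped[0];  // shift the image by this much
}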
The secret comes in two parts: the transform matrix (homography), and the resulting image size.

1. Calculate a correct transform by using getPerspectiveTransform(). Take 4 points from the original image, calculate their correct position in the destination, put them in two vectors in the same order, and use them to compute the perspective transform matrix.

2. Make sure the destination image size (third parameter for warpPerspective()) is exactly what you want. Define it as Size(myWidth, myHeight).
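A hedged sketch of both steps; src is assumed to be a loaded 160x160 image, and the destination coordinates and the 320x260 output size are made-up values:

#include <opencv2/imgproc.hpp>

// Four corners of the 160x160 source, and where each should land
// in the destination, in the same order.
cv::Point2f srcPts[4] = {{0, 0}, {159, 0}, {159, 159}, {0, 159}};
cv::Point2f dstPts[4] = {{20, 10}, {300, 30}, {280, 250}, {10, 220}};

cv::Mat H = cv::getPerspectiveTransform(srcPts, dstPts);

cv::Mat dst;
cv::warpPerspective(src, dst, H, cv::Size(320, 260));  // Size(myWidth, myHeight)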
The problem occurs because the homography maps part of the image to negative x,y values, which are outside the image area and so cannot be plotted. What we wish to do is offset the warped output by some number of pixels to 'shunt' the entire warped image into positive coordinates (and hence inside the image area).
Homographies can be combined using matrix multiplication (which is why they are so powerful). If A and B are homographies, then AB represents the homography which applies B first, and then A.
Because of this, all we need to do to offset the output is create the homography matrix for a translation by some offset, and then pre-multiply that by our original homography matrix.
A 2D homography matrix looks like this :
[R11,R12,T1]
[R21,R22,T2]
[ P , P , 1]
where R represents a rotation matrix, T represents a translation, and P represents a perspective warp. And so a purely translational homography looks like this:
[ 1 , 0 , x_offset]
[ 0 , 1 , y_offset]
[ 0 , 0 , 1 ]
So just premultiply your homography by a matrix similar to the above, and your output image will be offset.
(Make sure you use matrix multiplication, not element wise multiplication!)
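A short sketch of that pre-multiplication (H, x_offset, and y_offset are placeholders for your own homography and offsets):

#include <opencv2/core.hpp>

// T * H applies H first, then the translation, per the composition
// rule above. operator* on cv::Mat is true matrix multiplication,
// not element-wise (that would be Mat::mul).
cv::Mat offsetHomography(const cv::Mat &H, double x_offset, double y_offset)
{
    cv::Mat T = (cv::Mat_<double>(3, 3) <<
                 1, 0, x_offset,
                 0, 1, y_offset,
                 0, 0, 1);
    return T * H;
}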
Here is one method I have used; it works.
// Assuming obj_corners holds the four corners of the input image tmp
// and H is the homography that maps them into the scene.
std::vector<cv::Point2f> scene_corners(4);
perspectiveTransform(obj_corners, scene_corners, H);

// Track the largest warped x (columns) and y (rows).
int maxCols = 0, maxRows = 0;
for (size_t i = 0; i < scene_corners.size(); i++)
{
    if (maxRows < scene_corners.at(i).y)
        maxRows = (int)scene_corners.at(i).y;
    if (maxCols < scene_corners.at(i).x)
        maxCols = (int)scene_corners.at(i).x;
}
I just find the maximum of the x and y coordinates respectively and use them as the destination size:

warpPerspective( tmp, transformedImage, H, Size( maxCols, maxRows ) );
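If some warped corners also land at negative coordinates, this can be combined with the translation trick from the earlier answer. A hypothetical continuation of the snippet above (minCols/minRows are my names; std::min needs <algorithm>):

// Find the smallest warped x and y as well (they may be negative).
float minCols = 0.f, minRows = 0.f;
for (size_t i = 0; i < scene_corners.size(); i++)
{
    minRows = std::min(minRows, scene_corners.at(i).y);
    minCols = std::min(minCols, scene_corners.at(i).x);
}

// Shift everything into positive coordinates and enlarge the canvas to match.
Mat T = (Mat_<double>(3, 3) <<
         1, 0, -minCols,
         0, 1, -minRows,
         0, 0, 1);
warpPerspective(tmp, transformedImage, T * H,
                Size(maxCols - (int)minCols, maxRows - (int)minRows));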