Transform a frame to be as if it was taken from above using OpenCV


If you have the camera intrinsics matrix A (3x3) and there is no translation between the camera poses, all you need to find the homography H (3x3) is to construct the rotation matrix R (3x3) from the Euler angles and apply the following formula:

H = A * R * A.inv()

where .inv() denotes matrix inversion.
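As an illustration, here is a minimal sketch of that formula in OpenCV C++ (a sketch only: the focal length f, principal point cx, cy and the Euler angles are placeholders for your own calibration and pose, and the Z*Y*X rotation order is an assumption):

// Build H = A * R * A.inv() for a pure rotation between camera poses
Mat homographyFromRotation(double f, double cx, double cy,
                           double roll, double pitch, double yaw)
{
    // Intrinsics matrix A (3x3)
    Mat A = (Mat_<double>(3, 3) <<
             f, 0, cx,
             0, f, cy,
             0, 0, 1);
    // Rotation matrices around the X, Y and Z axes
    Mat RX = (Mat_<double>(3, 3) <<
              1, 0,         0,
              0, cos(roll), -sin(roll),
              0, sin(roll), cos(roll));
    Mat RY = (Mat_<double>(3, 3) <<
              cos(pitch),  0, sin(pitch),
              0,           1, 0,
              -sin(pitch), 0, cos(pitch));
    Mat RZ = (Mat_<double>(3, 3) <<
              cos(yaw), -sin(yaw), 0,
              sin(yaw),  cos(yaw), 0,
              0,         0,        1);
    Mat R = RZ * RY * RX;
    // Homography mapping the original view to the rotated view
    return A * R * A.inv();
}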

UPDATED:

If you want to keep the image centered, add a translation as follows: first find the warped position of the image center, then translate that point back to the center:

|dx|       | 320 / 2 |
|dy| = H * | 240 / 2 |
|1 |       | 1       |

    | 1 0 (320/2-dx) | 
W = | 0 1 (240/2-dy) | * H
    | 0 0 1          |

W is your final transformation.
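In code, the centering step might look like this (a sketch assuming a 320x240 image and the H from above; note the explicit division by the homogeneous coordinate, which the matrices above leave implicit):

// Warp the image center with H to find where it lands
Mat center = (Mat_<double>(3, 1) << 320.0 / 2, 240.0 / 2, 1.0);
Mat warped = H * center;
double dx = warped.at<double>(0) / warped.at<double>(2);
double dy = warped.at<double>(1) / warped.at<double>(2);

// Translate the warped center back to the original center
Mat T = (Mat_<double>(3, 3) <<
         1, 0, 320.0 / 2 - dx,
         0, 1, 240.0 / 2 - dy,
         0, 0, 1);
Mat W = T * H;   // final transformation, ready for warpPerspective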

I came to the conclusion that I had to go through 4x4 transformation matrices in order to get what I wanted. To find the right homography matrix we need:

  1. 3D Rotation matrix R.
  2. Camera calibration intrinsics matrix A2 (3x4 projection) and its inverse A1 (4x3 back-projection).
  3. Translation matrix T.

We can compose the 3D rotation matrix R by multiplying the rotation matrices around axes X,Y,Z:

Mat R = RZ * RY * RX  

To apply the transformation to the image and keep it centered, we need to add a translation given by a 4x4 matrix with dx = 0, dy = 0, dz = 1:

Mat T = (Mat_<double>(4, 4) <<
         1, 0, 0, dx,
         0, 1, 0, dy,
         0, 0, 1, dz,
         0, 0, 0, 1);

Given all these matrices we can compose our homography matrix H:

Mat H = A2 * (T * (R * A1))

With this homography matrix we can then use warpPerspective function from OpenCV to apply the transformation.

warpPerspective(input, output, H, input.size(), INTER_LANCZOS4);

For completeness, here is the full code of this solution:

void rotateImage(const Mat &input, UMat &output, double roll, double pitch, double yaw,
                 double dx, double dy, double dz, double f, double cx, double cy)
{
    // Camera Calibration Intrinsics Matrix
    Mat A2 = (Mat_<double>(3,4) <<
              f, 0, cx, 0,
              0, f, cy, 0,
              0, 0, 1,  0);
    // Inverted Camera Calibration Intrinsics Matrix
    Mat A1 = (Mat_<double>(4,3) <<
              1/f, 0,   -cx/f,
              0,   1/f, -cy/f,
              0,   0,   0,
              0,   0,   1);
    // Rotation matrices around the X, Y, and Z axis
    Mat RX = (Mat_<double>(4, 4) <<
              1, 0,         0,          0,
              0, cos(roll), -sin(roll), 0,
              0, sin(roll), cos(roll),  0,
              0, 0,         0,          1);
    Mat RY = (Mat_<double>(4, 4) <<
              cos(pitch),  0, sin(pitch), 0,
              0,           1, 0,          0,
              -sin(pitch), 0, cos(pitch), 0,
              0,           0, 0,          1);
    Mat RZ = (Mat_<double>(4, 4) <<
              cos(yaw), -sin(yaw), 0, 0,
              sin(yaw),  cos(yaw), 0, 0,
              0,          0,       1, 0,
              0,          0,       0, 1);
    // Translation matrix
    Mat T = (Mat_<double>(4, 4) <<
             1, 0, 0, dx,
             0, 1, 0, dy,
             0, 0, 1, dz,
             0, 0, 0, 1);
    // Compose rotation matrix with (RX, RY, RZ)
    Mat R = RZ * RY * RX;
    // Final transformation matrix
    Mat H = A2 * (T * (R * A1));
    // Apply matrix transformation
    warpPerspective(input, output, H, input.size(), INTER_LANCZOS4);
}
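Here is a hedged example of how rotateImage might be called; the 60-degree field of view used to derive the focal length and the 30-degree pitch are assumed example values, and the angles are converted from degrees to the radians that cos/sin expect:

Mat input = imread("frame.png");    // any test frame
UMat output;

double w = input.cols, h = input.rows;
double fovDeg = 60.0;                                       // assumed horizontal field of view
double f = 0.5 * w / tan(0.5 * fovDeg * CV_PI / 180.0);     // focal length in pixels

rotateImage(input, output,
            0.0,                       // roll
            30.0 * CV_PI / 180.0,      // pitch: tilt by 30 degrees (example value)
            0.0,                       // yaw
            0.0, 0.0, 1.0,             // dx, dy, dz (dz = 1 keeps the original scale)
            f, w / 2, h / 2);          // focal length and principal point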

Result: (output image omitted)

This is how I do it in Eigen, using four corners:

// Desired four corners
std::vector<Eigen::Vector2f> Normalized_Reference_Pattern = { Eigen::Vector2f(0, 0), Eigen::Vector2f(0, 2), Eigen::Vector2f(2, 0), Eigen::Vector2f(2, 2) }; 

// Current four points
std::vector<Eigen::Vector2f> CurrentCentroids = { /* whatever four corners you want, in the same relative sequence as above */ };

// Transform for current to desired
auto Master_Transform = get_perspective_transform(CurrentCentroids, Normalized_Reference_Pattern);

// Ability to use the same transformation for other points (besides the corners) in the image;
// the 3x3 homography is applied to a homogeneous point and then dehomogenized
Eigen::Vector3f warped_point = Master_Transform * Eigen::Vector3f(currentX, currentY, 1.0f);
Eigen::Vector2f Master_Transform_Centroid = warped_point.hnormalized();

And here is my black box:

Eigen::Matrix3f get_perspective_transform(const std::vector<Eigen::Vector2f>& points_from,const std::vector<Eigen::Vector2f>& points_to)
{
    cv::Mat transform_cv = cv::getPerspectiveTransform(
        convert::to_cv(points_from), 
        convert::to_cv(points_to));

    Eigen::Matrix3f transform_eigen;
    cv::cv2eigen(transform_cv, transform_eigen);
    return transform_eigen;
}
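The convert::to_cv helper is not part of the answer; here is a minimal sketch of what it might look like, assuming it simply turns a vector of Eigen::Vector2f into the std::vector<cv::Point2f> that cv::getPerspectiveTransform accepts (note that cv::cv2eigen also requires including <opencv2/core/eigen.hpp>):

namespace convert {
    // Hypothetical helper: Eigen 2D points -> OpenCV 2D points
    inline std::vector<cv::Point2f> to_cv(const std::vector<Eigen::Vector2f>& pts)
    {
        std::vector<cv::Point2f> out;
        out.reserve(pts.size());
        for (const auto& p : pts)
            out.emplace_back(p.x(), p.y());
        return out;
    }
}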