Using OpenCV solvePnP for Augmented Reality in OpenGL

Submitted by 情到浓时终转凉 on 2019-12-10 21:42:42

Question


I'm trying to build an Augmented Reality application on Android using BoofCV (an OpenCV alternative for Java) and OpenGL ES 2.0. I have a marker from which I can extract image points, and I get a "world to cam" transformation from BoofCV's solvePnP function. I want to draw the marker in 3D using OpenGL. Here's what I have so far:

On every frame of the camera, I call solvePnP

Se3_F64 worldToCam = MathUtils.worldToCam(__qrWorldPoints, imagePoints);
mGLAssetSurfaceView.setWorldToCam(worldToCam);

This is what I have defined as the world points

static float qrSideLength = 79.365f; // mm

private static final double[][] __qrWorldPoints = {
        {qrSideLength * -0.5, qrSideLength * 0.5, 0},
        {qrSideLength * -0.5, qrSideLength * -0.5, 0},
        {qrSideLength * 0.5, qrSideLength * -0.5, 0},
        {qrSideLength * 0.5, qrSideLength * 0.5, 0}
};

I'm feeding it a square centered at the origin, with its side length in millimeters.

I can confirm that the rotation vector and translation vector I'm getting back from solvePnP are reasonable, so I don't know if there's a problem here.

I pass the result from solvePnP into my renderer

public void setWorldToCam(Se3_F64 worldToCam) {

    DenseMatrix64F _R = worldToCam.R;
    Vector3D_F64 _T = worldToCam.T;

    // Concatenate the rotation matrix and translation vector into
    // a view matrix
    double[][] __view = {
        {_R.get(0, 0), _R.get(0, 1), _R.get(0, 2), _T.getX()},
        {_R.get(1, 0), _R.get(1, 1), _R.get(1, 2), _T.getY()},
        {_R.get(2, 0), _R.get(2, 1), _R.get(2, 2), _T.getZ()},
        {0, 0, 0, 1}
    };

    DenseMatrix64F _view = new DenseMatrix64F(__view);

    // Matrix to convert from BoofCV (OpenCV) coordinate system to OpenGL coordinate system
    double[][] __cv_to_gl = {
            {1, 0, 0, 0},
            {0, -1, 0, 0},
            {0, 0, -1, 0},
            {0, 0, 0, 1}
    };

    DenseMatrix64F _cv_to_gl = new DenseMatrix64F(__cv_to_gl);

    // Multiply the View Matrix by the BoofCV to OpenGL matrix to apply the coordinate transform
    DenseMatrix64F view = new SimpleMatrix(__view).mult(new SimpleMatrix(__cv_to_gl)).getMatrix();

    // BoofCV stores matrices in row major order, but OpenGL likes column major order
    // I transpose the view matrix and get a flattened list of 16,
    // Then I convert them to floating point
    double[] viewd = new SimpleMatrix(view).transpose().getMatrix().getData();

    for (int i = 0; i < mViewMatrix.length; i++) {
        mViewMatrix[i] = (float) viewd[i];
    }
}
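For reference, the OpenCV-to-OpenGL axis flip is usually applied on the left of the view matrix (viewGl = cvToGl · view), so it flips the camera's axes rather than the world's. A minimal self-contained sketch with plain row-major 4×4 arrays (no BoofCV types; the translation values are made up for illustration):

```java
public class CvToGl {
    // Multiply two 4x4 row-major matrices: out = a * b
    static double[][] mul(double[][] a, double[][] b) {
        double[][] out = new double[4][4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                for (int k = 0; k < 4; k++)
                    out[i][j] += a[i][k] * b[k][j];
        return out;
    }

    public static void main(String[] args) {
        // Identity rotation, translation (10, 20, 30) in OpenCV camera coordinates
        double[][] view = {
            {1, 0, 0, 10},
            {0, 1, 0, 20},
            {0, 0, 1, 30},
            {0, 0, 0,  1}
        };
        // Flip Y and Z: OpenCV looks down +Z with +Y down, OpenGL looks down -Z with +Y up
        double[][] cvToGl = {
            {1,  0,  0, 0},
            {0, -1,  0, 0},
            {0,  0, -1, 0},
            {0,  0,  0, 1}
        };
        double[][] glView = mul(cvToGl, view); // left-multiply: flips the camera axes
        System.out.println(glView[1][3] + " " + glView[2][3]); // -20.0 -30.0
    }
}
```

Note that the code above right-multiplies (view · cvToGl) instead, which flips the world axes; whether that is the intended effect is worth double-checking.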

I'm also feeding the camera intrinsics from my camera calibration into the OpenGL projection matrix

@Override
public void onSurfaceChanged(GL10 gl, int width, int height) {

    // this projection matrix is applied to object coordinates
    // in the onDrawFrame() method

    double fx = MathUtils.fx;
    double fy = MathUtils.fy;
    float fovy = (float) (2 * Math.atan(0.5 * height / fy) * 180 / Math.PI);
    float aspect = (float) ((width * fy) / (height * fx));

    // be careful with this, it could explain why you don't see certain objects
    float near = 0.1f;
    float far = 100.0f;

    Matrix.perspectiveM(mProjectionMatrix, 0, fovy, aspect, near, far);

    GLES20.glViewport(0, 0, width, height);

}
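As a sanity check on the fovy/aspect formulas above, here is a standalone sketch with made-up intrinsics (the 640×480 image size and fx = fy = 500 px are assumptions for illustration, not values from the actual calibration):

```java
public class IntrinsicsToPerspective {
    // Vertical field of view (degrees) from focal length fy (pixels) and image height
    static double fovyDeg(double fy, int height) {
        return 2.0 * Math.atan(0.5 * height / fy) * 180.0 / Math.PI;
    }

    // Aspect ratio corrected for non-square pixels (fx != fy)
    static double aspect(double fx, double fy, int width, int height) {
        return (width * fy) / (height * fx);
    }

    public static void main(String[] args) {
        // Hypothetical calibration: 640x480 image, fx = fy = 500 px
        double fovy = fovyDeg(500, 480);
        // With fx == fy this reduces to width / height = 1.333...
        double aspect = aspect(500, 500, 640, 480);
        System.out.printf("fovy=%.2f deg, aspect=%.3f%n", fovy, aspect);
    }
}
```

One caveat: this keeps the field of view but discards the principal point (cx, cy); if the calibrated principal point is far from the image center, perspectiveM cannot reproduce it and the overlay will be offset.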

The square I'm drawing is the one defined in this Google example.

@Override
public void onDrawFrame(GL10 gl) {

    // redraw background color
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);

    // Set the camera position (View matrix)
    // Matrix.setLookAtM(mViewMatrix, 0, 0, 0, -3, 0f, 0f, 0f, 0f, 1.0f, 0.0f);


    // Combine the rotation matrix with the projection and camera view
    // Note that the mMVPMatrix factor *must be the first* in order
    // for matrix multiplication product to be correct

    // Calculate the projection and view transformation
    Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mViewMatrix, 0);

    // Draw shape
    mSquare.draw(mMVPMatrix);
}

I believe the problem has to do with the fact that this definition of a square in Google's example code doesn't take the real-world side length into account. I understand that OpenGL's normalized coordinate system has the corners (-1, 1), (-1, -1), (1, -1), (1, 1), which doesn't correspond to the millimeter object points I have defined for use in BoofCV, even though the winding order matches.

static float squareCoords[] = {
        -0.5f,  0.5f, 0.0f,   // top left
        -0.5f, -0.5f, 0.0f,   // bottom left
        0.5f, -0.5f, 0.0f,   // bottom right
        0.5f,  0.5f, 0.0f }; // top right
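One plausible fix, assuming the view translation is in millimeters: scale the unit-square vertices by the real side length so the model is in the same units as the object points given to solvePnP. A sketch:

```java
public class ScaleToWorld {
    static final float QR_SIDE_MM = 79.365f; // same side length used for the world points

    // Scale unit-square vertices (x, y, z triplets) to millimeter units,
    // matching the scale of the solvePnP object points
    static float[] scaleCoords(float[] coords, float sideLength) {
        float[] out = new float[coords.length];
        for (int i = 0; i < coords.length; i++) {
            out[i] = coords[i] * sideLength;
        }
        return out;
    }

    public static void main(String[] args) {
        float[] squareCoords = {
            -0.5f,  0.5f, 0.0f,   // top left
            -0.5f, -0.5f, 0.0f,   // bottom left
             0.5f, -0.5f, 0.0f,   // bottom right
             0.5f,  0.5f, 0.0f }; // top right
        float[] worldCoords = scaleCoords(squareCoords, QR_SIDE_MM);
        System.out.println(worldCoords[0]); // -39.6825 (half the side length, in mm)
    }
}
```

Equivalently, keep the unit vertices and prepend a model matrix built with Matrix.scaleM(mModelMatrix, 0, QR_SIDE_MM, QR_SIDE_MM, QR_SIDE_MM) to the MVP chain. In either case the clip planes must use the same units: with near = 0.1 and far = 100.0 interpreted as millimeters, a marker a few hundred millimeters from the camera would be clipped by the far plane.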

Source: https://stackoverflow.com/questions/44313684/using-opencv-solvepnp-for-augmented-reality-in-opengl
