Rotating a pinhole camera in 3D

Submitted by 自作多情 on 2019-12-12 03:24:12

Question


I am trying to rotate a pinhole camera in 3D space. I have previously raytraced a room. As good practice, I first did the maths and then tried to program it in C++.

// Camera position
vec3 cameraPos(0, 0, -19);


// Rotate camera
float yaw = 0.0f; // accumulated yaw angle in radians
vec3 c1(cos(yaw), 0, sin(yaw));
vec3 c2(0, 1, 0);
vec3 c3(-sin(yaw), 0, cos(yaw));
glm::mat3 R(c1, c2, c3);

What I have done to rotate the camera is this:

if (keystate[SDLK_LEFT])
{
    //cameraPos.x -= translation;
    if (yaw > 0)
    {   
        yaw = 0.01;
    }
    cout << yaw << endl;
    cameraPos = R * cameraPos;
    cout << "LEFT" << endl;
}
if (keystate[SDLK_RIGHT])
{
    //cameraPos.x += translation;
    if (yaw > 0)
    {
        yaw = -0.01;
    }
    cout << yaw << endl;
    cameraPos = R * cameraPos;
    cout << "RIGHT" << endl;
}

I have multiplied the rotation matrix R by the camera position vector. What happens now is that the room moves only to the left, no matter which key I press.

The tutorial I am following says:

If the camera is rotated by the matrix R, then the vectors representing the right (x-axis), down (y-axis) and forward (z-axis) directions can be retrieved as:

vec3 right(R[0][0],R[0][1],R[0][2]);
vec3 down(R[1][0],R[1][1],R[2][2]);
vec3 right(R[2][0],R[2][1],R[2][2]);

To model a rotating camera you need to use these directions both when you move the camera and when you cast rays.

I don't understand how I am supposed to use the above information.

Any help or references appreciated.


Answer 1:


You don't seem to be updating your R matrix after changing the yaw. This means that every time you do cameraPos = R * cameraPos, you rotate the position vector a little further in the same direction.

A better way to do this is to keep cameraPos separate, rebuild R every frame from the current yaw, and store the rotated position in another vector.

Something like this:

// Camera position
vec3 cameraPos(0, 0, -19);
vec3 trueCameraPos;
float yaw = 0.0f; // accumulated yaw angle in radians

if (keystate[SDLK_LEFT])
{
    //cameraPos.x -= translation;
    yaw += 0.01f; // turn left
    cout << yaw << endl;
    cout << "LEFT" << endl;
}
if (keystate[SDLK_RIGHT])
{
    //cameraPos.x += translation;
    yaw -= 0.01f; // turn right
    cout << yaw << endl;
    cout << "RIGHT" << endl;
}

// Rebuild the rotation matrix from the current yaw every frame
vec3 c1(cos(yaw), 0, sin(yaw));
vec3 c2(0, 1, 0);
vec3 c3(-sin(yaw), 0, cos(yaw));
glm::mat3 R(c1, c2, c3);

trueCameraPos = R * cameraPos;

As for the camera definitions, the camera needs three vectors to define its orientation. If you rotate the camera, its orientation rotates too; otherwise you would just move the camera and it would always be looking in one direction.

The quoted definition is incorrect, since there should be three perpendicular vectors, usually right, up (or down) and forward. As written, there are two right vectors (down is just the opposite of up), so the last one should be the forward vector.

These vectors define the directions used in the raytracer. Forward is the direction the rays are traced in; up and right define the per-pixel displacement directions in the image plane. You are most likely using these already in your tracing code.



Source: https://stackoverflow.com/questions/36604734/rotating-a-pinhole-camera-in-3d
