Camera lens distortion in OpenGL

Submitted by 孤者浪人 on 2020-06-26 05:52:23

Question


I'm trying to simulate a lens distortion effect for my SLAM project. A scanned color 3D point cloud is already given and loaded in OpenGL. What I'm trying to do is render a 2D scene at a given pose and do some visual odometry between the real image from a fisheye camera and the rendered image. As the camera has severe lens distortion, the distortion should be considered in the rendering stage too.

The problem is that I have no idea where to put the lens distortion. Shaders?

I've found some open code that puts the distortion in the geometry shader. But I guess its distortion model is different from the lens distortion models used in the computer vision community, where lens distortion is usually applied on the projected image plane.

This one is quite similar to my work, but they didn't use a distortion model.

Does anyone have a good idea?

I just found another implementation. Their code implements the distortion in both the fragment shader and the geometry shader, but the fragment-shader version can be applied in my situation. Thus, I guess the following will work:

# vertex shader (pseudocode)
p' = T.view x T.model x p          // model transform first, then view (column-vector convention)
p_f = FisheyeProjection(p')        // custom fisheye projection
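
To make that concrete, here is a minimal sketch of what such a vertex shader might look like, assuming an equidistant fisheye model (r_img = f * theta); FisheyeProjection and the focal/near/far uniforms are placeholder names of mine, not taken from any of the linked code:

#version 330 core
layout (location = 0) in vec3 position;
uniform mat4 model_matrix;
uniform mat4 view_matrix;
uniform float focal;  // fisheye focal length, in normalized image units
uniform float near;   // near plane, only used for a crude depth value
uniform float far;    // far plane

vec4 FisheyeProjection(vec4 p) // p is in view space, camera looks down -z
{
  float r = length(p.xy);
  float theta = atan(r, -p.z);                    // angle from the optical axis
  vec2 xy = (r > 0.0) ? p.xy / r * focal * theta  // equidistant model: r_img = f * theta
                      : vec2(0.0);
  float depth = 2.0 * (-p.z - near) / (far - near) - 1.0; // crude linear depth in [-1, 1]
  return vec4(xy, depth, 1.0);
}

void main()
{
  vec4 view_pos = view_matrix * model_matrix * vec4(position, 1.0);
  gl_Position = FisheyeProjection(view_pos);
}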

Answer 1:


Lens distortion usually turns straight lines into curves. When rasterizing lines and triangles with OpenGL, however, the primitives' edges stay straight no matter how you transform the vertices.

If your models have fine enough tessellation, then incorporating the distortion into the vertex transformation is viable. It also works if you're rendering only points.

However, if your aim is general applicability, you have to deal with the straight-edged primitives somehow. One way is to use a geometry shader to further subdivide incoming models (sketched below); another is to use a tessellation shader.
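
For illustration, a minimal geometry shader doing one 1-to-4 midpoint subdivision step could look like this; the distort function is a placeholder for whatever per-vertex distortion you apply:

#version 330 core
layout (triangles) in;
layout (triangle_strip, max_vertices = 12) out;

vec4 distort(vec4 p)
{
  return p; // placeholder: replace with your per-vertex distortion
}

void emit_triangle(vec4 a, vec4 b, vec4 c)
{
  gl_Position = distort(a); EmitVertex();
  gl_Position = distort(b); EmitVertex();
  gl_Position = distort(c); EmitVertex();
  EndPrimitive();
}

void main()
{
  vec4 p0 = gl_in[0].gl_Position;
  vec4 p1 = gl_in[1].gl_Position;
  vec4 p2 = gl_in[2].gl_Position;
  // edge midpoints for one 1-to-4 subdivision step
  vec4 m01 = 0.5 * (p0 + p1);
  vec4 m12 = 0.5 * (p1 + p2);
  vec4 m20 = 0.5 * (p2 + p0);
  emit_triangle(p0, m01, m20);
  emit_triangle(m01, p1, m12);
  emit_triangle(m20, m12, p2);
  emit_triangle(m01, m12, m20);
}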

Another method is to render into a cubemap and then use a shader to apply the lens mapping to it. I'd actually recommend that approach for generating fisheye images.
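
A minimal sketch of such a cubemap pass, assuming an equidistant fisheye mapping sampled over a fullscreen quad; the ndc input, the scene sampler and the fov_half uniform are placeholder names of mine:

#version 330 core
in vec2 ndc;               // fullscreen-quad coordinates in [-1, 1]
out vec4 frag_color;
uniform samplerCube scene; // the scene, pre-rendered into a cubemap
uniform float fov_half;    // half the fisheye field of view, in radians

void main()
{
  float r = length(ndc);
  if (r > 1.0) { frag_color = vec4(0.0); return; } // outside the image circle
  float theta = r * fov_half;   // equidistant: angle grows linearly with radius
  float phi = atan(ndc.y, ndc.x);
  // view-space ray direction, camera looking down -z
  vec3 dir = vec3(sin(theta) * cos(phi), sin(theta) * sin(phi), -cos(theta));
  frag_color = texture(scene, dir);
}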

The distortion itself is usually represented by a polynomial of order 3 to 5, mapping the undistorted angular distance from the optical axis to the distorted angular distance.
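
In GLSL, such a mapping is only a few lines; this sketch assumes the odd-polynomial form theta_d = theta * (1 + k1*theta^2 + k2*theta^4), as used e.g. by the OpenCV fisheye model, truncated to two coefficients here:

// undistorted angle theta -> distorted angle, odd polynomial in theta
float distort_angle(float theta, float k1, float k2)
{
  float t2 = theta * theta;
  return theta * (1.0 + t2 * (k1 + t2 * k2));
}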




Answer 2:


Inspired by the VR community, I implemented the distortion via vertex displacement. For high resolutions this is computationally more efficient, but it requires a mesh with a good vertex density. You might want to apply tessellation before distorting the image.

Here is the code that implements the OpenCV rational distortion model (see https://docs.opencv.org/4.0.1/d9/d0c/group__calib3d.html for the formulas):

#version 330 core
layout (location = 0) in vec3 position;
layout (location = 1) in vec3 normal_in;
layout (location = 2) in vec2 texture_coordinate_in;
uniform mat4 model_matrix;
uniform mat4 view_matrix;
uniform float dist_coeffs[8];
uniform mat4 projection_matrix;
uniform vec3 light_position;
out vec2 texture_coordinate;
out vec3 normal;
out vec3 light_direction;

// distort the real world vertices using the rational model
vec4 distort(vec4 view_pos)
{
  // normalize
  float z = view_pos[2];
  float z_inv = 1 / z;
  float x1 = view_pos[0] * z_inv;
  float y1 = view_pos[1] * z_inv;
  // precalculations
  float x1_2 = x1*x1;
  float y1_2 = y1*y1;
  float x1_y1 = x1*y1;
  float r2 = x1_2 + y1_2;
  float r4 = r2*r2;
  float r6 = r4*r2;
  // rational distortion factor
  float r_dist = (1 + dist_coeffs[0]*r2 +dist_coeffs[1]*r4 + dist_coeffs[4]*r6) 
    / (1 + dist_coeffs[5]*r2 + dist_coeffs[6]*r4 + dist_coeffs[7]*r6);
  // full (rational + tangential) distortion
  float x2 = x1*r_dist + 2*dist_coeffs[2]*x1_y1 + dist_coeffs[3]*(r2 + 2*x1_2);
  float y2 = y1*r_dist + 2*dist_coeffs[3]*x1_y1 + dist_coeffs[2]*(r2 + 2*y1_2);
  // denormalize for projection (which is a linear operation)
  return vec4(x2*z, y2*z, z, view_pos[3]);
}

void main()
{
  vec4 local_pos = vec4(position, 1.0);
  vec4 world_pos = model_matrix * local_pos;
  vec4 view_pos = view_matrix * world_pos;
  vec4 dist_pos = distort(view_pos);
  gl_Position = projection_matrix * dist_pos;
  // lighting on world coordinates not distorted ones
  normal = mat3(transpose(inverse(model_matrix))) * normal_in;
  light_direction = normalize(light_position - world_pos.xyz);
  texture_coordinate = texture_coordinate_in;
}

It is important to note that the distortion is calculated in z-normalized coordinates but denormalized back into view coordinates in the last line of distort. This allows using a projection matrix like the one from this post: http://ksimek.github.io/2013/06/03/calibrated_cameras_in_opengl/
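
For reference, here is a sketch of such a projection matrix built directly from the pinhole intrinsics fx, fy, cx, cy (in pixels) and the image size w x h, under the usual CV convention (image origin top-left, y pointing down); in practice you'd compute this on the CPU, it's written in GLSL here only for consistency with the shader code above:

// OpenGL projection matrix from pinhole intrinsics (sketch, conventions as above)
mat4 projection_from_intrinsics(float fx, float fy, float cx, float cy,
                                float w, float h, float near, float far)
{
  float a = -(far + near) / (far - near);
  float b = -2.0 * far * near / (far - near);
  // mat4 takes columns, so this is the transpose of the usual row-major form
  return mat4(2.0 * fx / w,       0.0,                0.0, 0.0,   // column 0
              0.0,                2.0 * fy / h,       0.0, 0.0,   // column 1
              1.0 - 2.0 * cx / w, 2.0 * cy / h - 1.0, a,  -1.0,   // column 2
              0.0,                0.0,                b,   0.0);  // column 3
}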

Edit: For anyone interested in seeing the code in context, I have published it in a small library; the distortion shader is used in this example.



Source: https://stackoverflow.com/questions/44489686/camera-lens-distortion-in-opengl
