Question
I've been trying for a couple of days to read back float data with glReadPixels.
My cpp code:
// expanded to a whole-screen quad via the vertex shader
glDrawArrays( GL_TRIANGLES, 0, 3 );
int size = width * height;
GLfloat* pixels = new GLfloat[ size ];
glReadPixels( 0, 0, width, height, GL_RED, GL_FLOAT, pixels );
pixelVector.resize( size );
for ( int i = 0; i < size; i++ ) {
    pixelVector[i] = pixels[i]; // GLfloat is float, no cast needed
}
delete[] pixels; // avoid leaking the staging buffer
and my fragment shader code:
out float data;

void main()
{
    data = 0.02;
}
Strangely, I get 0.0196078 as output. But when data is 0.2, everything is fine. And if data is 0.002, the output is all 0s. What could possibly cause this?
Answer 1:
This is caused by storing a floating-point value in a normalized integer format, then reading it back and converting it into a floating-point value again.
Unless you're using a framebuffer object, odds are pretty good that your current framebuffer is just what you got from the OpenGL context, which probably uses GL_RGBA8 as its image format. That's 8 bits per channel, unsigned and normalized, stored as an integer. So the floating-point value you write is clamped to the [0, 1] range, converted into an integer by multiplying by 255 and rounding, and then stored.
When you read it back as a float, the conversion is done in reverse: the integer value is converted into a float, divided by 255, and returned.
0.02 × 255 = 5.1 ≈ 5
5 / 255 ≈ 0.0196
So that's what you get back.
If you want to write a floating-point value from your fragment shader and actually get more than 2-3 digits of precision back when you read it, then you need to be rendering to an FBO whose attachment uses a suitable image format, such as a floating-point one (GL_R16F or GL_R32F, since you're only writing one channel of data).
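A minimal sketch of that setup, assuming a current OpenGL 3.x context with the function pointers already loaded and `width`/`height` defined (error handling and the draw call itself are elided, so this needs a live context to actually run):

```cpp
// Create a single-channel 32-bit float texture and attach it to an FBO,
// so the shader's float output is stored without 8-bit quantization.
GLuint fbo = 0, tex = 0;

glGenTextures( 1, &tex );
glBindTexture( GL_TEXTURE_2D, tex );
glTexImage2D( GL_TEXTURE_2D, 0, GL_R32F, width, height, 0,
              GL_RED, GL_FLOAT, nullptr );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );

glGenFramebuffers( 1, &fbo );
glBindFramebuffer( GL_FRAMEBUFFER, fbo );
glFramebufferTexture2D( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                        GL_TEXTURE_2D, tex, 0 );
// Verify completeness before drawing:
// glCheckFramebufferStatus( GL_FRAMEBUFFER ) == GL_FRAMEBUFFER_COMPLETE

// ... bind shader, draw the fullscreen triangle here ...

std::vector<GLfloat> pixels( width * height );
glReadPixels( 0, 0, width, height, GL_RED, GL_FLOAT, pixels.data() );
// pixels[i] now holds 0.02f (up to float precision), not 0.0196078
glBindFramebuffer( GL_FRAMEBUFFER, 0 );
```

With GL_R32F the readback is a straight copy of the 32-bit float written by the shader; GL_R16F would still quantize, but to half-float precision rather than 8 bits.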
Source: https://stackoverflow.com/questions/16874510/how-can-i-read-float-data-with-glreadpixels