Difference between format and internalformat

伪装坚强ぢ 2020-12-13 19:53

I did search and read stuff about this but couldn't understand it.

What's the difference between a texture internal format and format in a call like the one below?
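(The call itself presumably looks roughly like this glTexImage2D sketch, with data assumed to point to the client-memory pixels of the 32 x 32 RGBA example discussed in the answer:)

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 32, 32, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);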

2 Answers
  •  臣服心动
    2020-12-13 20:25

    The internal format describes how the texture shall be stored in the GPU. The format (together with the type parameter) describes the layout of your pixel data in client memory.

    Note that the internal format specifies both the number of channels (1 to 4) and the data type, while for the pixel data in client memory, both are specified via two separate parameters (format and type).

    The GL will convert your pixel data to the internal format. If you want efficient texture uploads, you should use matching formats so that no conversion is needed. But be aware that most GPUs store the texture data in BGRA order; this is still represented by the internal format GL_RGBA, because the internal format only describes the number of channels and the data type, while the internal layout is totally GPU-specific. That means it is often recommended, for maximum performance, to use GL_BGRA as the format of your pixel data in client memory.
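    An upload along these lines should hit that fast path (a sketch; width, height and pixels are placeholder variables, and GL_RGBA8 is the sized equivalent of GL_RGBA):

        // GPU storage: 4 normalized 8-bit channels (actual layout is up to the driver).
        // Client memory: BGRA byte order, which many GPUs can consume without swizzling.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                     GL_BGRA, GL_UNSIGNED_BYTE, pixels);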

    Let's assume that data is an array of 32 x 32 pixel values with four bytes per pixel (unsigned char data, 0-255) for red, green, blue and alpha. What's the difference between the first GL_RGBA and the second one?

    The first one, internalFormat, tells the GL that it should store the texture as 4-channel (RGBA) normalized integer data in the preferred precision (8 bits per channel). The second one, format, tells the GL that you are providing 4 channels per pixel in R, G, B, A order.

    You could, for example, supply the data as 3-channel RGB data and the GL would automatically extend it to RGBA (setting A to 1) if the internal format is left at RGBA. You could also supply only the red channel.

    The other way around, if you use GL_RED as internalFormat, the GL would ignore the G, B and A channels in your input data.
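    A couple of sketches of those conversions (w, h and the pixel pointers are placeholders):

        // 3-channel client data expanded to the 4-channel RGBA internal format (A becomes 1):
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGB, GL_UNSIGNED_BYTE, rgbPixels);

        // 4-channel client data stored in a single-channel internal format (G, B and A are dropped):
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, rgbaPixels);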

    Also note that the data types will be converted. If you provide RGB pixel data with a 32-bit float per channel, you would use GL_FLOAT as the type. However, if you still use the GL_RGBA internal format, the GL will convert this to normalized integers with 8 bits per channel, so the extra precision is lost. If you want the GL to store the data with floating-point precision, you also have to use a floating-point internal format like GL_RGBA32F.
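    For example (again a sketch with placeholder variables), this keeps the full float precision on the GPU:

        // Floating-point internal format paired with float client data: no precision is lost.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, w, h, 0, GL_RGBA, GL_FLOAT, floatPixels);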

    Why is GL_RGBA_INTEGER invalid in this context?

    The _INTEGER formats are for unnormalized integer textures. There is no automatic conversion for integer textures in the GL. You have to use an integer internal format, AND you have to specify your pixel data with some _INTEGER format, otherwise it will result in an error.
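    A valid combination would look roughly like this (GL_RGBA32UI is one of the integer internal formats; the variables are placeholders):

        // Unnormalized 32-bit unsigned integer texture: an integer internal format
        // paired with an _INTEGER client format, as required.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32UI, w, h, 0,
                     GL_RGBA_INTEGER, GL_UNSIGNED_INT, uintPixels);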
