Portably and Correctly Getting Maximum 1D/2D/3D/Cube Texture Resolution in OpenGL

Submitted by 夏天 on 2019-12-12 02:50:27

Question


I am interested in getting the maximum hardware-supported resolution for textures.

There are, as far as I have found, two mechanisms for doing something related to this:

  1. glGetIntegerv(GL_MAX_TEXTURE_SIZE, &dim) for 2D (and cube?) textures has served me well. For 3D textures, I discovered (the hard way) that you need to use GL_MAX_3D_TEXTURE_SIZE instead. As far as I can tell, these return the maximum resolution along one side, with the other sides assumed to be the same.

    It is unclear what these values actually represent. According to the documentation, the values returned by glGetIntegerv(...) are to be treated as a "rough estimate", but it's unclear whether they are conservative underestimates, best guesses, or best cases. Furthermore, it's unclear whether these are hardware limitations or current limits imposed by the amount of available graphics memory.

    The documentation instead suggests using...

  2. GL_PROXY_TEXTURE_(1|2|3)D/GL_PROXY_TEXTURE_CUBE_MAP. The idea is that you make a proxy texture before you make the real one, then verify that the proxy came out okay by querying the dimensions it actually got. For 3D textures, that looks like:

    // Attempt the allocation with the proxy target first (GL_RGBA8 is a placeholder):
    glTexImage3D(GL_PROXY_TEXTURE_3D, 0, GL_RGBA8, width, height, depth, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glGetTexLevelParameteriv(GL_PROXY_TEXTURE_3D, 0,  GL_TEXTURE_WIDTH, &width);
    glGetTexLevelParameteriv(GL_PROXY_TEXTURE_3D, 0, GL_TEXTURE_HEIGHT, &height);
    glGetTexLevelParameteriv(GL_PROXY_TEXTURE_3D, 0,  GL_TEXTURE_DEPTH, &depth);
    

    If all goes well, the dimensions returned will be nonzero (and presumably match what you requested). Proxy targets carry no texture object, so there is nothing to delete; you simply go on to make the texture for real (see the sketch after this list).

    Some older sources state that proxy textures give outright wrong answers, but that may not be true today.

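For reference, the whole proxy check can be wrapped in a small helper. This is a minimal sketch, assuming a GL 4.x context with a function loader already set up; the RGBA8 internal format and the name proxy_3d_fits are illustrative choices, not anything mandated by the API:

    #include <stdbool.h>

    /* Ask the implementation whether it could create a level-0 RGBA8 3D texture
       of the given size. Proxy targets carry no state, so nothing is created
       and nothing needs to be deleted afterwards. */
    static bool proxy_3d_fits(GLsizei w, GLsizei h, GLsizei d)
    {
        GLint got = 0;
        glTexImage3D(GL_PROXY_TEXTURE_3D, 0, GL_RGBA8, w, h, d, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        glGetTexLevelParameteriv(GL_PROXY_TEXTURE_3D, 0, GL_TEXTURE_WIDTH, &got);
        return got != 0; /* all level-0 dimensions read back as 0 on rejection */
    }
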
So, for modern OpenGL (GL 4.* is fine), what is the best way to get the maximum hardware-supported resolution for 1D-, 2D-, 3D-, and cube-textures?


Answer 1:


The values returned by glGetIntegerv() for GL_MAX_TEXTURE_SIZE and GL_MAX_3D_TEXTURE_SIZE are the correct limits for the particular implementation.

"It is unclear what these values actually represent. According to the documentation, the values returned by glGetIntegerv(...) are to be treated as a 'rough estimate', but it's unclear whether they are conservative underestimates, best guesses, or best cases."

What documentation are you referring to? The GL spec is very clear on the meaning of these values, and they are not estimates of any kind.

The proxy method should work too, but it does not directly let you query the limits. You could of course use binary search to narrow down the exact limit via the proxy-texture path, but that is a rather clumsy approach.
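
For illustration, that binary search could look like the sketch below, reusing a proxy check such as the proxy_3d_fits helper sketched in the question (the cube-shaped probe and the function name are arbitrary choices):

    /* Find the largest edge N for which an N x N x N proxy request succeeds,
       starting from the advertised GL_MAX_3D_TEXTURE_SIZE. */
    static GLint max_accepted_3d_edge(void)
    {
        GLint lo = 1, hi = 0;
        glGetIntegerv(GL_MAX_3D_TEXTURE_SIZE, &hi);
        while (lo < hi) {
            GLint mid = lo + (hi - lo + 1) / 2; /* round up so the loop terminates */
            if (proxy_3d_fits(mid, mid, mid))
                lo = mid;     /* accepted: the answer is mid or larger */
            else
                hi = mid - 1; /* rejected: the answer is below mid     */
        }
        return lo;
    }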




Answer 2:


There is a separate value for cube maps, which is queried with GL_MAX_CUBE_MAP_TEXTURE_SIZE. So the limits are:

  • GL_MAX_TEXTURE_SIZE: Maximum size for GL_TEXTURE_1D and GL_TEXTURE_2D.
  • GL_MAX_RECTANGLE_TEXTURE_SIZE: Maximum size for GL_TEXTURE_RECTANGLE.
  • GL_MAX_CUBE_MAP_TEXTURE_SIZE: Maximum size for GL_TEXTURE_CUBE_MAP.
  • GL_MAX_3D_TEXTURE_SIZE: Maximum size for GL_TEXTURE_3D.
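
Querying them is one glGetIntegerv call per limit; a minimal sketch:

    GLint max_2d = 0, max_rect = 0, max_cube = 0, max_3d = 0;
    glGetIntegerv(GL_MAX_TEXTURE_SIZE,           &max_2d);   /* GL_TEXTURE_1D / GL_TEXTURE_2D */
    glGetIntegerv(GL_MAX_RECTANGLE_TEXTURE_SIZE, &max_rect); /* GL_TEXTURE_RECTANGLE          */
    glGetIntegerv(GL_MAX_CUBE_MAP_TEXTURE_SIZE,  &max_cube); /* GL_TEXTURE_CUBE_MAP           */
    glGetIntegerv(GL_MAX_3D_TEXTURE_SIZE,        &max_3d);   /* GL_TEXTURE_3D                 */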

The "rough estimate" language you found on the man pages seems unfortunate. If you look at the much more relevant spec document instead, it talks about the "maximum allowable width and height", or simply says that it's an error to use a size larger than these limits.

These limits represent the maximum sizes supported by the hardware, or more precisely, the advertised hardware limit. It is of course legal for an implementation to advertise less than the hardware could actually support, as long as the advertised limit is applied consistently. Think of it as the largest texture the hardware can manage and sample; that is the size these limits report.

These limits have nothing to do with the amount of memory available, so staying within them is absolutely no guarantee that a texture of that size can successfully be allocated.

I believe the intention of proxy textures is to let you check what size can actually be allocated. I don't know whether that works reliably on any platform; the mechanism really is not a good fit for how modern GPUs manage memory. But I have never used proxy textures, or dealt with implementing them, and I would definitely expect significant platform/vendor differences in how exactly they operate. So you should probably test whether they give you the desired results on the platforms you care about.
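
One way to run that experiment is to probe the advertised 2D limit through the proxy path and see what comes back; a sketch, where GL_RGBA8 and the square shape are arbitrary choices:

    GLint limit = 0, got = 0;
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &limit);
    glTexImage2D(GL_PROXY_TEXTURE_2D, 0, GL_RGBA8, limit, limit, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glGetTexLevelParameteriv(GL_PROXY_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &got);
    /* got == limit: the proxy accepted the request; got == 0: it was rejected */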



Source: https://stackoverflow.com/questions/26410799/portably-and-correctly-getting-maximum-1d-2d-3d-cube-texture-resolution-in-openg
