3D texture size affecting program output without error being thrown

Submitted by 别来无恙 on 2019-12-11 02:34:27

Question


First, I am using the debug message callback (glDebugMessageCallback()) instead of glGetError() to detect errors.

Second, I am allocating a 3D texture storage as follows:

glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA32F, 512, 512, 303, 0, GL_RGBA, GL_FLOAT, NULL);

When the depth component is 303 or less, my program works exactly as expected (I write a color into the texture and see that color in the output). When that parameter is 304 or higher, the program doesn't work (the screen is black).

I have tested the same program on different machines, and the threshold changes depending on the computer: sometimes it's higher, sometimes lower.

My hypothesis is that some drivers cannot allocate enough memory to hold this texture. But no error is thrown, or at least the debug callback is never invoked.

Is there a way I can verify that I am indeed requesting more memory than can be allocated, and somehow extend that limit?


Answer 1:


Yep, with depth 303 the texture is ~1.2 GByte if I compute it correctly (512 × 512 × 303 texels at 16 bytes per GL_RGBA32F texel). Your gfx card, however, also needs memory for the framebuffer and other stuff. Sadly, there is no core GL API to query available memory, but there are vendor extensions for this. This is how I query basic info in my OpenGL engine (VCL/C++):

AnsiString OpenGLscreen::glinfo()
    {
    AnsiString txt="";
    txt+=glGetAnsiString(GL_VENDOR)+"\r\n";
    txt+=glGetAnsiString(GL_RENDERER)+"\r\n";
    txt+="OpenGL ver: "+glGetAnsiString(GL_VERSION)+"\r\n";
    if (ext_is_supported("GL_NVX_gpu_memory_info"))
        {
        GLint x,y,z;
        x=0; glGetIntegerv(GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX,&x);
        y=0; glGetIntegerv(GL_GPU_MEMORY_INFO_DEDICATED_VIDMEM_NVX,&y);
        z=0; glGetIntegerv(GL_GPU_MEMORY_INFO_TOTAL_AVAILABLE_MEMORY_NVX,&z);
        txt+=AnsiString().sprintf("GPU memory: %i/%i/%i MByte\r\n",x>>10,y>>10,z>>10); // GPU free/GPU total/GPU+CPU shared total
        x=0; glGetIntegerv(GL_GPU_MEMORY_INFO_EVICTION_COUNT_NVX,&x);
        y=0; glGetIntegerv(GL_GPU_MEMORY_INFO_EVICTED_MEMORY_NVX,&y);
        txt+=AnsiString().sprintf("GPU blocks: %i used: %i MByte\r\n",x,y>>10);
        }
    if (ext_is_supported("GL_ATI_meminfo"))
        {
        // each GL_ATI_meminfo query fills 4 values:
        // [0] total free, [1] largest free block,
        // [2] free auxiliary, [3] largest free auxiliary block
        GLint p[4];
        p[0]=0; glGetIntegerv(GL_VBO_FREE_MEMORY_ATI,p);
        txt+=AnsiString().sprintf("VBO memory: %i MByte\r\n",p[0]>>10);
        p[0]=0; glGetIntegerv(GL_TEXTURE_FREE_MEMORY_ATI,p);
        txt+=AnsiString().sprintf("TXR memory: %i MByte\r\n",p[0]>>10);
        p[0]=0; glGetIntegerv(GL_RENDERBUFFER_FREE_MEMORY_ATI,p);
        txt+=AnsiString().sprintf("BUF memory: %i MByte\r\n",p[0]>>10);
        }
    return txt;
    }

So extract the nVidia and ATI/AMD memory queries and port them to your environment.
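Besides querying free memory, core GL also offers proxy textures for testing whether a particular allocation would be accepted; this technique is not used in the answer above, but it maps directly onto the question "can I verify that I am requesting more than can be allocated?". A minimal sketch (assumes a current GL context and, on Windows, that the GL 1.2+ entry point glTexImage3D has been loaded via an extension loader):

```cpp
#include <GL/gl.h>

// Returns true if the driver reports it could create this 3D texture.
// Caveat: a passing proxy check is not a hard guarantee -- some drivers
// accept the proxy yet still fail the real allocation later.
bool can_allocate_rgba32f_3d(GLsizei w, GLsizei h, GLsizei d)
    {
    // same parameters as the real call, but against the proxy target
    glTexImage3D(GL_PROXY_TEXTURE_3D, 0, GL_RGBA32F, w, h, d, 0,
                 GL_RGBA, GL_FLOAT, NULL);
    GLint got_w = 0;
    glGetTexLevelParameteriv(GL_PROXY_TEXTURE_3D, 0, GL_TEXTURE_WIDTH, &got_w);
    return got_w != 0; // width reads back as 0 when the request is rejected
    }
```

Checking the proxy before the real glTexImage3D call gives a per-size yes/no without depending on vendor-specific memory-info extensions.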



Source: https://stackoverflow.com/questions/50375686/3d-texture-size-affecting-program-output-without-error-being-thrown
