How to debug a GLSL shader?

Submitted by: Anonymous (unverified) on 2019-12-03 02:49:01

Question:

I need to debug a GLSL program, but I don't know how to output intermediate results. Is it possible to produce some debug traces (like with printf) in GLSL?

Answer 1:

You can't easily communicate back to the CPU from within GLSL. Using glslDevil or other tools is your best bet.

A printf would require getting data back to the CPU from the GPU running the GLSL code. Instead, you can push ahead to the display: rather than trying to output text, output something visually distinctive to the screen. For example, you can paint a specific color only if you reach the point in your code where you would want to add a printf. If you need to printf a value, you can set the color according to that value.



Answer 2:

void main()
{
    float bug = 0.0;
    vec3 tile = texture2D(colMap, coords.st).xyz;
    vec4 col = vec4(tile, 1.0);

    if (something) bug = 1.0;
    col.x += bug;

    gl_FragColor = col;
}


Answer 3:

I have found Transform Feedback to be a useful tool for debugging vertex shaders. You can use this to capture the values of VS outputs, and read them back on the CPU side, without having to go through the rasterizer.

Here is another link to a tutorial on Transform Feedback.



Answer 4:

If you want to visualize the variations of a value across the screen, you can use a heatmap function similar to this (I wrote it in HLSL, but it is easy to adapt to GLSL):

float4 HeatMapColor(float value, float minValue, float maxValue)
{
    #define HEATMAP_COLORS_COUNT 6
    float4 colors[HEATMAP_COLORS_COUNT] =
    {
        float4(0.32, 0.00, 0.32, 1.00),
        float4(0.00, 0.00, 1.00, 1.00),
        float4(0.00, 1.00, 0.00, 1.00),
        float4(1.00, 1.00, 0.00, 1.00),
        float4(1.00, 0.60, 0.00, 1.00),
        float4(1.00, 0.00, 0.00, 1.00),
    };
    float ratio = (HEATMAP_COLORS_COUNT - 1.0) * saturate((value - minValue) / (maxValue - minValue));
    float indexMin = floor(ratio);
    float indexMax = min(indexMin + 1, HEATMAP_COLORS_COUNT - 1);
    return lerp(colors[indexMin], colors[indexMax], ratio - indexMin);
}

Then in your pixel shader you just output something like:

return HeatMapColor(myValue, 0.00, 50.00); 

From this you can get an idea of how the value varies across your pixels.

Of course you can use any set of colors you like.
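The same gradient lookup can be mirrored on the CPU to sanity-check what the shader should output for a given value. A minimal Python sketch of the heatmap function above, with HLSL's saturate and lerp spelled out by hand:

```python
import math

def saturate(x):
    # HLSL saturate(): clamp to [0, 1]
    return max(0.0, min(1.0, x))

def lerp(a, b, t):
    # HLSL lerp(): componentwise linear interpolation between color tuples
    return tuple(ai + (bi - ai) * t for ai, bi in zip(a, b))

HEATMAP_COLORS = [
    (0.32, 0.00, 0.32, 1.00),  # dark purple
    (0.00, 0.00, 1.00, 1.00),  # blue
    (0.00, 1.00, 0.00, 1.00),  # green
    (1.00, 1.00, 0.00, 1.00),  # yellow
    (1.00, 0.60, 0.00, 1.00),  # orange
    (1.00, 0.00, 0.00, 1.00),  # red
]

def heat_map_color(value, min_value, max_value):
    # Same arithmetic as the HLSL HeatMapColor() above.
    ratio = (len(HEATMAP_COLORS) - 1) * saturate((value - min_value) / (max_value - min_value))
    index_min = int(math.floor(ratio))
    index_max = min(index_min + 1, len(HEATMAP_COLORS) - 1)
    return lerp(HEATMAP_COLORS[index_min], HEATMAP_COLORS[index_max], ratio - index_min)
```

Values at or below the minimum map to the first color, values at or above the maximum to the last, and everything in between is interpolated linearly between adjacent stops.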



Answer 5:

GLSL Sandbox has been pretty handy to me for shaders.

It's not debugging per se (which, as other answers note, GLSL doesn't really support), but it is handy for seeing changes in the output quickly.



Answer 6:

Do offline rendering to a texture and evaluate the texture's data. You can find related code by googling for "render to texture" opengl. Then use glReadPixels to read the output into an array and perform assertions on it (since looking through such a huge array in the debugger is usually not really useful).
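Once the pixel data is on the CPU, the assertion step might look like the following sketch. The buffer here is hypothetical stand-in data; in a real program the bytes would come from glReadPixels on a 2x2 RGBA8 framebuffer:

```python
# Hypothetical stand-in for a glReadPixels result:
# a 2x2 RGBA8 framebuffer, flattened row by row.
WIDTH, HEIGHT = 2, 2
pixels = bytes([
    255,   0,   0, 255,    0, 255,   0, 255,
      0,   0, 255, 255,  255, 255, 255, 255,
])

def pixel_at(x, y):
    # Return the (r, g, b, a) tuple at framebuffer coordinate (x, y).
    i = (y * WIDTH + x) * 4
    return tuple(pixels[i:i + 4])

# Assert properties of the whole image instead of eyeballing the array:
assert len(pixels) == WIDTH * HEIGHT * 4
assert all(pixel_at(x, y)[3] == 255 for y in range(HEIGHT) for x in range(WIDTH)), \
    "alpha should be fully opaque everywhere"
assert pixel_at(0, 0) == (255, 0, 0, 255)  # the shader marked this pixel red
```

The point is that a blanket assertion over all pixels catches a bad value immediately, where scrolling through a megabyte-sized array in a debugger would not.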

Also, you might want to disable clamping so that you can output values that are not between 0 and 1; that is only supported for floating-point textures.

I personally was bothered by the problem of properly debugging shaders for a while, and there does not seem to be a good way. If anyone finds a good (and not outdated/deprecated) debugger, please let me know.



Answer 7:

I am sharing a fragment shader example showing how I actually debug.

#version 410 core

uniform sampler2D samp;

in VS_OUT
{
    vec4 color;
    vec2 texcoord;
} fs_in;

out vec4 color;

void main(void)
{
    vec4 sampColor;
    if (texture2D(samp, fs_in.texcoord).x > 0.8f)     // Check if the color contains red
        sampColor = vec4(1.0f, 1.0f, 1.0f, 1.0f);     // If yes, set it to white
    else
        sampColor = texture2D(samp, fs_in.texcoord);  // else sample from the original
    color = sampColor;
}



Answer 8:

The existing answers are all good stuff, but I wanted to share one more little gem that has been valuable in debugging tricky precision issues in a GLSL shader. With very large int numbers represented as floating point, one needs to take care to use floor(n) and floor(n + 0.5) properly to implement round() to an exact int. It is then possible to render a float value that holds an exact int by packing the byte components into the R, G, and B output values with the following logic.

// Break components out of 24 bit float with rounded int value
// scaledWOB = (offset >> 8) & 0xFFFF
float scaledWOB = floor(offset / 256.0);
// c2 = (scaledWOB >> 8) & 0xFF
float c2 = floor(scaledWOB / 256.0);
// c0 = offset - (scaledWOB << 8)
float c0 = offset - scaledWOB * 256.0;
// c1 = scaledWOB - (c2 << 8)
float c1 = scaledWOB - c2 * 256.0;
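The byte-splitting arithmetic can be checked on the CPU side. A small Python sketch of the same decomposition (the function name is mine; the final division by 255 to feed the color output is left out):

```python
import math

def split24(offset):
    # Split a non-negative integer-valued float below 2**24 into three
    # bytes (c0 low, c1 middle, c2 high), mirroring the shader arithmetic.
    scaled_wob = math.floor(offset / 256.0)  # (offset >> 8) & 0xFFFF
    c2 = math.floor(scaled_wob / 256.0)      # (scaledWOB >> 8) & 0xFF
    c0 = offset - scaled_wob * 256.0         # offset - (scaledWOB << 8)
    c1 = scaled_wob - c2 * 256.0             # scaledWOB - (c2 << 8)
    return int(c0), int(c1), int(c2)
```

Recombining c0 + 256*c1 + 65536*c2 gives back the original value exactly for any integer below 2**24, which is the property that makes the trick lossless.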


Answer 9:

At the bottom of this answer is an example of GLSL code which allows outputting the full float value as a color, encoding it as IEEE 754 binary32. I use it as follows (this snippet outputs the yy component of the modelview matrix):

vec4 xAsColor=toColor(gl_ModelViewMatrix[1][1]);
if(bool(1)) // put 0 here to get lowest byte instead of three highest
    gl_FrontColor=vec4(xAsColor.rgb,1);
else
    gl_FrontColor=vec4(xAsColor.a,0,0,1);

After you get this on screen, you can just take any color picker, format the color as HTML (appending 00 to the rgb value if you don't need higher precision, or doing a second pass to get the lower byte if you do), and you get the hexadecimal representation of the float as IEEE 754 binary32.
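Turning that hex string back into a number just means reinterpreting the four bytes as a big-endian binary32. A quick Python helper (the function name is mine):

```python
import struct

def color_to_float(hex_color):
    # hex_color: 8 hex digits as picked from the screen, e.g. "3f800000"
    # (RGB from the first pass plus the lowest byte from a second pass,
    # or with "00" appended if full precision isn't needed).
    raw = bytes.fromhex(hex_color)
    # The shader packs the highest byte into R, so the bytes are already
    # in big-endian order for struct.
    return struct.unpack(">f", raw)[0]
```

For example, a picked color of 3f800000 decodes to 1.0, and c0000000 decodes to -2.0.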

Here's the actual implementation of toColor():

#version 120

const int emax=127;
// Input: x>=0
// Output: base 2 exponent of x if (x!=0 && !isnan(x) && !isinf(x))
//         -emax if x==0
//         emax+1 otherwise
int floorLog2(float x)
{
    if(x==0.) return -emax;
    // NOTE: there exist values of x, for which floor(log2(x)) will give wrong
    // (off by one) result as compared to the one calculated with infinite precision.
    // Thus we do it in a brute-force way.
    for(int e=emax;e>=1-emax;--e)
        if(x>=exp2(float(e))) return e;
    // If we are here, x must be infinity or NaN
    return emax+1;
}

// Input: any x
// Output: IEEE 754 biased exponent with bias=emax
int biasedExp(float x) { return emax+floorLog2(abs(x)); }

// Input: any x such that (!isnan(x) && !isinf(x))
// Output: significand AKA mantissa of x if !isnan(x) && !isinf(x)
//         undefined otherwise
float significand(float x)
{
    // converting int to float so that exp2(genType) gets correctly-typed value
    float expo=floorLog2(abs(x));
    return abs(x)/exp2(expo);
}

// Input: x\in[0,1)
//        N>=0
// Output: Nth byte as counted from the highest byte in the fraction
int part(float x,int N)
{
    // All comments about exactness here assume that underflow and overflow don't occur
    const float byteShift=256.;
    // Multiplication is exact since it's just an increase of exponent by 8
    for(int n=0;n<N;++n)
        x*=byteShift;
    // Cut the higher bits away and take the byte we need
    return int(floor(mod(x,1.)*byteShift));
}

// Input: any x acceptable to significand()
// Output: significand of x split into the three highest bytes
ivec3 significandAsIVec3(float x)
{
    ivec3 result;
    float sig=significand(x)/2.; // shift all bits into the fractional part
    result.x=part(sig,0);
    result.y=part(sig,1);
    result.z=part(sig,2);
    return result;
}

// Input: any x such that !isnan(x)
// Output: IEEE 754 binary32 representation of x, packed as ivec4(byte3,byte2,byte1,byte0)
ivec4 packIEEE754binary32(float x)
{
    int e=biasedExp(x);
    // sign to bit 7
    int s=x<0. ? 128 : 0;

    ivec4 binary32;
    binary32.yzw=significandAsIVec3(x);
    // clear the implicit integer bit of significand
    if(binary32.y>=128) binary32.y-=128;
    // put lowest bit of exponent into its position, replacing just cleared integer bit
    binary32.y+=128*int(mod(float(e),2.));
    // prepare high bits of exponent for fitting into their positions
    e/=2;
    // pack highest byte
    binary32.x=e+s;

    return binary32;
}

vec4 toColor(float x)
{
    ivec4 binary32=packIEEE754binary32(x);
    // Transform color components to [0,1] range.
    // Division is inexact, but works reliably for all integers from 0 to 255 if
    // the transformation to TrueColor by GPU uses rounding to nearest or upwards.
    // The result will be multiplied by 255 back when transformed
    // to TrueColor subpixel value by OpenGL.
    return binary32/255.;
}

