Passing uint attribute to GLSL

Submitted by 冷眼眸甩不掉的悲伤 on 2019-12-06 07:32:54

Question


I'm trying to pass a bunch of consecutive unsigned ints as an attribute to my GLSL shader.

So far I have come up with:

s_number = glGetAttribLocation(shader, "number");

numberData = new GLuint[dotAmount];
for (GLuint i = 0; i < dotAmount; i++) {
    numberData[i] = i;
}

glGenBuffers(1, &vertBuf);
glBindBuffer(GL_ARRAY_BUFFER, vertBuf);

glBufferData(
        GL_ARRAY_BUFFER,
        sizeof(dotAmount),
        numberData,
        GL_STATIC_DRAW
);

The rendering function is

glUseProgram(shader);

[..]

glEnableVertexAttribArray(s_number);
glBindBuffer(GL_ARRAY_BUFFER, vertBuf);

glVertexAttribPointer(
        s_number,
        1,
        GL_UNSIGNED_INT,
        GL_FALSE,
        0,
        BUFFER_OFFSET(0)
);

glDrawArrays(GL_POINTS, 0, dotAmount);

I try to use the number in the vertex shader like this:

attribute uint number;

(The name 'vertBuf' is actually a bit misleading, since it's not vertex data I want to pass.) I'm using OpenGL 3 and GLSL version 1.30.

What I am trying to achieve is for the shaders to be executed dotAmount times; the positioning is done mathematically within the vertex shader. But all I get is a blank screen...

I am quite sure that the problem does not lie in the shaders. I want to draw points, and if I put gl_Position = vec4(0.0, 0.0, 0.0, 0.0); in the vertex shader, I assume it should draw something.


Answer 1:


Try changing the fragment shader to write gl_FragColor = vec4(1, 0, 0, 1); so that the output color makes the fragment visible.
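For reference, a minimal fragment shader along those lines might look like this (GLSL 1.30-era syntax, matching the question's setup; purely illustrative):

```glsl
#version 130

void main()
{
    // Solid opaque red: guarantees the fragment is visible
    // regardless of any attribute or uniform state.
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
```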

Also, you should have gl_Position = vec4(0, 0, 0, 1);. The reason is that gl_Position must be in homogeneous coordinates, meaning the first three components are divided by the fourth.




Answer 2:


You are using the wrong API call to specify your vertex attribute pointer.

glVertexAttribPointer (...) is for floating-point vertex attributes. It will happily take the value of an integer data type, but ultimately this value will be converted to floating-point. This is why it has a parameter to control floating-point normalization. When normalization is enabled, an integer value you pass is adjusted using the type's range to make it fit within the normalized floating-point range: [-1.0, 1.0] (signed) or [0.0, 1.0] (unsigned); when disabled, an integer is effectively treated as if it were cast to a GLfloat.

In your case, you want neither behavior described above. In your vertex shader, your vertex attribute is not a floating-point type to begin with, so having OpenGL convert your vertex array data to floating-point will produce meaningless results.

What you need to do is use glVertexAttribIPointer (...). Notice how this function lacks the boolean for normalization? It will pass your integer vertex data completely unaltered to your vertex shader, exactly what you want.


In summary:

  1. glVertexAttribPointer (...) is good for supplying data to floating-point vertex attributes (e.g. vec<N>, mat4, float) and will do data-type conversion for you.

  2. glVertexAttribIPointer (...) is specifically designed for integer attributes (e.g. ivec<N>, {u}int).
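Putting this together, the attribute setup from the question might be rewritten along these lines (a sketch, reusing the question's names; note that the original glBufferData call also passed sizeof(dotAmount), the size of the counter variable, where the buffer size in bytes is needed):

```cpp
glBindBuffer(GL_ARRAY_BUFFER, vertBuf);
glBufferData(
        GL_ARRAY_BUFFER,
        dotAmount * sizeof(GLuint),  // byte size of the whole array,
                                     // not sizeof(dotAmount)
        numberData,
        GL_STATIC_DRAW
);

glEnableVertexAttribArray(s_number);

// glVertexAttribIPointer passes integer data through unaltered, so
// the shader-side `uint number` receives the values as-is. Note the
// absence of the `normalized` parameter.
glVertexAttribIPointer(
        s_number,
        1,
        GL_UNSIGNED_INT,
        0,
        BUFFER_OFFSET(0)
);
```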



Source: https://stackoverflow.com/questions/18919927/passing-uint-attribute-to-glsl
