Get old style OpenGL code to work in GLSL


Question


I am trying to draw this pattern in OpenGL:

To get this, I created the pattern like this:

struct DataPoint { float x, y, Int; };              // 2D position + intensity

vector< vector<DataPoint> > datas;
float Intensitytemp = 0;
float xPos = 0, yPos = 0, angleInRadians = 0;
for (float theta = 0.0f; theta < 4096; theta += 1.f)
{
    vector<DataPoint> temp;
    angleInRadians = 2 * M_PI * theta / 4096;       // angle of this ray
    for (float r = 0; r < 4096; r += 1.f)
    {
        xPos = cos(angleInRadians) * r / 4096;      // point on the ray, scaled to [-1,1]
        yPos = sin(angleInRadians) * r / 4096;
        Intensitytemp = ((float)((int)r % 256)) / 255;  // intensity wraps every 256 steps
        DataPoint dt;
        dt.x = xPos;
        dt.y = yPos;
        dt.Int = Intensitytemp;
        temp.push_back(dt);
    }
    datas.push_back(temp);
}

and I am drawing the pattern as:

glBegin(GL_POINTS);
    for (int x = 0; x < 4096; x++)
        for (int y = 0; y < 4096; y++)
        {
            xPos = datas[x][y].x;
            yPos = datas[x][y].y;
            Intensitytemp = datas[x][y].Int;
            glColor4f(0.0f, Intensitytemp, 0.0f, 1.0f);
            glVertex3f(xPos, yPos, 0.0f);
        }
glEnd();

If I create the data inside the glBegin()/glEnd() block, it runs faster. But in both cases I believe a better way is to do it all in GLSL. I don't understand the logic behind modern OpenGL well.

I tried to create a vertex buffer array and color arrays but could not get it to work. The problem was not about transferring the arrays to the graphics card; I am getting stack overflows in the arrays. That is a question for another topic, but what I wonder here is whether it is possible to do this task entirely in GLSL code (the code in the .vert file) without transferring these huge arrays to the graphics card.
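
For reference, such stack overflows usually come from large fixed-size arrays on the stack; a flat, heap-allocated std::vector handed to glBufferData avoids them. A minimal sketch with illustrative names, not from the original post (the answers below show how to avoid uploading this data at all):

// assumes a GL context and function loader are already set up, plus <vector> and <cmath>
std::vector<float> points;                    // x, y, intensity interleaved, on the heap
points.reserve(4096u * 4096u * 3u);           // large, but no stack allocation involved
for (int theta = 0; theta < 4096; ++theta)
    for (int r = 0; r < 4096; ++r)
    {
        float a = 2.0f * float(M_PI) * theta / 4096.0f;
        points.push_back(cosf(a) * r / 4096.0f);   // x
        points.push_back(sinf(a) * r / 4096.0f);   // y
        points.push_back((r % 256) / 255.0f);      // intensity
    }

GLuint vbo = 0;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, points.size() * sizeof(float), points.data(), GL_STATIC_DRAW);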


Answer 1:


  1. render a quad covering the screen

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    
    GLint id;
    glUseProgram(prog_id);
    
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glMatrixMode(GL_TEXTURE);
    glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    
    glDisable(GL_DEPTH_TEST);
    glDisable(GL_TEXTURE_2D);
    
    glBegin(GL_QUADS);
    glColor3f(1,1,1);
    glVertex2f(-1.0,-1.0);
    glVertex2f(-1.0,+1.0);
    glVertex2f(+1.0,+1.0);
    glVertex2f(+1.0,-1.0);
    glEnd();
    
    glUseProgram(0);
    glFlush();
    SwapBuffers(hdc);
    

    Also see complete GL+GLSL+VAO/VBO C++ example on how to get GLSL working (even the new stuff). A minimal sketch of how the prog_id used above can be built is included after this list.

    Do not forget to set your GL viewport to a square area!

  2. in the vertex shader pass the vertex coordinates to the fragment shader

    no need for matrices... the pos is in range <-1.0, 1.0>, which is fine for the fragment shader.

    // Vertex
    varying vec2 pos;
    void main()
        {
        pos=gl_Vertex.xy;
        gl_Position=gl_Vertex;
        }
    
  3. in the fragment shader compute the distance from the middle (0,0) and compute the final color from it

    // Fragment
    varying vec2 pos;
    void main()
        {
        vec4 c=vec4(0.0,0.0,0.0,1.0);
        float r=length(pos);    // radius = distance to (0,0)
        if (r<=1.0)             // inside disc?
            {
            r=16.0*r;           // your range 16=4096/256
            c.g=r-floor(r);     // use only the fractional part ... %256
            }
        gl_FragColor=c;
        }
    

    Here is the result:

  4. How GLSL works

    You can think of the fragment shader as a color computation engine for polygon filling. It works like this:

    The GL primitive is passed by GL calls to the vertex shader, which is responsible for transformations and pre-computing constants. The vertex shader is called once for each glVertex call in old-style GL.

    Once a supported primitive (set by glBegin in old-style GL) has been fully passed (like TRIANGLE, QUAD, ...), the gfx card starts rasterization. This is done by HW interpolators calling the fragment shader for each "pixel" to fill. Because the "pixel" contains much more data than just color and can also be discarded, it is called a fragment instead. Its sole purpose is to compute the target color of the screen pixel it represents. You cannot change its position, only its color. That is the biggest difference between the old GL and the GLSL approach: you cannot change the shape or position of objects, only how they are colored/shaded, hence the name shaders. So if you need to generate a specific pattern or effect, you usually render some primitive covering the involved area with GL and recolor it by computation, mostly inside the fragment shader.

    Obviously, the vertex shader is not called as often as the fragment shader in most cases, so move as much of the computation as you can into the vertex shader to improve performance.

    Newer GLSL versions also support geometry and tessellation shaders, but that is a chapter of its own and not important for you now (you need to get used to vertex/fragment shaders first).
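
The prog_id used in step 1 above is assumed to come from a standard compile/link sequence; here is a minimal sketch of it (the helper names compile_shader and build_program are illustrative, not from the original answer):

// minimal sketch, assuming a GL 2.0+ context with the shader entry points loaded
GLuint compile_shader(GLenum type, const char* src)
{
    GLuint sh = glCreateShader(type);
    glShaderSource(sh, 1, &src, NULL);      // upload the GLSL source text
    glCompileShader(sh);
    GLint ok = 0;
    glGetShaderiv(sh, GL_COMPILE_STATUS, &ok);
    if (!ok) { /* fetch glGetShaderInfoLog and report the error */ }
    return sh;
}

GLuint build_program(const char* vert_src, const char* frag_src)
{
    GLuint vs = compile_shader(GL_VERTEX_SHADER,   vert_src);
    GLuint fs = compile_shader(GL_FRAGMENT_SHADER, frag_src);
    GLuint prog = glCreateProgram();
    glAttachShader(prog, vs);
    glAttachShader(prog, fs);
    glLinkProgram(prog);                    // this is the id glUseProgram expects
    glDeleteShader(vs);                     // the linked program keeps its own copies
    glDeleteShader(fs);
    return prog;
}

// prog_id = build_program(vertex_source, fragment_source);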

[Notes]

A single if in such a simple shader is not a big problem. The main speed increase comes simply from passing a single quad instead of 4096x4096 points. The shader code is fully parallelized directly by the gfx HW. That is why the architecture is the way it is ... it limits some of what can be done efficiently inside a shader in comparison to standard CPU/MEM architectures.

[Edit1]

You can often avoid the if by clever math tricks like this:

// Fragment
varying vec2 pos;
void main()
    {
    vec4 c=vec4(0.0,0.0,0.0,1.0);
    float r=length(pos);            // radius = distance to (0,0)
    r*=max(1.0+floor(1.0-r),0.0);   // if (r>1.0) r=0.0;
    r*=16.0;                        // your range 16=4096/256
    c.g=r-floor(r);                 // use only the fractional part ... %256
    gl_FragColor=c;
    }
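
The same branch-free idea can also be written with the GLSL built-ins step and fract; this is only an alternative formulation of the shader above, not part of the original answer:

// Fragment (alternative formulation using the step/fract built-ins)
varying vec2 pos;
void main()
    {
    float r=length(pos);        // radius = distance to (0,0)
    r*=step(r,1.0);             // 1.0 inside the unit disc, 0.0 outside -> zeroes r when r>1.0
    float g=fract(16.0*r);      // fractional part of 16*r ... same as %256 in the CPU code
    gl_FragColor=vec4(0.0,g,0.0,1.0);
    }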



Answer 2:


To directly answer your question

No, that's not how shaders work. Shaders redefine parts of the rendering pipeline. In ancient OpenGL, where the pipeline is fixed, the GPU uses built-in shader routines to render the primitives you upload to it via the glBegin/glEnd related calls. With later OpenGL versions, however, you can write custom routines for your GPU to use. In both cases, you need to send data for the shaders to work with.

To point you toward a better approach for how to do it

First, a vertex shader feeds on vertex data. It takes vertices and operates on them one at a time, applying various transformations (by multiplying each vertex by the model-view-projection matrices). Once it has done this for every vertex, the area made by connecting the vertices gets broken down into coordinate values (among other things you can pass from the vertex shader to the fragment shader) in a process called rasterization. These coordinates, each representing the location of a pixel on the screen, are then sent over to the fragment shader, which operates on them to set the color and apply any lighting calculations.
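
As an aside, that transformation step typically amounts to a single matrix multiply in the vertex shader; the uniform and attribute names below are illustrative, not from this answer:

#version 150

// illustrative transform-only vertex shader
uniform mat4 model, view, projection;
in vec3 position;

void main()
{
  gl_Position = projection * view * model * vec4(position, 1.0);
}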

Now, since what you're trying to draw is a pattern with a formula behind it, you can color a 4096x4096 square by sending, quite literally, only 4 vertices and still produce the same result (a minimal buffer-setup sketch follows the shaders below).

Vertex Shader:

#version 150

in vec2 vertexPos;                 // quad corner in pattern space, e.g. (0,0)..(4096,4096)
out vec2 interpolatedVertexPos;

void main()
{
  interpolatedVertexPos = vertexPos;
  // map pattern space to clip space [-1,1] so the quad covers the viewport
  gl_Position = vec4(vertexPos / 2048.0 - 1.0, 0.0, 1.0);
}

Fragment shader:

#version 150

in vec2 interpolatedVertexPos;
out vec4 glFragColor;

void main()
{
  const vec2 center = vec2(2048.0, 2048.0);
  float distanceFromCenter = distance(interpolatedVertexPos, center);
  if (distanceFromCenter > 2048.0){
    discard;                                  // outside the disc
  }
  // wraps every 256 units, like the CPU version's (int)r % 256
  float Intensitytemp = mod(distanceFromCenter, 256.0) / 255.0;
  glFragColor = vec4(0.0, Intensitytemp, 0.0, 1.0);
}
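
A minimal sketch of feeding those 4 vertices to these shaders with a VAO/VBO could look like the following; the program object prog and the exact corner coordinates are assumptions matching the shaders above, not code from the original answer:

// assumes an OpenGL 3.2+ core context and an already linked program `prog`
// (vertexPos is the input of the vertex shader above)
const float quad[8] = {
       0.0f,    0.0f,
    4096.0f,    0.0f,
    4096.0f, 4096.0f,
       0.0f, 4096.0f
};

GLuint vao = 0, vbo = 0;
glGenVertexArrays(1, &vao);
glGenBuffers(1, &vbo);

glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(quad), quad, GL_STATIC_DRAW);

GLint loc = glGetAttribLocation(prog, "vertexPos");   // wire the buffer to the shader input
glEnableVertexAttribArray(loc);
glVertexAttribPointer(loc, 2, GL_FLOAT, GL_FALSE, 0, (void*)0);

// draw: only these 4 vertices ever travel to the GPU
glUseProgram(prog);
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);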

Edit: I figured you might also find this answer helpful: OpenGL big projects, VAO-s and more



Source: https://stackoverflow.com/questions/35938558/get-old-style-opengl-code-work-in-glsl
