Converting pixel co-ordinates to normalized co-ordinates at draw time in OpenGL 3.0

Question


I am drawing a triangle in OpenGL like:

            MyGLRenderer( )
            {
                fSampleVertices = ByteBuffer.allocateDirect( fSampleVerticesData.length * 4 )
                        .order ( ByteOrder.nativeOrder( ) ).asFloatBuffer( );

                fSampleVertices.put( fSampleVerticesData ).position ( 0 );

                Log.d( TAG, "MyGLRender( )" );
            }

            private FloatBuffer fSampleVertices;

            private final float[] fSampleVerticesData =
                    { .8f, .8f, 0.0f, -.8f, .8f, 0.0f, -.8f, -.8f, 0.0f };

            public void onDrawFrame( GL10 unused )
            {
                GLES30.glViewport ( 0, 0, mWidth, mHeight );

                GLES30.glClear ( GLES30.GL_COLOR_BUFFER_BIT );

                GLES30.glUseProgram ( dProgramObject1 );

                GLES30.glVertexAttribPointer ( 0, 3, GLES30.GL_FLOAT, false, 0, fSampleVertices );

                GLES30.glEnableVertexAttribArray ( 0 );

                GLES30.glDrawArrays( GLES30.GL_TRIANGLES, 0, 3 );

                //Log.d( TAG, "onDrawFrame( )" );
            }

Since I have experimented with the co-ordinates, it didn't take long to figure out that the visible area of the screen lies between -1 and 1, so the triangle above takes up 80% of the screen. I have also determined that the pixel dimensions of my GLSurfaceView are 2560 wide by 1600 high.

Now, given a triangle with these pixel-based co-ordinates (fBoardOuter):

    1112.0f
    800.0f
    0.0f
    -1280.0f
    800.0f
    0.0f
    -1280.0f
    -800.0f
    0.0f

Do I have to convert those pixel co-ordinates to something between -1 and 1 myself, or is there a way to have GL convert those co-ordinates at the time they are drawn? Since I am very new to OpenGL, I am looking for some guidance on how to do this.

My vertex shader is like:

    String sVertexShader1 =
              "#version 300 es              \n"
            + "in vec4 vPosition;           \n"
            + "void main()                  \n"
            + "{                            \n"
            + "   gl_Position = vPosition;  \n"
            + "}                            \n";

Would I be correct then in saying that a pixel-based system would be called world co-ordinates? What I am trying to do right now is just some 2D drawing for a board game.


I've discovered that Android has this function:

    orthoM(float[] m, int mOffset, float left, float right, float bottom, float top, float near, float far)

However, nothing in the documentation I've read so far explains how to use the resulting matrix, i.e. how a float[] of pixel co-ordinates can be transformed to normalized co-ordinates with that matrix in GLES30.
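
For reference, orthoM() fills a float[16] with an orthographic projection matrix that maps the box described by left/right/bottom/top onto the [-1, 1] range. Below is a minimal sketch of how it could be set up for this case; the mProjectionMatrix field and the onSurfaceChanged() placement are assumptions for illustration and not part of the original code:

    private final float[] mProjectionMatrix = new float[16];

    @Override
    public void onSurfaceChanged( GL10 unused, int width, int height )
    {
        mWidth  = width;
        mHeight = height;

        // android.opengl.Matrix.orthoM: map x in [-width/2, width/2] and
        // y in [-height/2, height/2] onto [-1, 1]; near/far barely matter for 2D.
        Matrix.orthoM( mProjectionMatrix, 0,
                -width / 2f, width / 2f,     // left, right
                -height / 2f, height / 2f,   // bottom, top
                -1f, 1f );                   // near, far
    }

The matrix does not convert the float[] of vertices by itself; it has to be applied to each vertex, either on the CPU or, as explained in the answer below, in the vertex shader.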

I've also found the documentation here:

http://developer.android.com/guide/topics/graphics/opengl.html

Based on the documentation, I have tried to create an example:

http://pastebin.com/5PTsfSdz

In the pastebin example I thought fSampleVertices would end up much smaller and at the center of the screen, but it doesn't; it still covers almost the entire screen, and fBoardOuter just gives me a black screen if I try to pass it to glDrawArrays.


Answer 1:


You will probably need to find a book or some good tutorials to get a strong grasp on some of these concepts. But since there are some specific items in your question, I'll try to explain them as well as I can within this format.

The coordinate system you discovered, where the range is [-1.0, 1.0] in both the x and y directions, is officially called Normalized Device Coordinates, often abbreviated as NDC. That is very similar to the name you came up with, so some of the OpenGL terminology is actually quite logical. :)

At least as long as you're dealing with 2D coordinates, this is the coordinate range your vertex shader needs to produce. I.e. the coordinates you assign to the built-in gl_Position variable need to be within this range to be visible in the output. Things get slightly more complicated if you're dealing with 3D coordinates and are applying perspective projections, but we'll skip over that part for now.

Now, as you already guessed, you have two main options if you want to specify your coordinates in a different coordinate system:

  1. You transform them to NDC in your code before you pass them to OpenGL (a minimal sketch of this follows the list).
  2. You have OpenGL apply transformations to your input coordinates.
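
For completeness, option 1 just means dividing each pixel co-ordinate by half the viewport size before it goes into the vertex buffer. A minimal sketch, assuming the pixel origin is at the center of the screen as in your fBoardOuter data (the method name and the viewWidth/viewHeight parameters are made up for illustration):

// Option 1 (CPU side): scale pixel co-ordinates into NDC.
float[] pixelsToNdc( float[] pixels, float viewWidth, float viewHeight )
{
    float[] ndc = new float[pixels.length];
    for ( int i = 0; i < pixels.length; i += 3 )
    {
        ndc[i]     = pixels[i]     / ( viewWidth  / 2f );  // x
        ndc[i + 1] = pixels[i + 1] / ( viewHeight / 2f );  // y
        ndc[i + 2] = pixels[i + 2];                        // z unchanged
    }
    return ndc;
}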

Option 2 is clearly the better one, since GPUs are very efficient at performing this job.

On a very simple level, this means that you modify the coordinates in your vertex shader. If you look at your very simple first vertex shader:

in vec4 vPosition;
void main()
{
    gl_Position = vPosition;
}

you get the coordinates provided by your app code in the vPosition input variable, and you assign exactly the same coordinates to the vertex shader output gl_Position.

If you want to use a different coordinate system, you process the input coordinates in the vertex shader code, and assign those processed coordinates to the output instead.

Modern versions of OpenGL don't really have a name for those coordinate systems anymore. There used to be "model coordinates" and "world coordinates" when some of this stuff was still hardwired into a fixed pipeline. Now that this is done with programmable shader code, those concepts are not relevant anymore from the OpenGL point of view. All it cares about are the coordinates that come out of the vertex shader. Everything that happens before that is your own business.

The canonical way of applying linear transformations, which includes the translations and scaling you need for your intended use, is by multiplying the coordinates with a transformation matrix. You already discovered the android.opengl.Matrix package that contains some utility functions for building transformation matrices if you don't want to write the (simple) code yourself.
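
For the 2D case above the transformation is just a diagonal scale, so writing that "simple code" yourself amounts to something like the following sketch, where width and height stand for your surface's pixel dimensions. The 16 floats are in column-major order, as OpenGL expects, and the result matches what Matrix.orthoM() produces for symmetric left/right and bottom/top with near/far of -1/1:

// Maps x in [-width/2, width/2] and y in [-height/2, height/2] to [-1, 1].
float[] mat = {
    2f / width, 0f,          0f,   0f,   // column 0
    0f,         2f / height, 0f,   0f,   // column 1
    0f,         0f,          -1f,  0f,   // column 2 (z flip, as orthoM does)
    0f,         0f,          0f,   1f    // column 3
};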

Once you have a transformation matrix, you pass it into the vertex shader as a uniform variable and apply it in your shader code. For example:

in vec4 vPosition;
uniform mat4 TransformMat;
void main()
{
    gl_Position = TransformMat * vPosition;
}

To set the value of this matrix, you need to get the location of the uniform variable once after linking the shader, where prog is your shader program:

int transformLoc = GLES20.glGetUniformLocation(prog, "TransformMat");

Then, at least once, and every time you want to change the matrix, you call:

GLES20.glUniformMatrix4fv(transformLoc, 1, false, mat, 0);

where mat is the matrix you either built yourself, or got from one of the utility functions in android.opengl.Matrix. Note that this call needs to be after you make the program current with glUseProgram().
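
Putting the pieces together, the draw code could look roughly like this. It is a sketch only: mProjectionMatrix stands for the matrix you built (e.g. with Matrix.orthoM()), and fBoardVertices for a FloatBuffer filled with the pixel-based fBoardOuter data; neither name is from the original code:

public void onDrawFrame( GL10 unused )
{
    GLES30.glViewport( 0, 0, mWidth, mHeight );
    GLES30.glClear( GLES30.GL_COLOR_BUFFER_BIT );

    GLES30.glUseProgram( dProgramObject1 );

    // In real code, cache this location once after linking instead of querying it every frame.
    int transformLoc = GLES30.glGetUniformLocation( dProgramObject1, "TransformMat" );
    GLES30.glUniformMatrix4fv( transformLoc, 1, false, mProjectionMatrix, 0 );

    // The vertex data can stay in pixel units; the shader maps it to NDC.
    GLES30.glVertexAttribPointer( 0, 3, GLES30.GL_FLOAT, false, 0, fBoardVertices );
    GLES30.glEnableVertexAttribArray( 0 );
    GLES30.glDrawArrays( GLES30.GL_TRIANGLES, 0, 3 );
}

With that in place, the pixel-based co-ordinates should end up where you expect without converting them by hand.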



Source: https://stackoverflow.com/questions/31194750/converting-pixel-co-ordinates-to-normalized-co-ordinates-at-draw-time-in-opengl
