Is OpenGL Development GPU Dependent?

执念已碎 · 2020-12-09 23:12

I am developing an Android application with OpenGL ES 2.0. In this application I draw multiple lines and circles via touch events on a GLSurfaceView.

As OpenGL depends on the GPU, should I take the GPU into account while coding?

5 Answers
  • 2020-12-09 23:42

    Cross-posted from my answer to a similar question, "Why my opengl output differs for various devices?":

    Should we take the GPU into account while coding? No way, the OpenGL API is a layer between your application and the hardware.

    This is largely correct for desktop graphics, as all desktop GPUs are immediate-mode renderers; however, this is NOT the case in mobile graphics.

    The Mali GPUs use tile-based rendering. For this type of rendering, the framebuffer is divided into tiles of 16 by 16 pixels. The Polygon List Builder (PLB) organizes input data from the application into polygon lists; there is a polygon list for each tile. When a primitive covers part of a tile, an entry, called a polygon list command, is added to the polygon list for that tile. The pixel processor takes the polygon list for one tile and computes values for all pixels in that tile before starting work on the next tile. Because this tile-based approach uses a fast, on-chip tile buffer, the GPU only writes the tile buffer contents to the framebuffer in main memory at the end of each tile. Non-tile-based, immediate-mode renderers generally require many more framebuffer accesses. The tile-based method therefore consumes less memory bandwidth, and supports operations such as depth testing, blending and anti-aliasing efficiently.

    Another difference is the treatment of rendered buffers. Immediate renderers will "save" the content of your buffer, effectively allowing you to draw only the differences in the rendered scene on top of what previously existed. This IS available on Mali; however, it is not enabled by default, as it can cause undesired effects if used incorrectly.

    There is an example in the Mali GLES2 SDK showing how to use "EGL Preserve" correctly.

    The reason the GeForce ULP-based Nexus 7 works as intended is that, as an immediate-mode renderer, it defaults to preserving the buffers, whereas Mali does not.

    From the Khronos EGL specification:

    EGL_SWAP_BEHAVIOR

    Specifies the effect on the color buffer of posting a surface with eglSwapBuffers. A value of EGL_BUFFER_PRESERVED indicates that color buffer contents are unaffected, while EGL_BUFFER_DESTROYED indicates that color buffer contents may be destroyed or changed by the operation.

    The initial value of EGL_SWAP_BEHAVIOR is chosen by the implementation.

    The default value for EGL_SWAP_BEHAVIOR on the Mali platform is EGL_BUFFER_DESTROYED. This is due to the performance hit associated with having to fetch the previous buffer from memory before rendering the new frame, and with having to store it again at the end, as well as the bandwidth this consumes (which is also incredibly bad for battery life on mobile devices). I am unable to comment with certainty on the default behavior of the Tegra SoCs; however, it is apparent to me that their default is EGL_BUFFER_PRESERVED.
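
    If your app relies on the previous frame surviving eglSwapBuffers, you can ask for preserved behaviour explicitly instead of depending on the platform default. A rough sketch using Android's EGL14 bindings (the helper name requestPreserved is made up for illustration; display and surface are assumed to be your already-created EGLDisplay and window EGLSurface, and preservation only works if the chosen EGLConfig supports EGL_SWAP_BEHAVIOR_PRESERVED_BIT):

      import android.opengl.EGL14;
      import android.opengl.EGLDisplay;
      import android.opengl.EGLSurface;

      public final class SwapBehavior {
          // Ask the driver to keep the color buffer across eglSwapBuffers.
          // Returns true only if the implementation actually accepted it.
          public static boolean requestPreserved(EGLDisplay display, EGLSurface surface) {
              boolean set = EGL14.eglSurfaceAttrib(display, surface,
                      EGL14.EGL_SWAP_BEHAVIOR, EGL14.EGL_BUFFER_PRESERVED);

              // Read back what the implementation actually chose.
              int[] value = new int[1];
              EGL14.eglQuerySurface(display, surface, EGL14.EGL_SWAP_BEHAVIOR, value, 0);
              return set && value[0] == EGL14.EGL_BUFFER_PRESERVED;
          }
      }

    If this returns false (the likely outcome on Mali unless the config was chosen with EGL_SWAP_BEHAVIOR_PRESERVED_BIT in EGL_SURFACE_TYPE), the portable fallback is to redraw the whole scene every frame, as the other answers describe.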

    To clarify Mali's position with regard to the Khronos GLES specifications: Mali is fully compliant.

  • 2020-12-09 23:44

    OpenGL is just a standard. The actual implementation of the API is up to the graphics card manufacturer. So yes, OpenGL development can be GPU dependent sometimes. However, all implementations should produce the same result (what happens behind the scenes can be very different). If your code gives a different result on different GPUs, there is probably a version difference between the OpenGL implementations.

    You can use these functions to get the supported OpenGL version:

    GLint major = 0, minor = 0;
    glGetIntegerv(GL_MAJOR_VERSION, &major);          // version 3.0+ only
    glGetIntegerv(GL_MINOR_VERSION, &minor);          // version 3.0+ only
    const GLubyte *version = glGetString(GL_VERSION); // all versions
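
    On Android with the Java bindings, the same information can be read from the GL thread with GLES20; GL_RENDERER and GL_VENDOR also tell you which GPU and driver you are actually running on. A small sketch (the wrapper name GlInfo is made up for illustration):

      import android.opengl.GLES20;
      import android.util.Log;

      public final class GlInfo {
          // Must be called on the GL thread, e.g. from onSurfaceCreated or onDrawFrame.
          public static void log() {
              String version  = GLES20.glGetString(GLES20.GL_VERSION);   // e.g. "OpenGL ES 2.0 ..."
              String renderer = GLES20.glGetString(GLES20.GL_RENDERER);  // GPU, e.g. "Mali-400 MP"
              String vendor   = GLES20.glGetString(GLES20.GL_VENDOR);
              Log.i("GlInfo", version + " / " + renderer + " / " + vendor);
          }
      }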
    
  • 2020-12-09 23:54
    1. Why don't you provide a working example, so people can actually help?

    2. From your code I can't see where you create your line. Something like:

      @Override public void onSurfaceCreated(GL10 gl, EGLConfig config){
          ...
          mLine = new Lines();
          ...
      }
      
    3. As others already mentioned, in onDrawFrame always clear the buffer:

      public void onDrawFrame(GL10 gl)
      {
          // Clear the color buffer (GL_COLOR_BUFFER_BIT)
          GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
      
    4. Set the camera:

      // Set the camera position (View matrix)
      Matrix.setLookAtM(mViewMatrix, 0, 0, 0, 3, 0f, 0f, 0f, 0f, 1.0f, 0.0f);
      //
      // Calculate the projection and view transformation
      Matrix.multiplyMM(mMVPMatrix, 0, mProjMatrix, 0, mViewMatrix, 0);
      
    5. Draw:

      mLine.draw(dx, dy, ux, uy);
      
  • 2020-12-09 23:55

    Okay, here it goes again: [1]

    OpenGL is not a scene graph. OpenGL does not maintain a scene, know about objects, or keep track of geometry. OpenGL is a drawing API. You give it a canvas (in the form of a window or a PBuffer) and order it to draw points, lines or triangles, and OpenGL does exactly that. Once a primitive (= point, line, triangle) has been drawn, OpenGL has no recollection of it whatsoever. If something changes, you have to redraw the whole thing.

    The proper steps to redraw a scene are:

    1. Disable the stencil test, so that the following step operates on the whole window.

    2. Clear the framebuffer using glClear(bits), where bits is a bitmask specifying which parts of the canvas to clear. When rendering a new frame you want to clear everything so bits = GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT;

    3. Set the viewport and build an appropriate projection matrix.

    4. For each object in the scene, load the right modelview matrix, set the uniforms, select the vertex arrays and make the draw call.

    5. Finish the rendering by flushing the pipeline: if using a single-buffered window call glFinish(), if using a double-buffered window call SwapBuffers. In the case of higher-level frameworks this may be performed by the framework. (A minimal sketch of these steps follows this list.)
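
    Put together in Android/GLES20 terms, an onDrawFrame following these steps might look roughly like this. This is a sketch only: mWidth, mHeight, the matrices and the mLines list stand for whatever state your own renderer keeps, mProjMatrix is assumed to have been built in onSurfaceChanged, and Line.draw() is assumed to issue the actual draw call.

      @Override
      public void onDrawFrame(GL10 gl) {
          // Steps 1-2: make sure the clear hits the whole window, then clear everything.
          GLES20.glDisable(GLES20.GL_STENCIL_TEST);
          GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT
                  | GLES20.GL_DEPTH_BUFFER_BIT
                  | GLES20.GL_STENCIL_BUFFER_BIT);

          // Step 3: viewport and view/projection (the viewport is often set in onSurfaceChanged).
          GLES20.glViewport(0, 0, mWidth, mHeight);
          Matrix.setLookAtM(mViewMatrix, 0, 0f, 0f, 3f, 0f, 0f, 0f, 0f, 1f, 0f);
          Matrix.multiplyMM(mMVPMatrix, 0, mProjMatrix, 0, mViewMatrix, 0);

          // Step 4: redraw every primitive recorded so far.
          for (Line line : mLines) {
              line.draw(mMVPMatrix);
          }

          // Step 5: GLSurfaceView calls eglSwapBuffers for you when this method returns.
      }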

    Important: Once the drawing has been finished on a double-buffered window, you must not continue to send drawing operations, because after the buffer swap the contents of the back buffer you're drawing to are undefined. Hence you must start the drawing anew, beginning with clearing the framebuffer (steps 1 and 2).

    What your code misses are exactly those two steps. Also, I have the impression that you're performing OpenGL drawing calls in direct reaction to input events, possibly in the input event handlers themselves. Don't do this! Instead, use the input events to add to a list of primitives (lines in your case) to draw, then send a redraw event, which makes the framework call the drawing function. In the drawing function, iterate over that list to draw the desired lines.
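
    For example, with a GLSurfaceView in RENDERMODE_WHEN_DIRTY the touch handler only records the line and requests a redraw; all GL calls stay in onDrawFrame. A sketch (mRenderer, mGLSurfaceView, mDownX/mDownY and the Line class are assumed names for illustration, not from the question's code; CopyOnWriteArrayList comes from java.util.concurrent and avoids explicit locking between the UI thread and the GL thread):

      // In the renderer: a thread-safe list shared between the UI thread and the GL thread.
      private final List<Line> mLines = new CopyOnWriteArrayList<>();

      public void addLine(Line line) {
          mLines.add(line);   // called from the UI thread
      }

      // In the Activity or custom view that owns the GLSurfaceView:
      @Override
      public boolean onTouchEvent(MotionEvent event) {
          switch (event.getAction()) {
              case MotionEvent.ACTION_DOWN:
                  mDownX = event.getX();
                  mDownY = event.getY();
                  return true;
              case MotionEvent.ACTION_UP:
                  // Only record what to draw; no GL calls here.
                  mRenderer.addLine(new Line(mDownX, mDownY, event.getX(), event.getY()));
                  mGLSurfaceView.requestRender();   // triggers onDrawFrame on the GL thread
                  return true;
          }
          return super.onTouchEvent(event);
      }

    onDrawFrame then iterates over mLines and redraws everything, exactly as in the sketch above.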

    Redrawing the whole scene is canonical in OpenGL!


    [1] (geesh, I'm getting tired of having to write this every 3rd question or so…)

  • 2020-12-09 23:57

    Taking a punt here, but are you ever actually clearing the screen? The kinds of behaviour you are seeing suggest that you are not, and that in different scenarios you are seeing different errors: uninitialised memory, reuse of an old buffer, an implicit clear, etc.

    GL requires you to be specific about what you want, so you need to explicitly clear.
