opengl-es

How to load a texture onto a circle with OpenGL ES

Asked by 五迷三道 on 2020-01-04 02:45:11
Question: I am having trouble loading a texture onto a circle. The circle is built as a triangle fan, and the texture comes out distorted. Original image: The result: My code:

public class MyOpenGLCircle {
    private int points = 360;
    private float vertices[] = { 0.0f, 0.0f, 0.0f };
    private FloatBuffer vertBuff, textureBuffer;
    float texData[] = null;
    float theta = 0;
    int[] textures = new int[1];
    int R = 1;
    float textCoordArray[] = {
        -R, (float) (R * (Math.sqrt(2) + 1)),
        -R, -R,
        (float) (R * (Math.sqrt(2) + 1)), -R
    };
    public
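A likely cause (a guess, since the snippet is cut off): textCoordArray above holds only three coordinate pairs for the entire fan, so the rim vertices do not each get a matching texture coordinate. A common fix is to generate one texture coordinate per fan vertex, mapping the unit circle into the [0,1]x[0,1] image square. A minimal C++ sketch of that generation step (buildCircleFan and FanData are illustrative names; the question's code is Java, but the math carries over directly):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

const float kPi = 3.14159265358979f;

// One position and one texture coordinate per fan vertex: the center
// first, then points + 1 rim vertices (the first rim vertex repeated
// to close the fan). Texture coords map the circle into [0,1] x [0,1].
struct FanData {
    std::vector<float> vertices;   // x, y per vertex
    std::vector<float> texCoords;  // s, t per vertex
};

FanData buildCircleFan(int points, float radius) {
    FanData fan;
    fan.vertices.insert(fan.vertices.end(), {0.0f, 0.0f});
    fan.texCoords.insert(fan.texCoords.end(), {0.5f, 0.5f});  // image center
    for (int i = 0; i <= points; ++i) {
        float theta = 2.0f * kPi * float(i) / float(points);
        fan.vertices.insert(fan.vertices.end(),
                            {radius * std::cos(theta),
                             radius * std::sin(theta)});
        // Same angle, rescaled from [-radius, radius] into [0, 1].
        fan.texCoords.insert(fan.texCoords.end(),
                             {0.5f + 0.5f * std::cos(theta),
                              0.5f + 0.5f * std::sin(theta)});
    }
    return fan;
}
```

Both arrays are then uploaded as vertex and texture-coordinate pointers for the GL_TRIANGLE_FAN draw call, so each rim vertex samples the matching point on the image.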

Why can't I use OpenGL ES 3.0 in Qt?

Asked by 回眸只為那壹抹淺笑 on 2020-01-04 02:19:05
Question: I set a QSurfaceFormat on my window, and this surface format has "3.0" set as its GL version number. The code:

static QSurfaceFormat createSurfaceFormat() {
    QSurfaceFormat format;
    format.setSamples(4);
    format.setDepthBufferSize(24);
    format.setStencilBufferSize(8);
    format.setVersion(3, 0);
    return format;
}

int main(int argc, char *argv[]) {
    // ...
    QQmlApplicationEngine engine;
    engine.load(QUrl(QStringLiteral("qrc:/main.qml")));
    QWindow* window = (QWindow*) engine.rootObjects().first();
    window-
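The question is cut off, but a frequent cause of this symptom is timing: setting the format on a window after the QML engine has loaded is often too late, because the context has already been created. Registering the format as the process-wide default before QGuiApplication is constructed avoids that. A hedged sketch under that assumption (QSurfaceFormat::setDefaultFormat and setRenderableType are real Qt APIs; whether timing is the actual problem in this setup is an assumption):

```cpp
#include <QGuiApplication>
#include <QQmlApplicationEngine>
#include <QSurfaceFormat>
#include <QUrl>

int main(int argc, char *argv[]) {
    // Request the format globally *before* any window or context exists;
    // otherwise setVersion(3, 0) may silently have no effect.
    QSurfaceFormat format;
    format.setRenderableType(QSurfaceFormat::OpenGLES);  // ask for ES explicitly
    format.setVersion(3, 0);
    format.setSamples(4);
    format.setDepthBufferSize(24);
    format.setStencilBufferSize(8);
    QSurfaceFormat::setDefaultFormat(format);

    QGuiApplication app(argc, argv);
    QQmlApplicationEngine engine;
    engine.load(QUrl(QStringLiteral("qrc:/main.qml")));
    return app.exec();
}
```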

OpenGL|ES on a desktop PC

Asked by 扶醉桌前 on 2020-01-03 10:47:21
Question: I'm working on an OpenGL project that I would like to port to embedded systems that support OpenGL|ES. Since OpenGL|ES is a subset of OpenGL, how hard would it be to compile my OpenGL application on an embedded system (assuming my OpenGL code stays within the limits of OpenGL|ES)? What I'm wondering is: is it possible to directly wrap my OpenGL calls with macros to make them compatible with the OpenGL|ES API call names? Are there any calls specific to OpenGL|ES that I would have to implement?
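For a number of entry points a thin macro shim does work: desktop GL exposes GLdouble variants (glOrtho, glFrustum, glClearDepth) where ES 1.x has GLfloat variants (glOrthof, glFrustumf, glClearDepthf). Immediate-mode calls (glBegin/glEnd) and display lists, however, have no ES equivalent at all and must be ported to vertex arrays. A self-contained sketch of the macro approach, with a stub standing in for the real GLES function (in a real build you would include <GLES/gl.h> instead):

```cpp
#include <cassert>

// Stub standing in for the real glClearDepthf from <GLES/gl.h>,
// so this sketch compiles on its own; it just records the last value.
static float g_lastDepth = -1.0f;
void glClearDepthf(float depth) { g_lastDepth = depth; }

// Desktop code calls glClearDepth(GLdouble); on an ES build, route it
// to the single-precision ES variant. The same pattern covers
// glOrtho -> glOrthof and glFrustum -> glFrustumf. Calls with no ES
// counterpart (glBegin/glEnd, display lists) cannot be wrapped this
// way and need a real rewrite to vertex arrays.
#define glClearDepth(d) glClearDepthf(static_cast<float>(d))

float lastClearDepth() { return g_lastDepth; }
```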

Which parts of UIKit, Core Graphics, Core Animation, OpenGL are allowed on non main-thread?

Asked by 爷,独闯天下 on 2020-01-03 09:28:10
Question: In my OpenGL ES 1.1 based app, I'm using CALayers as a source for OpenGL textures. Those CALayers are composed of CGImages and text rendered through Core Graphics. Another OpenGL texture source is a screenshot of a UIView taken using -[CALayer renderInContext:] and UIGraphicsGetImageFromCurrentImageContext. Currently, I'm running completely on the main thread. The latter case in particular is pretty bad because it halts the OpenGL rendering for the whole time it takes to create the UIView and

OpenGL - Should I store Attribute/Uniform locations?

Asked by 谁说胖子不能爱 on 2020-01-03 09:03:32
Question: Are glGetUniformLocation and glGetAttribLocation time-consuming? Which way is better: calling glGetAttribLocation or glGetUniformLocation every time I need it, or storing the locations in variables and using them when needed?

Answer 1: Whether on Android or iPhone, for an OpenGL surface you will have methods like onSurfaceCreated and onSurfaceChanged; get into the habit of fetching uniforms and attributes in these two methods. The only way you can make rendering faster (which will soon become your
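The cached-lookup pattern the answer recommends can be sketched as follows (illustrative C++, with a counting stub in place of the real glGetUniformLocation so the sketch is self-contained; the point is that the driver query runs once per uniform name, not once per frame):

```cpp
#include <cassert>
#include <string>
#include <unordered_map>

// Stub standing in for the real glGetUniformLocation; it counts how
// many times the "driver" is queried and returns a deterministic value.
static int g_lookupCount = 0;
int fakeGetUniformLocation(unsigned program, const std::string& name) {
    ++g_lookupCount;
    return static_cast<int>(name.size());
}

// Per-program cache: the first request for a name performs the GL
// query; every later request is a hash-map hit with no driver call.
class UniformCache {
public:
    explicit UniformCache(unsigned program) : program_(program) {}
    int location(const std::string& name) {
        auto it = cache_.find(name);
        if (it != cache_.end()) return it->second;  // cached, no GL call
        int loc = fakeGetUniformLocation(program_, name);
        cache_.emplace(name, loc);
        return loc;
    }
private:
    unsigned program_;
    std::unordered_map<std::string, int> cache_;
};

int lookupCount() { return g_lookupCount; }
```

Populating such a cache once in onSurfaceCreated (or right after linking) and reading from it in the draw loop gives the stored-variable behavior the answer describes.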

What's the correct way to draw a distorted plane in OpenGL?

Asked by 拟墨画扇 on 2020-01-03 05:53:53
Question: I've tried to distort a plane using GL_QUADS, GL_TRIANGLES and GL_POLYGON:

glBegin(GL_QUADS);
for (int i = 0; i < squareIn.size(); i++) {
    glNormal3f(0, 0, 1);
    glTexCoord2f(squareIn[i]->v1->x, squareIn[i]->v1->y);
    glVertex2f(squareOut[i]->v1->x, squareOut[i]->v1->y);
    glTexCoord2f(squareIn[i]->v2->x, squareIn[i]->v2->y);
    glVertex2f(squareOut[i]->v2->x, squareOut[i]->v2->y);
    glTexCoord2f(squareIn[i]->v3->x, squareIn[i]->v3->y);
    glVertex2f(squareOut[i]->v3->x, squareOut[i]->v3->y);
    glTexCoord2f(squareIn
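One common remedy for visible seams when texturing a distorted quad is to subdivide it into a finer grid, interpolating positions and texture coordinates through the same (u, v) parameters, so the affine mapping across each small triangle closely approximates the intended distortion. A sketch of the interpolation step (Vec2, bilerp, and subdivideQuad are illustrative names, not the question's types):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec2 { float x, y; };

// Bilinear interpolation inside a quad: corners c00 (u=0,v=0),
// c10 (u=1,v=0), c01 (u=0,v=1), c11 (u=1,v=1).
Vec2 bilerp(Vec2 c00, Vec2 c10, Vec2 c01, Vec2 c11, float u, float v) {
    Vec2 bottom = {c00.x + (c10.x - c00.x) * u, c00.y + (c10.y - c00.y) * u};
    Vec2 top    = {c01.x + (c11.x - c01.x) * u, c01.y + (c11.y - c01.y) * u};
    return {bottom.x + (top.x - bottom.x) * v, bottom.y + (top.y - bottom.y) * v};
}

// Build an (n+1) x (n+1) grid of points over a quad. Run this once on
// the distorted corners (for glVertex) and once on the undistorted
// corners (for glTexCoord), using the same (u, v) per grid point, so
// texture and geometry stay in step across the surface.
std::vector<Vec2> subdivideQuad(Vec2 c00, Vec2 c10, Vec2 c01, Vec2 c11, int n) {
    std::vector<Vec2> grid;
    for (int j = 0; j <= n; ++j)
        for (int i = 0; i <= n; ++i)
            grid.push_back(bilerp(c00, c10, c01, c11,
                                  float(i) / float(n), float(j) / float(n)));
    return grid;
}
```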

Replace exactly one pixel in an image and put it in another image via Swift

Asked by 扶醉桌前 on 2020-01-03 04:53:19
Question: Simply put, if I have an image I and another image J, I want to take the RGB value at a position I(t,s) and assign that pixel to J(t,s). How might I do this in Core Image, or using a custom kernel? This seems like it might not be an easy thing to do, considering the way Core Image works. However, I was wondering whether there might be a way to extract the value of the pixel at (t,s), create an image K as large as J with just that pixel, and then overlay J with K only at that one point. Just
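Outside Core Image, the underlying raster operation is just a four-byte copy; on iOS one common route is to draw J into a CGBitmapContext, overwrite the bytes for that one pixel with the value read from I, and build a new image from the buffer. A generic C++ sketch of the byte-level step (the Image struct and copyPixel are illustrative, not a Core Image API):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Row-major RGBA8 image: width * height * 4 bytes.
struct Image {
    int width, height;
    std::vector<std::uint8_t> rgba;
};

// Copy the RGBA value at (x, y) in src into dst at the same position.
// Both images are assumed to have identical dimensions.
void copyPixel(const Image& src, Image& dst, int x, int y) {
    std::size_t offset =
        (std::size_t(y) * std::size_t(src.width) + std::size_t(x)) * 4;
    for (int c = 0; c < 4; ++c)
        dst.rgba[offset + c] = src.rgba[offset + c];
}
```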

How to remove unused resources from an OpenGL program

Asked by 二次信任 on 2020-01-03 03:35:13
Question: I am trying to create something like an effect system for OpenGL, and I want to be able to define a number of shaders in the same file. But I discovered the following problem. Say I have two shaders, A and B. Shader A uses texA and shader B uses texB. Then even though shader A does not use texB and shader B does not use texA, both textures are enumerated in both programs (I am using separate programs, so every shader corresponds to one program). One consequence is that I cannot have many

Is there any alternative for GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT)?

Asked by 谁说我不能喝 on 2020-01-03 03:27:27
Question: I get a frame rate of about 30 fps in my application. I know that GLES20.glClear() is used to clear the screen for every draw. If I comment it out, I get about 60 fps, but the output is not as expected. I have content to redraw over the whole screen in every frame. Is there any alternative that lets me redraw the whole screen without using GLES20.glClear()? Please let me know if there is any way to work around GLES20.glClear() to improve performance.

Answer 1: If you overwrite