fbo

glGenFramebuffers() access violation when using GLFW + GLEW

╄→гoц情女王★ submitted on 2021-02-19 03:39:07
Question: I am getting the error "Access violation executing location 0x00000000." when I use GLFW + GLEW on Windows 7. I also have my own from-scratch implementation that creates a window, initializes an OpenGL context, initializes GLEW, etc., and everything works fine. So my video card supports framebuffer objects and the drivers are fine; the problem only happens when I try to use GLFW. Any suggestion? The code: void start() { if(
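The question is cut off above, but an access violation at address 0x00000000 inside glGenFramebuffers() almost always means the function pointer was never loaded, typically because glewInit() ran before the GLFW context was made current, or glewExperimental was left unset. A minimal sketch of an initialization order that avoids this, assuming GLFW 3 and GLEW; the window size and title are placeholders:

    // Sketch: make the context current first, then call glewInit(), and only
    // then use any FBO entry points such as glGenFramebuffers().
    #include <GL/glew.h>      // GLEW must be included before the GLFW header
    #include <GLFW/glfw3.h>

    int main() {
        if (!glfwInit()) return -1;

        GLFWwindow* window = glfwCreateWindow(800, 600, "FBO test", nullptr, nullptr);
        if (!window) { glfwTerminate(); return -1; }

        glfwMakeContextCurrent(window);   // a context must be current before glewInit()
        glewExperimental = GL_TRUE;       // needed on some drivers for newer entry points
        if (glewInit() != GLEW_OK) { glfwTerminate(); return -1; }

        GLuint fbo = 0;
        glGenFramebuffers(1, &fbo);       // safe now: the function pointer has been resolved

        glDeleteFramebuffers(1, &fbo);
        glfwDestroyWindow(window);
        glfwTerminate();
        return 0;
    }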

Can't use glGetTexImage to read a depth map texture

霸气de小男生 submitted on 2021-02-11 15:14:43
Question: I've successfully created a framebuffer object and created a texture containing the depth map of my scene. I know I'm doing this correctly, since applying the texture to a rectangle looks just fine. Now I'd like to read the values from that texture in my main program, not inside the shaders. Looking into it, I found two functions: glGetTexImage() and glReadPixels(). Since I'm quite new to OpenGL, the documentation about these brings more questions than
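The excerpt is truncated, but both functions it names can read a depth texture back to the CPU. A minimal sketch, assuming the depth texture is a GL_TEXTURE_2D named depthTex attached to an FBO named fbo, with dimensions width x height (all names are placeholders, not from the question):

    // Option 1: read the texture object directly with glGetTexImage
    // (desktop OpenGL only; not available in OpenGL ES).
    std::vector<float> depth(width * height);   // assumes #include <vector>
    glBindTexture(GL_TEXTURE_2D, depthTex);
    glGetTexImage(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, GL_FLOAT, depth.data());

    // Option 2: read from the framebuffer the texture is attached to.
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, depth.data());
    glBindFramebuffer(GL_FRAMEBUFFER, 0);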

OpenGL Driver Monitor says textures are rapidly increasing. How to find the leak?

*爱你&永不变心* submitted on 2020-01-24 09:32:35
Question: When I run my app, OpenGL Driver Monitor says the Textures count is rapidly increasing — within 30 seconds the Textures count increases by about 45,000. But I haven't been able to find the leak. I've instrumented every glGen*() call to print out every GL object name it returns — but they're all less than 50, so apparently GL objects created by glGen*() aren't being leaked. It's a large, complex app that renders multiple shaders to multiple FBOs on shared contexts on separate threads, so
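The question is cut off, but since it already instruments glGen*() calls, one way to narrow the search is to track deletions as well, so the number of live texture names is visible at any moment. A sketch of that idea, assuming a C++ code base and the macOS GL headers (OpenGL Driver Monitor is an Apple tool); the wrapper names are made up:

    #include <OpenGL/gl3.h>
    #include <set>
    #include <mutex>
    #include <cstdio>

    static std::set<GLuint> g_liveTextures;
    static std::mutex g_texMutex;            // the app renders from several threads

    // Call these instead of glGenTextures / glDeleteTextures.
    void trackGenTextures(GLsizei n, GLuint* ids) {
        glGenTextures(n, ids);
        std::lock_guard<std::mutex> lock(g_texMutex);
        for (GLsizei i = 0; i < n; ++i) g_liveTextures.insert(ids[i]);
        std::printf("live textures: %zu\n", g_liveTextures.size());
    }

    void trackDeleteTextures(GLsizei n, const GLuint* ids) {
        std::lock_guard<std::mutex> lock(g_texMutex);
        for (GLsizei i = 0; i < n; ++i) g_liveTextures.erase(ids[i]);
        glDeleteTextures(n, ids);
        std::printf("live textures: %zu\n", g_liveTextures.size());
    }

If the live count stays flat while Driver Monitor keeps climbing, the objects are being created outside glGenTextures(), for example by another framework or by driver-side allocations on one of the shared contexts.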

Can an OpenGLES 2.0 framebuffer be bound to a texture and a renderbuffer at the same time?

那年仲夏 submitted on 2020-01-13 11:22:49
Question: Brad Larson provides some great code here and here for 'rendering your scene into a texture-backed framebuffer', but it's not clear whether this is the same framebuffer that I use for the rest of the drawing. If you attach a renderbuffer to a framebuffer, can the framebuffer also render into a texture with the same call? Answer 1: Sounds like you might be a bit confused about FBO usage. If you need it, this should get you started: Apple Developer - Drawing offscreen. This could help too.
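The answer above points to the Apple docs; for completeness, a minimal OpenGL ES 2.0 sketch of one FBO that uses a texture as its color attachment and a renderbuffer as its depth attachment at the same time. width and height are assumed; this is illustrative, not the linked sample code:

    GLuint fbo, colorTex, depthRb;

    // Texture that the color output will be rendered into.
    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    // Renderbuffer used only for depth testing while rendering offscreen.
    glGenRenderbuffers(1, &depthRb);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, width, height);

    // One FBO, two attachments: texture for color, renderbuffer for depth.
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTex, 0);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, depthRb);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        // handle the error
    }

This offscreen FBO is separate from whatever framebuffer the rest of the drawing targets; you switch between them with glBindFramebuffer().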

Ambiguous results with Frame Buffers in libgdx

廉价感情. submitted on 2020-01-04 09:25:30
Question: I am getting the following weird results with the FrameBuffer class in libgdx. Here is the code that is producing this result: // This is the rendering code @Override public void render(float delta) { Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT); stage.act(); stage.draw(); fbo.begin(); batch.begin(); batch.draw(heart, 0, 0); batch.end(); fbo.end(); test = new Image(fbo.getColorBufferTexture()); test.setPosition(256, 256); stage.addActor(test); } //This is the

Reading the pixels values from the Frame Buffer Object (FBO) using Pixel Buffer Object (PBO)

ぃ、小莉子 submitted on 2019-12-29 04:42:07
Question: Can I use a Pixel Buffer Object (PBO) to directly read the pixel values (i.e. using glReadPixels) from the FBO (i.e. while the FBO is still attached)? If yes, what are the advantages and disadvantages of using a PBO with an FBO? What is the problem with the following code? { //DATA_SIZE = WIDTH * HEIGHT * 3 (BECAUSE I AM USING 3 CHANNELS ONLY) // FBO and PBO status is good . . glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fboId); //Draw the objects Following glReadPixels works fine glReadPixels(0, 0, screenWidth
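The snippet above is truncated, but the general pattern works: while a buffer is bound to GL_PIXEL_PACK_BUFFER, glReadPixels() writes into that buffer (its last argument becomes a byte offset) and can return without stalling, and the data is mapped later. A sketch using the core (non-EXT) names, reusing the question's fboId and DATA_SIZE and assuming width and height:

    GLuint pbo;
    glGenBuffers(1, &pbo);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_PACK_BUFFER, DATA_SIZE, NULL, GL_STREAM_READ);

    glPixelStorei(GL_PACK_ALIGNMENT, 1);        // 3-channel rows are not 4-byte aligned
    glBindFramebuffer(GL_FRAMEBUFFER, fboId);   // the FBO can stay bound
    // ... draw the objects ...
    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, 0);  // 0 = offset into the PBO

    // Later (ideally a frame later, to keep the transfer asynchronous):
    GLubyte* ptr = (GLubyte*)glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
    if (ptr) {
        // use the DATA_SIZE bytes of pixel data
        glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);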

Render OpenGL scene to texture using FBO in fixed function pipeline drawing

懵懂的女人 submitted on 2019-12-28 03:14:08
Question: The Problem: I work on the open source game torcs (http://torcs.sourceforge.net/). The game's graphics pipeline still uses the fixed-function pipeline (FFP) of OpenGL 1.3. I am trying to render the game scenes to textures in an FBO (framebuffer object) in order to do some post-processing on the rendered textures. I use OpenGL 3.3 on my machine. Currently I have set up the FBO with textures attached at GL_COLOR_ATTACHMENT0&1 (2 in order to have two consecutive frames readable in the shader) and an
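The excerpt ends mid-sentence, but the usual shape of this setup keeps the fixed-function drawing untouched and only changes what it renders into. A sketch, assuming fbo already has textures attached at GL_COLOR_ATTACHMENT0 and GL_COLOR_ATTACHMENT1 as described, and that frame, texWidth and texHeight exist; one way to keep two consecutive frames readable is to alternate the draw buffer:

    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    GLenum target = (frame & 1) ? GL_COLOR_ATTACHMENT1 : GL_COLOR_ATTACHMENT0;
    glDrawBuffers(1, &target);              // write this frame into one attachment
    glViewport(0, 0, texWidth, texHeight);  // must match the attachment size

    // ... existing fixed-function (OpenGL 1.3 style) scene rendering goes here ...

    glBindFramebuffer(GL_FRAMEBUFFER, 0);   // back to the default framebuffer
    // Post-processing pass: draw a fullscreen quad that samples both attached
    // textures (the current frame and the previous one).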

OpenGL | Render Vertex at UV coordinate

坚强是说给别人听的谎言 submitted on 2019-12-25 09:27:46
Question: I'm using OpenGL and I need to render the vertices of a 3D model to an FBO at the UV coordinate of each vertex. To do that, I first have to convert the UV coordinate space to screen space. I came to the conclusion that: uv.x * 2 - 1, uv.y * 2 - 1 …should do the trick. I used that in my vertex shader to place the vertex at those new positions. The result looks like this: …while it should look like this: It seems like it's scaled up. I don't know where the problem is. Answer 1: Are you
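The answer is cut off above, so the following is only a sketch of the two things usually checked here: the uv * 2 - 1 mapping itself (which correctly takes [0,1] UVs to [-1,1] normalized device coordinates) and the viewport, which has to match the FBO texture size or the result looks scaled. Names such as fbo, fboWidth and fboHeight are placeholders:

    // Vertex shader embedding the mapping from the question: UV in [0,1]
    // is remapped to clip space in [-1,1].
    const char* vertexSrc =
        "#version 330 core\n"
        "layout(location = 0) in vec2 uv;\n"
        "void main() {\n"
        "    gl_Position = vec4(uv * 2.0 - 1.0, 0.0, 1.0);\n"
        "}\n";

    // Before rendering into the FBO, make the viewport cover the whole
    // render target; a viewport left at the window size scales the output.
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glViewport(0, 0, fboWidth, fboHeight);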