depth-buffer

How to efficiently copy depth buffer to texture on OpenGL ES

生来就可爱ヽ(ⅴ<●) submitted on 2020-01-01 00:43:13
Question: I'm trying to get some shadowing effects to work in OpenGL ES 2.0 on iOS by porting some code from standard GL. Part of the sample involves copying the depth buffer to a texture:

    glBindTexture(GL_TEXTURE_2D, g_uiDepthBuffer);
    glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 0, 0, 800, 600, 0);

However, it appears that glCopyTexImage2D is not supported on ES. Reading a related thread, it seems I can use the frame buffer and fragment shaders to extract the depth data. So I'm trying to …
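
For reference, the usual ES 2.0 workaround is not to copy the depth buffer at all, but to render the depth pass directly into a depth texture attached to a framebuffer object. A minimal sketch, assuming the GL_OES_depth_texture extension is available (the names depthTex and fbo are illustrative, not from the question):

    GLuint depthTex, fbo;

    glGenTextures(1, &depthTex);
    glBindTexture(GL_TEXTURE_2D, depthTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    /* GL_OES_depth_texture allows GL_DEPTH_COMPONENT as a texture format in ES 2.0 */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 800, 600, 0,
                 GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    /* the texture receives depth directly while the scene is drawn into this FBO */
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTex, 0);

If the extension is not available, the common fallback is to pack depth into an RGBA8 color attachment from the fragment shader (a pack/unpack sketch appears further down this page).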

OpenGL ES 2.0 displaying objects in opposite depth order using LibGDX

夙愿已清 submitted on 2019-12-25 02:27:05
Question: I am using LibGDX and rendering a few models. This works as expected, except that objects that are "further away" are displayed "in front" of "closer" objects. In other words, the depth order seems to be the opposite of what I intended. Strangely, the models are clipped in the correct order by the far clipping plane (the most distant objects disappear first). I have tried enabling GL_DEPTH_TEST, and I am clearing GL_DEPTH_BUFFER_BIT. Does anyone know what could be causing this? Answer 1: If …
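
Reversed depth ordering like this usually means the depth test never actually takes effect, for example because no depth buffer was requested for the surface or because the relevant state is not set every frame. As a hedged sketch of the underlying GL ES 2.0 state (which LibGDX's Gdx.gl calls wrap), not LibGDX-specific code:

    /* assuming the EGL surface/config was created with a depth buffer, then per frame: */
    glEnable(GL_DEPTH_TEST);     /* otherwise fragments are drawn in submission order  */
    glDepthFunc(GL_LESS);        /* keep the fragment closest to the camera            */
    glDepthMask(GL_TRUE);        /* make sure depth writes have not been disabled      */
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);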

How to write to zbuffer only with three.js

老子叫甜甜 submitted on 2019-12-24 13:45:03
Question: I'm trying to use three.js to only update the z-buffer (I'm using preserveDrawingBuffer to create a trace effect). However, I can't find any way to write only to the z-buffer with the standard materials. So far I've tried setting the material's visible to false, which stops the object rendering, and setting the material's opacity to 0.0, which means nothing gets rendered. Is there a 'standard' way of doing this, or do I need to use a custom fragment shader? Answer 1: You can render to the depth buffer …
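
The usual trick for a depth-only pass is to keep drawing the geometry but disable color writes, so only the depth buffer is updated; in three.js this is typically exposed through the material's colorWrite flag rather than visible or opacity. At the WebGL/GL ES level the mechanism is a color mask, sketched here in C-style GL calls (drawOccluders is a hypothetical helper):

    /* depth-only pass: the depth buffer is updated, the color buffer is left untouched */
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_TRUE);
    drawOccluders();                                     /* issue the draw calls        */
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);     /* restore color writes        */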

Writing the correct value in the depth buffer when using ray-casting

守給你的承諾、 submitted on 2019-12-23 15:27:53
Question: I am ray-casting through a 3D texture until I hit a correct value. I am doing the ray-casting in a cube, and the cube corners are already in world coordinates, so I don't have to multiply the vertices by the modelview matrix to get the correct position. Vertex shader:

    world_coordinate_ = gl_Vertex;

Fragment shader:

    vec3 direction = (world_coordinate_.xyz - cameraPosition_);
    direction = normalize(direction);
    for (float k = 0.0; k < steps; k += 1.0) {
        ....
        pos += direction * delta_step;
        float …
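
For reference, the usual way to turn a world-space hit position found by the ray march into a value that can be written to gl_FragDepth is to run it through the same view-projection transform the rasterizer would use and remap the NDC z to window space. A GLSL sketch, assuming a viewProjectionMatrix uniform, a hit position called pos, and the default depth range of [0, 1] (all assumptions, not taken from the question):

    // project the world-space hit position into clip space
    vec4 clip = viewProjectionMatrix * vec4(pos, 1.0);
    float ndc_z = clip.z / clip.w;          // perspective divide -> [-1, 1]
    gl_FragDepth = ndc_z * 0.5 + 0.5;       // remap to the [0, 1] window depth range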

Kinect V2 Depth Frame Pixel Size

◇◆丶佛笑我妖孽 submitted on 2019-12-23 05:37:16
Question: The Kinect v2 provides a depth frame with a resolution of 512 x 424 pixels and a FOV of 70.6 x 60 degrees, resulting in an average of about 7 x 7 pixels per degree. [Source] However, I was unable to find any information about the pixel size of the depth frame. Is there any way to calculate the pixel size from the given information? Answer 1: Are you asking how to map the size of pixels in the depth data? The depth coordinate system is orthogonal, with its origin and …
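
One thing worth noting is that the metric size of a depth pixel is not fixed: it grows linearly with distance. Treating the pixels as evenly spaced across the field of view, a rough estimate is pixel_size ≈ 2 · depth · tan(fov / 2) / resolution. A small sketch of that estimate (an approximation that ignores lens distortion; the function name is illustrative):

    #include <math.h>

    /* approximate metric size (meters) of one depth pixel at a given distance */
    float pixel_size_m(float depth_m, float fov_deg, int resolution_px)
    {
        float half_fov_rad = fov_deg * (float)M_PI / 360.0f;
        return 2.0f * depth_m * tanf(half_fov_rad) / (float)resolution_px;
    }

    /* e.g. horizontally: pixel_size_m(1.0f, 70.6f, 512) is roughly 2.8 mm at 1 m */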

OpenGLES 1.1 with FrameBuffer / ColorBuffer / DepthBuffer for Android with NDK r7b

不打扰是莪最后的温柔 submitted on 2019-12-23 04:32:44
Question: After reading over the NDK docs and all my books on OpenGL ES, I've hit a wall. I am trying to copy my iOS OpenGL ES setup to Android NDK r7 and above, mainly to get the depth buffer I overlooked earlier on when coding. The problem is that I lose the textures on some objects when I enable the color buffer, as seen below, and the depth buffer isn't working when I send objects into the background. I am using OpenGL ES 1.1 FFP and NDK r7 or above. Here is my initialization code:

    int32_t ES1Renderer: …
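
For comparison, a minimal ES 1.1 framebuffer setup with both a color and a depth renderbuffer uses the OES-suffixed entry points from GL_OES_framebuffer_object. A sketch with assumed width/height values (on Android the color storage is often the EGL window surface instead of a renderbuffer):

    GLuint framebuffer, colorRenderbuffer, depthRenderbuffer;

    glGenFramebuffersOES(1, &framebuffer);
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, framebuffer);

    /* color storage: GL_RGBA8_OES needs OES_rgb8_rgba8; GL_RGB565_OES is the portable fallback */
    glGenRenderbuffersOES(1, &colorRenderbuffer);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, colorRenderbuffer);
    glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_RGBA8_OES, width, height);
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES,
                                 GL_RENDERBUFFER_OES, colorRenderbuffer);

    /* depth storage: without this attachment glEnable(GL_DEPTH_TEST) has nothing to test against */
    glGenRenderbuffersOES(1, &depthRenderbuffer);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, depthRenderbuffer);
    glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_DEPTH_COMPONENT16_OES, width, height);
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES,
                                 GL_RENDERBUFFER_OES, depthRenderbuffer);

    if (glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES) != GL_FRAMEBUFFER_COMPLETE_OES) {
        /* handle an incomplete framebuffer here */
    }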

Implementing depth testing for semi-transparent objects

ε祈祈猫儿з submitted on 2019-12-23 03:03:22
Question: I've been carefully trawling the internet for the past two days to understand depth testing for semi-transparent objects. I've read multiple papers/tutorials on the subject, and in theory I believe I understand how it works. However, none of them give me actual example code. I have three requirements for my depth testing of semi-transparent objects: It should be order-independent. It should work if two quads of the same object are intersecting each other, both semi-transparent. Imagine a grass …

Rendering to depth texture - unclarities about usage of GL_OES_depth_texture

旧时模样 submitted on 2019-12-22 18:03:17
Question: I'm trying to replace OpenGL's gl_FragDepth feature, which is missing in OpenGL ES 2.0. I need a way to set the depth in the fragment shader, because setting it in the vertex shader is not accurate enough for my purpose. AFAIK the only way to do that is to have a render-to-texture framebuffer on which a first rendering pass is done. This depth texture stores the depth values for each pixel on the screen. Then, the depth texture is attached in the final rendering pass, so the final renderer …
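
When GL_OES_depth_texture is not available, or when a custom per-fragment depth has to be produced (which ES 2.0 cannot write to the real depth buffer), a common fallback is to encode the depth into the RGBA8 color attachment of the first pass and decode it when sampling in the later pass. A widely used GLSL pack/unpack pair, shown here only as a sketch:

    // encode a depth value in [0, 1) into four 8-bit channels
    vec4 packDepth(float depth) {
        vec4 enc = vec4(1.0, 255.0, 65025.0, 16581375.0) * depth;
        enc = fract(enc);
        enc -= enc.yzww * vec4(1.0 / 255.0, 1.0 / 255.0, 1.0 / 255.0, 0.0);
        return enc;
    }

    // decode it back to a float when reading the texture in the final pass
    float unpackDepth(vec4 rgba) {
        return dot(rgba, vec4(1.0, 1.0 / 255.0, 1.0 / 65025.0, 1.0 / 16581375.0));
    }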

Linearize depth

亡梦爱人 submitted on 2019-12-22 12:25:22
Question: In OpenGL you can linearize a depth value like so:

    float linearize_depth(float d, float zNear, float zFar)
    {
        float z_n = 2.0 * d - 1.0;
        return 2.0 * zNear * zFar / (zFar + zNear - z_n * (zFar - zNear));
    }

(Source: https://stackoverflow.com/a/6657284/10011415) However, Vulkan handles depth values somewhat differently (https://matthewwellings.com/blog/the-new-vulkan-coordinate-system/). I don't quite understand the math behind it. What changes would I have to make to the function to linearize a …
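
The main difference is the NDC depth range: OpenGL maps depth to [-1, 1], hence the 2.0 * d - 1.0 remap, while Vulkan already produces depth in [0, 1], so that remap disappears. Assuming the usual Vulkan-style perspective projection that maps zNear to 0 and zFar to 1, the linearization simplifies to the following sketch:

    float linearize_depth_vk(float d, float zNear, float zFar)
    {
        // d is the raw depth-buffer value, already in [0, 1] under Vulkan conventions
        return zNear * zFar / (zFar - d * (zFar - zNear));
    }

As a sanity check, d = 0 yields zNear and d = 1 yields zFar.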

How can I read the depth buffer in WebGL?

馋奶兔 submitted on 2019-12-19 02:08:41
Question: Using the WebGL API, how can I get a value from the depth buffer, or in any other way determine 3D coordinates from screen coordinates (i.e. to find a location clicked on), other than by performing my own raycasting? Answer 1: Several years have passed; these days the WEBGL_depth_texture extension is widely available, unless you need to support IE. General usage, preparation: query the extension (required), allocate a separate color texture and a depth texture (gl.DEPTH_COMPONENT), combine both textures …