
Depth as distance to camera plane in GLSL

Posted by 丶灬走出姿态 on 2019-11-28 07:04:09
I have a pair of GLSL shaders that give me the depth map of the objects in my scene. What I get now is the distance from each pixel to the camera. What I need is the distance from each pixel to the camera plane. Let me illustrate with a little drawing:

    *            |--*
   /             |
  /              |
 C-----*         C-----*
  \              |
   \             |
    *            |--*

The three asterisks are pixels and the C is the camera. The lines from the asterisks are the "depth". In the first case, I get the distance from the pixel to the camera. In the second, I wish to get the distance from each pixel to the plane. There must be a way to do this by using some projection…
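One common way to get the planar depth (a sketch, not taken from the question; the uniform and attribute names are assumptions): pass the view-space position from the vertex shader to the fragment shader. The negated view-space z is exactly the distance to the camera plane, while length(viewPos) is the radial distance to the camera point.

```glsl
// Vertex shader: forward the view-space position.
uniform mat4 uModelView;   // assumed uniform names
uniform mat4 uProjection;
attribute vec3 aPosition;
varying vec3 vViewPos;

void main() {
    vec4 viewPos = uModelView * vec4(aPosition, 1.0);
    vViewPos = viewPos.xyz;
    gl_Position = uProjection * viewPos;
}
```

```glsl
// Fragment shader: planar vs. radial depth.
varying vec3 vViewPos;

void main() {
    float radialDist = length(vViewPos); // distance to the camera point (left drawing)
    float planarDist = -vViewPos.z;      // distance to the camera plane (right drawing)
    gl_FragColor = vec4(vec3(planarDist), 1.0);
}
```

The negation is needed because, in the usual OpenGL convention, the camera looks down the negative z axis in view space.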

Creating a smudge/liquify effect on mouse move that continuously animates back to the original state using webgl

Posted by 扶醉桌前 on 2019-11-28 06:59:16
Question: I am trying to find information or examples that I can use to create a smudge/liquify effect that continuously animates back to the original state. Initially I was looking at using three.js or pixi.js to render some text and then use mouse events and ray casting to drag the mesh out of position; the closest thing I have found is this: https://codepen.io/shshaw/pen/qqVgbg let renderer = PIXI.autoDetectRenderer(window.innerWidth, window.innerHeight, { transparent: true }); I think that ideally…
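One shader-side approach worth sketching (entirely an assumption, not from the CodePen): store the accumulated smudge in a displacement texture that the mouse writes into, fade that texture toward zero every frame in a ping-pong pass, and offset the scene lookup by it. The uniform names here are hypothetical.

```glsl
precision mediump float;
uniform sampler2D uScene; // the text/image being smudged
uniform sampler2D uDisp;  // RG channels = displacement vector, packed into [0,1]
varying vec2 vUv;

void main() {
    // Unpack the stored displacement and offset the scene lookup by it.
    vec2 offset = texture2D(uDisp, vUv).rg * 2.0 - 1.0;
    gl_FragColor = texture2D(uScene, vUv + offset * 0.1);
}
```

Each frame, a second pass renders uDisp into its ping-pong partner multiplied by a decay factor (e.g. 0.95), so the displacement, and with it the smudge, relaxes back to the original state on its own.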

Omnidirectional shadow mapping with depth cubemap

Posted by 时光总嘲笑我的痴心妄想 on 2019-11-28 06:38:20
I'm working with omnidirectional point lights. I already implemented shadow mapping using a cubemap texture as the color attachment of 6 framebuffers, encoding the light-to-fragment distance in each pixel of it. Now I would like, if this is possible, to change my implementation this way: 1) attach a depth cubemap texture to the depth buffer of my framebuffers, instead of colors; 2) render depth only, and do not write color in this pass; 3) in the main pass, read the depth from the cubemap texture, convert it to a distance, and check whether the current fragment is occluded by the light or not. My…
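Step 3 above can be sketched in the fragment shader of the main pass (a sketch under assumptions: each shadow face uses a 90-degree perspective projection with the uNear/uFar planes below; all names are hypothetical). With a 90-degree FOV per face, the depth stored in the cubemap corresponds to the largest absolute component of the light-to-fragment vector, so that component can be re-projected and compared against the stored depth.

```glsl
uniform samplerCube uShadowCube; // depth cubemap from the shadow pass
uniform vec3 uLightPos;
uniform float uNear;             // near/far of the 90-degree shadow frusta
uniform float uFar;
varying vec3 vWorldPos;

float shadowFactor() {
    vec3 L = vWorldPos - uLightPos;
    // Distance along the sampled face's forward axis.
    float axisDist = max(abs(L.x), max(abs(L.y), abs(L.z)));
    // Convert that distance to the nonlinear [0,1] depth the cubemap stores.
    float refDepth = (uFar + uNear) / (uFar - uNear)
                   - (2.0 * uFar * uNear) / ((uFar - uNear) * axisDist);
    refDepth = refDepth * 0.5 + 0.5;
    float storedDepth = textureCube(uShadowCube, L).r;
    return (refDepth - 0.001 > storedDepth) ? 0.0 : 1.0; // 0.0 = in shadow
}
```

The 0.001 is a depth bias against shadow acne; it would need tuning for a real scene.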

OpenGL - How to access depth buffer values? - Or: gl_FragCoord.z vs. Rendering depth to texture

Posted by 馋奶兔 on 2019-11-28 06:04:47
I want to access the depth buffer value at the currently processed pixel in a pixel shader. How can we achieve this goal? Basically, there seem to be two options: render depth to texture (how can we do this, and what is the trade-off?), or use the value provided by gl_FragCoord.z (but is this the correct value?). On question 1: you can't directly read from the depth buffer in the fragment shader (unless there are recent extensions I'm not familiar with). You need to render to a Frame Buffer Object (FBO). Typical steps: create and bind an FBO; look up calls like glGenFramebuffers and…
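On question 2: gl_FragCoord.z is indeed the window-space depth in [0,1] that the fragment would write, the same value a depth texture would store. It is hyperbolic in eye-space distance, so it often needs linearizing. A minimal sketch, assuming a standard perspective projection with near/far uniforms (names are assumptions):

```glsl
uniform float uNear;
uniform float uFar;

// Works on gl_FragCoord.z or on a value sampled from a depth texture.
float linearizeDepth(float d) {
    float ndc = d * 2.0 - 1.0;                    // [0,1] -> NDC [-1,1]
    return 2.0 * uNear * uFar /
           (uFar + uNear - ndc * (uFar - uNear)); // positive eye-space distance
}
```

The trade-off between the two options: gl_FragCoord.z is free but only gives you the current fragment's own depth, while a depth texture rendered in a prior pass lets you sample the depth of whatever was already in the buffer at any pixel.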

glsl sampler2DShadow and shadow2D clarification

Posted by 别等时光非礼了梦想. on 2019-11-28 06:04:40
Quick background of where I'm at (to make sure we're on the same page, and to sanity-check whether I'm missing/assuming something stupid). Goal: I want to render my scene with shadows, using deferred lighting and shadow maps. Struggle: finding clear and consistent documentation regarding how to use shadow2D and sampler2DShadow. Here's what I'm currently doing: in the fragment shader of my final rendering pass (the one that actually calculates final frag values), I have the MVP matrices from the pass from the light's point of view, the depth texture from said pass (aka the "shadow map"), and the…
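For reference, the usual shape of a sampler2DShadow lookup looks like this (a sketch in the GLSL 1.20-era syntax the question's shadow2D functions belong to; the varying name is an assumption):

```glsl
uniform sampler2DShadow uShadowMap;
varying vec4 vShadowCoord; // bias matrix * light MVP * vertex position

void main() {
    // shadow2DProj divides by .w, then compares the resulting .z against the
    // stored depth. The texture must have GL_TEXTURE_COMPARE_MODE set to
    // GL_COMPARE_R_TO_TEXTURE, or the result is undefined.
    float lit = shadow2DProj(uShadowMap, vShadowCoord).r; // 0, 1, or filtered in between
    gl_FragColor = vec4(vec3(lit), 1.0);
}
```

The key point that trips people up: a shadow sampler does not return a depth; it returns the result of the hardware depth comparison, so you pass the reference depth in as part of the coordinate.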

Writing to gl_FragColor causes INVALID_OPERATION on Android

Posted by 大兔子大兔子 on 2019-11-28 05:35:59
Question: I'm trying to master OpenGL ES 2 for the NDK and am stuck with GLSL shaders. The situation is similar to the one already highlighted here, but it seems the reason behind it is somewhat different. I have the simplest shaders possible. Vertex:

#version 110
attribute vec3 vPosition;
void main(void) {
    gl_Position = vec4(vPosition, 1.0);
    gl_FrontColor = gl_BackColor = vec4(0.3, 0.3, 0.3, 1); // ***
}

Fragment:

#version 110
void main(void) {
    gl_FragColor = gl_Color;
}

Easy and straightforward. I even define…
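The likely culprit here: gl_FrontColor, gl_BackColor, and gl_Color do not exist in OpenGL ES Shading Language at all (they are desktop fixed-function leftovers), and "#version 110" is a desktop GLSL version; ES 2 expects "#version 100" or no version directive. A working ES equivalent passes the color through a user-defined varying instead, as a sketch:

```glsl
// Vertex shader (GLSL ES 1.00)
attribute vec3 vPosition;
varying vec4 vColor; // replaces gl_FrontColor / gl_Color

void main(void) {
    gl_Position = vec4(vPosition, 1.0);
    vColor = vec4(0.3, 0.3, 0.3, 1.0);
}
```

```glsl
// Fragment shader (GLSL ES 1.00)
precision mediump float; // required in ES fragment shaders
varying vec4 vColor;

void main(void) {
    gl_FragColor = vColor;
}
```

Checking the shader info log with glGetShaderInfoLog after compiling would have surfaced the unknown-identifier errors that lead to the INVALID_OPERATION at draw time.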

Rendering to cube map

Posted by ぐ巨炮叔叔 on 2019-11-28 05:33:23
According to ARB_geometry_shader4 it is possible to render a scene onto the 6 faces of a cube map with a geometry shader, with the cube map attached to a framebuffer object. I want to create a shadow map this way. However, there seems to be a conflict that I can't resolve: I can only attach a texture with GL_DEPTH_COMPONENT as internal type to GL_DEPTH_ATTACHMENT_EXT; a depth texture can only be 1D or 2D; and if I want to attach a cube map, all other attached textures must be cube maps as well. So it looks like I can't use any depth testing when I want to render to a cube map. Or what…
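In core OpenGL 3.2+ this conflict goes away: depth cube maps exist (GL_DEPTH_COMPONENT cube textures), and a whole cube map can be attached as a layered depth attachment with glFramebufferTexture, with the geometry shader routing each primitive to a face via gl_Layer. A sketch of that geometry shader (GLSL 1.50 layered rendering rather than the dated ARB_geometry_shader4 style; the per-face matrix uniform is an assumption):

```glsl
#version 150
layout(triangles) in;
layout(triangle_strip, max_vertices = 18) out; // 6 faces * 3 vertices

uniform mat4 uFaceVP[6]; // view-projection matrix per cube face (assumed)

void main() {
    for (int face = 0; face < 6; ++face) {
        for (int i = 0; i < 3; ++i) {
            gl_Layer = face; // selects the cube-map face in the layered FBO
            gl_Position = uFaceVP[face] * gl_in[i].gl_Position;
            EmitVertex();
        }
        EndPrimitive();
    }
}
```

With only a depth attachment bound, glDrawBuffer(GL_NONE) makes this a pure depth pass, which is exactly what a shadow map needs.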

How to use #include in GLSL with ARB_shading_language_include

Posted by 给你一囗甜甜゛ on 2019-11-28 05:27:38
I want to use the #include macro to include shader files in GLSL, and I heard there is an ARB_shading_language_include extension that supports the #include macro. Can anyone give me a code snippet showing how to use it? The first thing you need to understand about shading_language_include is what it isn't. It is not "I #include a file from the disk." OpenGL doesn't know what files are; it has no concept of the file system. Instead, you must pre-load all of the files you might want to include. So you have a shader string and a filename that you loaded the string from.
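Once the host code has registered the included source as a named string with glNamedStringARB (and the shader is compiled with glCompileShaderIncludeARB), the shader side looks like this sketch. The path "/lib/lighting.glsl" and the computeLighting function are hypothetical names, standing in for whatever was registered:

```glsl
#version 330
#extension GL_ARB_shading_language_include : require

// Resolved against the named-string tree registered via glNamedStringARB,
// not against the file system.
#include "/lib/lighting.glsl"

out vec4 fragColor;

void main() {
    // computeLighting() is assumed to be defined in the included named string.
    fragColor = vec4(computeLighting(), 1.0);
}
```

Note that the include paths form a virtual, absolute-path tree that exists only inside the GL context; the "filename" is just the key under which you stored the string.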

Volume rendering (using GLSL) with the ray casting algorithm

Posted by 无人久伴 on 2019-11-28 04:34:38
I am learning volume rendering using the ray casting algorithm. I found a good demo and tutorial here, but the problem is that I have an ATI graphics card instead of an NVIDIA one, which means I can't use the Cg shader in the demo, so I want to convert the Cg shader to a GLSL shader. I have gone through the Red Book (7th edition) of OpenGL, but I am not familiar with GLSL and Cg. Can anyone help me change the Cg shader in the demo to GLSL? Or is there any material on the simplest demo of volume rendering using ray casting (of course in GLSL)? Here is the Cg shader of the demo, and it can work on my…
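The usual two-pass Cg approach translates to GLSL fairly directly. A rough fragment-shader sketch of the ray-marching pass (not a port of the linked demo; all names, the step size, and the trivial transfer function are assumptions):

```glsl
uniform sampler3D uVolume;    // scalar volume data in [0,1]^3
uniform sampler2D uBackFaces; // ray exit points rendered in a back-face pass
varying vec3 vEntryPos;       // ray entry point (front-face position)
varying vec4 vProjPos;        // clip-space position for the screen lookup

void main() {
    vec2 uv = vProjPos.xy / vProjPos.w * 0.5 + 0.5;
    vec3 exitPos = texture2D(uBackFaces, uv).rgb;
    vec3 dir = exitPos - vEntryPos;
    float len = length(dir);
    vec3 stepv = normalize(dir) * 0.01;     // fixed step size (assumed)
    vec3 pos = vEntryPos;
    vec4 acc = vec4(0.0);
    for (int i = 0; i < 200; ++i) {
        if (float(i) * 0.01 > len || acc.a > 0.95) break; // early ray termination
        float s = texture3D(uVolume, pos).r;  // sample the scalar field
        float a = s * 0.1;                    // trivial transfer function
        acc.rgb += (1.0 - acc.a) * a * vec3(s); // front-to-back compositing
        acc.a   += (1.0 - acc.a) * a;
        pos += stepv;
    }
    gl_FragColor = acc;
}
```

The main Cg-to-GLSL mechanical changes are the same throughout: tex3D becomes texture3D, float3/float4 become vec3/vec4, and Cg semantics (TEXCOORD0 etc.) become varyings.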

GLSL gl_FragCoord.z Calculation and Setting gl_FragDepth

Posted by 旧时模样 on 2019-11-28 04:24:47
So, I've got an imposter (the real geometry is a cube, possibly clipped, and the imposter geometry is a Menger sponge) and I need to calculate its depth. I can calculate the amount to offset in world space fairly easily. Unfortunately, I've spent hours failing to perturb the depth with it. The only correct results I can get are when I go: gl_FragDepth = gl_FragCoord.z. Basically, I need to know how gl_FragCoord.z is calculated so that I can: take the inverse transformation from gl_FragCoord.z to eye space; add the depth perturbation; transform this perturbed depth back into the same space as the…
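The three steps listed above can be sketched as follows, assuming a standard perspective projection with the default glDepthRange of [0,1] (uNear/uFar and the 0.5 offset are assumed example values):

```glsl
uniform float uNear;
uniform float uFar;

// gl_FragCoord.z -> eye-space z (negative in front of the camera).
float eyeZFromWindowZ(float wz) {
    float ndc = wz * 2.0 - 1.0;
    return -2.0 * uNear * uFar / (uFar + uNear - ndc * (uFar - uNear));
}

// Eye-space z -> window-space depth in [0,1].
float windowZFromEyeZ(float ez) {
    float ndc = (uFar + uNear) / (uFar - uNear)
              + (2.0 * uFar * uNear) / ((uFar - uNear) * ez);
    return ndc * 0.5 + 0.5;
}

void main() {
    float eyeZ = eyeZFromWindowZ(gl_FragCoord.z);
    eyeZ -= 0.5; // example perturbation: push the surface 0.5 units away
    gl_FragDepth = windowZFromEyeZ(eyeZ);
}
```

The two helpers are exact inverses of each other, so with a zero perturbation this reproduces gl_FragDepth = gl_FragCoord.z, which is a useful sanity check that the matrices and ranges match.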