glsl

Linearize depth

亡梦爱人 submitted on 2019-12-22 12:25:22
Question: In OpenGL you can linearize a depth value like so: float linearize_depth(float d,float zNear,float zFar) { float z_n = 2.0 * d - 1.0; return 2.0 * zNear * zFar / (zFar + zNear - z_n * (zFar - zNear)); } (Source: https://stackoverflow.com/a/6657284/10011415) However, Vulkan handles depth values somewhat differently (https://matthewwellings.com/blog/the-new-vulkan-coordinate-system/). I don't quite understand the math behind it; what changes would I have to make to the function to linearize a
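
A minimal GLSL sketch of the usual adjustment, assuming a projection that writes depth straight into Vulkan's [0,1] range with the same zNear/zFar parameters (illustrative, not the poster's code):

// Vulkan-style projections already produce depth in [0,1], so the
// 2.0 * d - 1.0 remap is dropped and the denominator changes accordingly.
float linearize_depth_vk(float d, float zNear, float zFar) {
    return zNear * zFar / (zFar - d * (zFar - zNear));
}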

Passing an array of vec2 to shader in THREE.js

妖精的绣舞 submitted on 2019-12-22 11:32:05
Question: I've been searching the web for a while now and have not found the correct answer yet. I found the list of uniform types THREE.js uses, and I think the following code should be correct. On the last line I define a uniform array of Vector2. uniforms: { "center": { type: "v2", value: new THREE.Vector2( 0.5, 0.5 ) }, "aspectRatio": { type: "f", value: null }, "radius": { type: "f", value: 0.1 }, "pointList": { type: "v2v", value: [] }, }, In my JS script I pass this array as follows. This should
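
For reference, a minimal sketch of the GLSL side that a "v2v" uniform maps onto, assuming a fixed maximum of 10 points and a hypothetical pointCount helper uniform (on the JavaScript side the value would be an array of THREE.Vector2 objects):

precision mediump float;

#define MAX_POINTS 10
uniform vec2 pointList[MAX_POINTS];
uniform int pointCount;      // hypothetical: how many entries of the array are in use
uniform float radius;
varying vec2 vUv;

void main() {
    float hit = 0.0;
    for (int i = 0; i < MAX_POINTS; i++) {
        if (i >= pointCount) break;
        if (distance(vUv, pointList[i]) < radius) hit = 1.0;
    }
    gl_FragColor = vec4(vec3(hit), 1.0);
}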

Is this a practical and performant enough shader for doing blur on a mobile device?

我是研究僧i submitted on 2019-12-22 09:58:07
Question: I am trying to implement a blur effect in my game on mobile devices using a GLSL shader. I don't have any prior experience writing shaders, and I don't know whether my shader is good enough. I actually copied the GLSL code from a tutorial, and I don't know whether that tutorial is just a vivid demo or can also be used in practice. Here is the code of the two-pass blur shader that uses Gaussian weights (http://www.cocos2d-x.org/wiki/User_Tutorial-RenderTexture_Plus_Blur): #ifdef GL_ES precision
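
For comparison, a minimal single-direction Gaussian blur fragment shader of the kind such tutorials describe; it is run twice into intermediate render textures, once with u_direction = (1/width, 0) and once with (0, 1/height). The 5-tap offsets and weights below are the common linear-sampling values, not necessarily those of the linked tutorial:

#ifdef GL_ES
precision mediump float;
#endif

varying vec2 v_texCoord;
uniform sampler2D u_texture;
uniform vec2 u_direction;   // one texel step along the blur axis

void main() {
    // 5-tap Gaussian kernel; the weights sum to roughly 1.0
    vec4 sum = texture2D(u_texture, v_texCoord) * 0.227027;
    sum += texture2D(u_texture, v_texCoord + u_direction * 1.384615) * 0.316216;
    sum += texture2D(u_texture, v_texCoord - u_direction * 1.384615) * 0.316216;
    sum += texture2D(u_texture, v_texCoord + u_direction * 3.230769) * 0.070270;
    sum += texture2D(u_texture, v_texCoord - u_direction * 3.230769) * 0.070270;
    gl_FragColor = sum;
}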

glsl double-precision vertex buffer

微笑、不失礼 submitted on 2019-12-22 09:51:41
Question: If I create a double-precision vertex buffer, for example: GLuint vertBuffer, spanBuffer, spanCount, patchSize, program; // already setup glUseProgram (program); glEnableClientState (GL_VERTEX_ARRAY); glBindBuffer (GL_ARRAY_BUFFER, vertBuffer); glVertexPointer (3, GL_DOUBLE, 0, 0); glPatchParameteri (GL_PATCH_VERTICES, patchSize); glBindBuffer (GL_ELEMENT_ARRAY_BUFFER, spanBuffer); glDrawElements (GL_PATCHES, spanCount * patchSize, GL_UNSIGNED_INT, 0); How do I access the double precision
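
On the shader side, a minimal sketch of how a double-precision attribute can be declared, assuming GL_ARB_vertex_attrib_64bit (or GL 4.1+) and a host that feeds the attribute with glVertexAttribLPointer rather than the legacy glVertexPointer shown above:

#version 400
#extension GL_ARB_vertex_attrib_64bit : enable

// Fed from the host with glVertexAttribLPointer(0, 3, GL_DOUBLE, 0, 0)
layout(location = 0) in dvec3 position;

void main() {
    // Downstream stages usually work in single precision, so the value is
    // typically narrowed once any precision-sensitive math has been done here.
    gl_Position = vec4(vec3(position), 1.0);
}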

Passing custom type (struct) uniform from Qt to GLSL using QGLShaderProgram

坚强是说给别人听的谎言 submitted on 2019-12-22 08:59:51
Question: I defined a struct for light parameters which contains two vectors. The struct is defined in both C++ and GLSL in an analogous way (note: QVector3D encapsulates 3 floats, not doubles): C++ host program: struct LightParameters { QVector3D pos; QVector3D intensity; }; Fragment Shader: struct LightParameters { vec3 pos; vec3 intensity; }; In the fragment shader, I also define the following uniforms. The number of lights is limited to 8, so the uniform array has a constant size (but only
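
A minimal sketch of how such a struct-array uniform is typically declared and consumed in the fragment shader (numLights and the varyings are illustrative names, not taken from the excerpt; on the Qt side each member generally has to be set individually through its full string name, e.g. setUniformValue("lights[0].pos", ...)):

struct LightParameters {
    vec3 pos;
    vec3 intensity;
};

const int MAX_LIGHTS = 8;
uniform LightParameters lights[MAX_LIGHTS];
uniform int numLights;            // how many array entries are actually filled

varying vec3 vWorldPos;           // illustrative inputs from the vertex shader
varying vec3 vNormal;

void main() {
    vec3 albedo = vec3(1.0);
    vec3 result = vec3(0.0);
    for (int i = 0; i < MAX_LIGHTS; ++i) {
        if (i >= numLights) break;
        vec3 toLight = normalize(lights[i].pos - vWorldPos);
        result += albedo * lights[i].intensity * max(dot(normalize(vNormal), toLight), 0.0);
    }
    gl_FragColor = vec4(result, 1.0);
}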

How to correctly make a depth cubemap for shadow mapping?

烈酒焚心 submitted on 2019-12-22 08:36:55
Question: I have written code to render my scene objects to a cubemap texture of format GL_DEPTH_COMPONENT and then use this texture in a shader to determine whether a fragment is being directly lit or not, for shadowing purposes. However, my cubemap appears to come out as black. I suppose I am not setting up my FBO or rendering context sufficiently, but I fail to see what is missing. Using GL 3.3 in compatibility profile. This is my code for creating the FBO and cubemap texture: glGenFramebuffers(1,
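
For context, a minimal sketch of the lookup such a depth cubemap usually feeds, comparing the fragment's depth as seen from the light against the stored value (uniform names, the near/far handling, and the bias are illustrative, not the poster's code):

#version 330

in vec3 vWorldPos;               // illustrative input from the vertex shader
out vec4 fragColor;

uniform samplerCube depthCube;   // the GL_DEPTH_COMPONENT cubemap
uniform vec3  lightPos;
uniform float lightNear;
uniform float lightFar;

// Cubemap faces store depth along the dominant axis, not the radial distance,
// so rebuild the same non-linear depth value for the current fragment.
float vecToDepth(vec3 v, float n, float f) {
    vec3 a = abs(v);
    float localZ = max(a.x, max(a.y, a.z));
    float ndcZ = (f + n - 2.0 * f * n / localZ) / (f - n);
    return ndcZ * 0.5 + 0.5;
}

void main() {
    vec3 toFrag   = vWorldPos - lightPos;
    float current = vecToDepth(toFrag, lightNear, lightFar);
    float stored  = texture(depthCube, toFrag).r;
    float bias    = 0.002;                    // illustrative, tune against shadow acne
    float lit     = (current - bias > stored) ? 0.0 : 1.0;
    fragColor = vec4(vec3(lit), 1.0);
}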

AlphaFunctions in WebGL?

时光总嘲笑我的痴心妄想 submitted on 2019-12-22 08:29:29
Question: Is it possible to achieve a transparency effect where fragments with alpha lower than 0.5 are discarded and fragments with alpha higher than 0.5 are rendered opaque? From what I've read, glEnable(GL_ALPHA_TEST); glAlphaFunc(GL_GREATER, 0.5); would be what I'm looking for, but unfortunately AlphaFunction is not defined in WebGL. Is there a workaround? My problem is that transparent fragments write into the depth buffer and thus prevent fragments farther away from being rendered: alpha
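
The usual workaround is to replicate the alpha test inside the fragment shader with discard, which also keeps rejected fragments out of the depth buffer; a minimal GLSL ES sketch (the sampler and varying names are illustrative):

precision mediump float;

uniform sampler2D u_texture;
varying vec2 v_texCoord;

void main() {
    vec4 color = texture2D(u_texture, v_texCoord);
    // Equivalent of glAlphaFunc(GL_GREATER, 0.5): rejected fragments write
    // neither color nor depth.
    if (color.a <= 0.5) {
        discard;
    }
    gl_FragColor = vec4(color.rgb, 1.0);   // surviving fragments are opaque
}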

Compute Shader write to texture

只谈情不闲聊 submitted on 2019-12-22 05:32:12
Question: I have implemented CPU code that copies a projected texture to a larger texture on a 3D object, 'decal baking' if you will, but now I need to implement it on the GPU. To do this I hope to use a compute shader, as it's quite difficult to add an FBO in my current setup. [Example image from my current implementation] This question is more about how to use compute shaders, but for anyone interested, the idea is based on an answer I got from user jozxyqk, seen here: https://stackoverflow.com/a/27124029
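
A minimal compute shader sketch of the general write-to-texture pattern, assuming the destination is bound as image unit 0 with glBindImageTexture on the host side (the image format, names, workgroup size, and the placeholder UV mapping are illustrative):

#version 430

layout(local_size_x = 16, local_size_y = 16) in;

uniform sampler2D projectedTex;                                 // source decal
layout(rgba8, binding = 0) writeonly uniform image2D destTex;   // target texture

void main() {
    ivec2 texel = ivec2(gl_GlobalInvocationID.xy);
    ivec2 size  = imageSize(destTex);
    if (texel.x >= size.x || texel.y >= size.y) return;

    // Placeholder mapping from destination texel to source UV; a real decal
    // bake would use the mesh's UV layout and the projector's matrix here.
    vec2 uv = (vec2(texel) + 0.5) / vec2(size);
    vec4 color = textureLod(projectedTex, uv, 0.0);

    imageStore(destTex, texel, color);
}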

OpenGLES 2.0: gl_VertexID equivalent?

て烟熏妆下的殇ゞ submitted on 2019-12-22 05:10:13
Question: I'm trying to create a grid of points by calculating vertex positions dynamically, based on their index in the array of vertices sent to the shader. Is there an equivalent of the gl_VertexID variable that I can call from within my shader? Or another way of accessing their position in the array without having to send more data to the GPU? Thanks, Josh. Here's my vertex shader: attribute vec4 vertexPosition; uniform mat4 modelViewProjectionMatrix; vec4 temp; uniform float width; void main() {
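
OpenGL ES 2.0 has no gl_VertexID, so the common workaround is to upload one extra float attribute holding each vertex's index; a minimal vertex shader sketch reusing the excerpt's width and modelViewProjectionMatrix uniforms (the vertexIndex attribute is an illustrative addition):

attribute float vertexIndex;              // 0.0, 1.0, 2.0, ... uploaded as an extra attribute
uniform mat4 modelViewProjectionMatrix;
uniform float width;                      // points per grid row

void main() {
    // Recover 2D grid coordinates from the flat index.
    float x = mod(vertexIndex, width);
    float y = floor(vertexIndex / width);
    gl_Position = modelViewProjectionMatrix * vec4(x, y, 0.0, 1.0);
    gl_PointSize = 1.0;
}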
