glsl

multisampling and fragment shader

自闭症网瘾萝莉.ら submitted on 2019-12-08 19:08:29
Multisampling does not seem to work for fragments generated by a fragment shader. In the example below, the fragment shader produces a procedural checkerboard texture. The outer edges of the square are properly antialiased, but the inner edges of the procedural texture are not. Is the fragment shader evaluated only once per pixel? Or are the texture coordinates the same for each sample of a given pixel? Below is the code, and the image shows its output (notice that the procedural edges, between the white and gray squares, are not antialiased, whereas the geometry edges, between black and white
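One commonly suggested workaround is to anti-alias the procedural pattern analytically inside the fragment shader instead of relying on MSAA. A minimal sketch follows (desktop GLSL 3.30 assumed; the varying name vUV and the 8x8 grid size are made up for illustration), softening the checker edge over roughly one pixel with fwidth() and smoothstep():

#version 330 core
in vec2 vUV;                 // interpolated texture coordinates, assumed in [0,1]
out vec4 fragColor;

void main() {
    vec2 f = fract(vUV * 8.0) - 0.5;   // position inside an 8x8 checker cell
    float v = f.x * f.y;               // sign flips at every checker edge
    float w = fwidth(v);               // approximate size of one pixel's footprint
    float aa = smoothstep(-w, w, v);   // smooth 0..1 transition across the edge
    fragColor = vec4(mix(vec3(0.5), vec3(1.0), aa), 1.0);
}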

OpenGL shadow acne when using shadow2D

拟墨画扇 submitted on 2019-12-08 18:19:32
I'm trying to implement shadow mapping in OpenGL. It works, but aliasing is visible, so I decided to use sampler2DShadow with shadow2D, because I've read it serves as a simple anti-aliasing solution. But as soon as I use it, it causes very significant shadow acne across the whole scene. Note that when using sampler2D with texture2D, there is none. Is this intended? If so, how should I solve it? This is how it looks while using sampler2DShadow: This is the part of the fragment shader that handles shadows: float theta = clamp(dot(normalize(lightVector), normalize(vertNormal)), 0.0, 1.0); float bias = 0.005
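A common remedy is a slope-scaled depth bias applied to the reference depth before the hardware comparison. The sketch below is one possible completion of the snippet above, under assumed names (shadowMap, shadowCoord); it uses the modern texture() overload rather than the older shadow2D():

uniform sampler2DShadow shadowMap;   // depth texture with GL_COMPARE_REF_TO_TEXTURE enabled
in vec4 shadowCoord;                 // light-space position, remapped to [0,1]
in vec3 lightVector;
in vec3 vertNormal;

float shadowFactor() {
    float theta = clamp(dot(normalize(lightVector), normalize(vertNormal)), 0.0, 1.0);
    float bias  = clamp(0.005 * tan(acos(theta)), 0.0, 0.01);  // larger bias at grazing angles
    vec3 coord  = shadowCoord.xyz / shadowCoord.w;
    coord.z    -= bias;                // compare against a slightly nearer depth
    return texture(shadowMap, coord);  // 0..1 visibility, filtered by hardware PCF
}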

GLSL memoryBarrierShared() usefulness?

随声附和 submitted on 2019-12-08 17:23:11
Question: I am wondering about the usefulness of memoryBarrierShared. Indeed, when I look at the documentation for the barrier function, I read: For any given static instance of barrier in a compute shader, all invocations within a single work group must enter it before any are allowed to continue beyond it. This ensures that values written by one invocation prior to a given static instance of barrier can be safely read by other invocations after their call to the same static instance of barrier.
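For context, here is a minimal compute-shader sketch of the usual pairing (the work-group size and shared array are invented for illustration): memoryBarrierShared() orders and makes visible the shared-memory writes, while barrier() makes every invocation wait until all of them have reached that point:

#version 430
layout(local_size_x = 64) in;
shared float partial[64];

void main() {
    uint i = gl_LocalInvocationID.x;
    partial[i] = float(i) * 2.0;               // each invocation writes its own slot
    memoryBarrierShared();                     // make the write visible to the work group
    barrier();                                 // wait until every invocation has written
    float neighbor = partial[(i + 1u) % 64u];  // now safe to read another invocation's slot
    // ... use neighbor ...
}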

Which memory barrier does glGenerateMipmap require?

妖精的绣舞 submitted on 2019-12-08 17:12:20
Question: I've written to the first mipmap level of a texture using GL_ARB_shader_image_load_store. The documentation states that I need to call glMemoryBarrier before I use the contents of this image in other operations, in order to flush the caches appropriately. For instance, before a glTexSubImage2D operation I need to issue GL_TEXTURE_UPDATE_BARRIER_BIT, and before a draw call using a shader that samples that texture I need to issue GL_TEXTURE_FETCH_BARRIER_BIT. However, which
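For reference, a hypothetical sketch of the GLSL side of such a write (the binding point and image format are assumptions); the open question above is which glMemoryBarrier() bit the application must issue between this write and a later glGenerateMipmap call:

#version 420
layout(binding = 0, rgba8) writeonly uniform image2D mip0;  // level 0 bound as an image

void main() {
    ivec2 p = ivec2(gl_FragCoord.xy);
    imageStore(mip0, p, vec4(1.0, 0.0, 0.0, 1.0));  // incoherent write: a glMemoryBarrier()
    // call is required before other OpenGL operations read these texels
}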

Are 1D Textures Supported in WebGL yet?

我的未来我决定 submitted on 2019-12-08 15:24:30
Question: I've been trying to find a clear answer, but it seems no one has clearly asked the question. Can I use a 1D sampler and 1D texture in WebGL in Chrome, Firefox, Safari, IE, etc.? EDIT: Understandably, 1 is indeed a power of 2 (2^0 = 1), meaning you could effectively use a 2D sampler and texture with a height of 1 and a width of 256 or 512, etc., to replicate a 1D texture. 1D textures are not moot; they exist because they not only have a purpose, but are intended to translate into optimizations on the
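WebGL 1 and 2 expose no TEXTURE_1D target, so the usual workaround is exactly the N x 1 2D texture described above. A minimal GLSL ES 1.00 sketch (the uniform and varying names are made up):

precision mediump float;
uniform sampler2D u_lut;   // an N x 1 RGBA texture standing in for a 1D lookup table
varying float v_t;         // lookup coordinate in [0,1]

void main() {
    // Sample along the single row; y = 0.5 stays centred on that row.
    gl_FragColor = texture2D(u_lut, vec2(v_t, 0.5));
}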

Ray Tracing a scene with triangle meshes with WebGL 2.0, deferred shading, framebuffers [closed]

人走茶凉 submitted on 2019-12-08 13:11:07
Question: After my original post was flagged as "too broad" by other Stack Overflow users, I will rephrase my question in fewer lines. I have implemented a ray marcher in Shadertoy, and I understand all the math behind ray-object intersection, and I want to make the next step to ray

Morphing in WebGL shaders with mix() between more than two targets

∥☆過路亽.° submitted on 2019-12-08 13:02:08
Question: I am trying to build an image slider using three.js and am having difficulty wrapping my head around passing the appropriate state to the GLSL shaders so I can transition between the slides. I can easily do it between two targets (be they textures or models) by simply easing between 0 and 1 and passing it as a float attribute like this: attribute float mix; vec4 color = mix(tex1, tex2, mix); But I can't understand how to approach it with more than two targets. Should I pass a number and do
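One common approach is to drive the shader with a single float in [0, N-1] and blend only between the two adjacent targets. A hypothetical GLSL ES 1.00 fragment-shader sketch for three slide textures (the uniform names are assumptions; the progress value is passed as a uniform, eased on the JavaScript side, rather than as a per-vertex attribute):

precision mediump float;
uniform sampler2D u_slide0;
uniform sampler2D u_slide1;
uniform sampler2D u_slide2;
uniform float u_progress;   // 0.0 .. 2.0
varying vec2 v_uv;

void main() {
    float seg = min(floor(u_progress), 1.0);  // which pair of slides we are between
    float t   = u_progress - seg;             // local blend factor in [0,1]
    vec4 a = texture2D(u_slide0, v_uv);
    vec4 b = texture2D(u_slide1, v_uv);
    if (seg >= 1.0) {                         // second segment: blend slide1 -> slide2
        a = texture2D(u_slide1, v_uv);
        b = texture2D(u_slide2, v_uv);
    }
    gl_FragColor = mix(a, b, t);
}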

Implementing a LookAt function in the Vertex Shader with OpenGL

余生长醉 submitted on 2019-12-08 12:35:12
Question: For purposes beyond my control, I need to calculate a ModelView matrix in my vertex shader. I understand this is a bad idea, but I don't have a choice right now. Here is the code in my vertex shader, based on https://stackoverflow.com/a/6802424: mat4 lookAt(vec3 eye, vec3 center, vec3 up) { vec3 zaxis = normalize(center - eye); vec3 xaxis = normalize(cross(up, zaxis)); vec3 yaxis = cross(zaxis, xaxis); mat4 matrix; // Column Major matrix[0][0] = xaxis.x; matrix[1][0] = yaxis.x; matrix[2][0] =
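For comparison, here is a complete gluLookAt-style version written with the mat4 column constructor instead of per-element assignment. It assumes a right-handed convention looking down -z; the snippet above uses a different axis order, so signs may need adjusting to match:

mat4 lookAt(vec3 eye, vec3 center, vec3 up) {
    vec3 zaxis = normalize(center - eye);      // forward
    vec3 xaxis = normalize(cross(zaxis, up));  // right
    vec3 yaxis = cross(xaxis, zaxis);          // orthogonal up
    // mat4() takes columns, so each vec4 below is one column of the view matrix.
    return mat4(
        vec4( xaxis.x,          yaxis.x,          -zaxis.x,         0.0),
        vec4( xaxis.y,          yaxis.y,          -zaxis.y,         0.0),
        vec4( xaxis.z,          yaxis.z,          -zaxis.z,         0.0),
        vec4(-dot(xaxis, eye), -dot(yaxis, eye),  dot(zaxis, eye),  1.0));
}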

WebGL: Particle engine using FBO, how to correctly write and sample particle positions from a texture?

帅比萌擦擦* submitted on 2019-12-08 09:59:01
Question: I suspect I'm not correctly rendering particle positions to my FBO, or not correctly sampling those positions when rendering, though that may not be the actual problem with my code, admittedly. I have a complete jsfiddle here: http://jsfiddle.net/p5mdv/53/ A brief overview of the code: Initialization: create an array of random particle positions in x, y, z; create an array of texture sampling locations (e.g. for 2 particles, the first particle at 0,0, the next at 0.5,0); create a Frame Buffer Object and two
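For reference, a stripped-down, hypothetical pair of shaders for this ping-pong scheme (all names are placeholders; the render pass relies on vertex texture fetch, which most WebGL implementations support):

// Simulation pass (fragment shader): one texel of a float FBO texture per particle.
precision highp float;
uniform sampler2D u_positions;   // previous frame's positions
uniform float u_dt;
varying vec2 v_uv;               // this particle's texel coordinate

void main() {
    vec3 pos = texture2D(u_positions, v_uv).xyz;
    pos.y -= 0.1 * u_dt;                     // placeholder "physics"
    gl_FragColor = vec4(pos, 1.0);           // written into the other FBO texture
}

// Render pass (vertex shader): each point looks up its own position.
attribute vec2 a_lookup;         // e.g. (0.0, 0.0) and (0.5, 0.0) for two particles
uniform sampler2D u_positions;
uniform mat4 u_mvp;

void main() {
    vec3 pos = texture2D(u_positions, a_lookup).xyz;
    gl_Position = u_mvp * vec4(pos, 1.0);
    gl_PointSize = 2.0;
}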

GLSL : Accessing an array in a for-loop hinders performance

十年热恋 submitted on 2019-12-08 09:50:29
Question: Okay, so I'm developing an Android app for a game I'm making (with LibGDX). I have a fragment shader, and I noticed that I was getting ~41 FPS. I was playing around with the code to see where the problem was, and I saw that changing how I accessed an array from arrayName[i] to arrayName[0] brought the performance back up to 60 FPS, even though the loop only iterated once in this specific instance. Here's the code: #version 300 es precision highp float; uniform sampler2D u_texture; in vec2
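A hypothetical before/after sketch of the kind of change described (the uniform names are made up): on some mobile GPU compilers, dynamically indexing an array forces it into slower, indexable storage, whereas constant indices let the values stay in registers:

#version 300 es
precision highp float;

uniform sampler2D u_texture;
uniform vec4 u_tint[4];
in vec2 v_texCoord;
out vec4 fragColor;

void main() {
    vec4 acc = vec4(0.0);
    // Slow on some drivers: u_tint[i] is a dynamically indexed access.
    for (int i = 0; i < 4; i++) {
        acc += u_tint[i];
    }
    // Manually unrolled equivalent with constant indices, often faster:
    // vec4 acc = u_tint[0] + u_tint[1] + u_tint[2] + u_tint[3];
    fragColor = texture(u_texture, v_texCoord) * 0.25 * acc;
}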