shader

Basic shadow mapping artifacts using OpenGL and GLSL

安稳与你 submitted on 2019-12-11 02:45:20
Question: I've written a simple OpenGL test application exercising the basic shadow mapping technique. I have removed most artifacts except the one on the occluder's back face. That face shows artifacts because I enable front-face culling during the first rendering pass (filling the shadow depth map), so I get self-shadowing z-fighting. Several tutorials say that to solve this the depth of the vertex position in light space needs to be biased by a very …
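A minimal, hedged sketch of that slope-scaled depth bias, applied when sampling the shadow map in the lighting pass. The uniform and varying names (shadowMap, lightSpacePos, lightDir) are illustrative, not taken from the question:

#version 330 core
// Compare the fragment's light-space depth against the shadow map,
// biasing more as the surface turns away from the light (slope-scaled bias).
in vec4 lightSpacePos;        // vertex position transformed by the light's view-projection
in vec3 normalWS;             // world-space surface normal
uniform sampler2D shadowMap;
uniform vec3 lightDir;        // direction from the surface towards the light
out vec4 fragColor;

float shadowFactor()
{
    vec3 proj = lightSpacePos.xyz / lightSpacePos.w;   // clip space -> NDC
    proj = proj * 0.5 + 0.5;                           // NDC -> [0,1] texture space
    float bias = max(0.005 * (1.0 - dot(normalize(normalWS), normalize(lightDir))), 0.0005);
    float closest = texture(shadowMap, proj.xy).r;
    return (proj.z - bias) > closest ? 0.0 : 1.0;      // 0 = fully shadowed
}

void main()
{
    fragColor = vec4(vec3(shadowFactor()), 1.0);
}

Scaling the bias with the angle between normal and light keeps steep, grazing faces from self-shadowing without visibly detaching shadows from their casters.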

Count image similarity on GPU [OpenGL/OcclusionQuery]

|▌冷眼眸甩不掉的悲伤 submitted on 2019-12-11 02:30:13
Question: OpenGL. Let's say I've drawn one image and then a second one over it using XOR. Now I've got a mostly black buffer with non-black pixels somewhere. I've read that I can use shaders to count black [ rgb(0,0,0) ] pixels on the GPU, and that it has something to do with occlusion queries: http://oss.sgi.com/projects/ogl-sample/registry/ARB/occlusion_query.txt Is it possible, and how? [any programming language] If you have another idea on how to measure similarity via OpenGL/GPU, that would be great too. Answer 1: I …
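A hedged sketch of the shader half of the occlusion-query idea: draw the XOR result through a fragment shader that discards black pixels, so the query's "samples passed" count equals the number of differing pixels. The name xorResult is illustrative:

#version 330 core
uniform sampler2D xorResult;   // texture holding the XOR of the two images
in vec2 uv;
out vec4 fragColor;

void main()
{
    vec3 c = texture(xorResult, uv).rgb;
    if (all(lessThan(c, vec3(1.0 / 255.0))))
        discard;                // identical pixels XOR to black and are not counted
    fragColor = vec4(c, 1.0);   // every surviving fragment counts as a passed sample
}

On the application side the draw call sits between glBeginQuery(GL_SAMPLES_PASSED, q) and glEndQuery(GL_SAMPLES_PASSED), and glGetQueryObjectuiv(q, GL_QUERY_RESULT, &count) reads the pixel count back; depth testing should be disabled so nothing else rejects fragments.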

Alpha Blending Layers for Linear Light Mode

寵の児 submitted on 2019-12-11 02:26:54
Question: I'm recreating some Photoshop blending and I'm trying to use Linear Light mode. In Photoshop you'd have a background layer at 100% opacity and a 50% opacity top layer set to Linear Light as the blend mode. I did find info on how to do the Linear Light blend, but it only works when both layers are at 100% opacity. Here is the shader code that does Linear Light mode and matches Photoshop when both layers are at 100% opacity: #define BlendLinearDodgef …
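A hedged sketch of one way to handle the 50% layer: blend at full strength, then interpolate towards the untouched background by the layer opacity. Function names are illustrative; both halves of Linear Light (Linear Burn below 0.5, Linear Dodge above) collapse to the same clamped expression:

// Per-channel Linear Light: base + 2*blend - 1, clamped to [0,1].
vec3 blendLinearLight(vec3 base, vec3 blend)
{
    return clamp(base + 2.0 * blend - 1.0, 0.0, 1.0);
}

// Apply the top layer's opacity: blend at 100%, then fade towards the base.
vec3 blendLinearLightWithOpacity(vec3 base, vec3 blend, float opacity)
{
    return mix(base, blendLinearLight(base, blend), opacity);
}

With an opaque background layer, interpolating towards the base this way is the usual model for layer opacity applied on top of a blend mode.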

OpenGL: render time limit on linux

橙三吉。 submitted on 2019-12-11 02:19:28
Question: I'm implementing a computation algorithm via OpenGL and Qt. All computations are executed in a fragment shader. Sometimes, when I try to run a heavy computation (one that takes more than 5 seconds on the GPU), OpenGL aborts it before it finishes. I suppose this is a mechanism like TDR on Windows. I think I should split the input data into several parts, but I need to know how long a computation is allowed to run. How can I obtain the render time limit on Linux (it would be great if there is a cross-platform …
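Standard OpenGL does not expose the watchdog timeout, so the usual workaround is exactly the splitting the question mentions: process the data in bands, one short draw call each. A hedged sketch with illustrative names:

#version 330 core
// Each pass covers only a band of the output; the host sizes the band with
// glViewport/glScissor and offsets it via the uniform, so no single draw call
// runs long enough to trigger the GPU reset.
uniform sampler2D inputData;
uniform int bandStart;         // first output row covered by this pass
out vec4 result;

void main()
{
    ivec2 texel = ivec2(int(gl_FragCoord.x), int(gl_FragCoord.y) + bandStart);
    vec4 v = texelFetch(inputData, texel, 0);
    result = v * v;            // stand-in for the real, expensive computation
}

Issuing glFinish (or a fence sync) between bands keeps the submissions separate, and the band height can be tuned empirically until no single pass comes near the limit.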

Setting up basic shader program - GLchar and file_contents undefined?

亡梦爱人 submitted on 2019-12-11 01:34:56
Question: I am trying to experiment with basic shaders in my program. I came across a nice tutorial that walks you through writing a basic shader "util class", I guess you would call it, which should allow me to apply a vertex and fragment shader. So I linked GLEW to my project (I also have GLU, GLUT and GLaux included) and inserted the following into a header file: #include "include\gl\glew.h" #include <math.h> #include <stdio.h> #include <stdlib.h> static struct { /* ... fields for buffer and texture …

GLSL textureCube and texture2D in same shader

让人想犯罪 __ submitted on 2019-12-11 01:12:40
Question: I can't seem to be able to use both texture2D() and textureCube() in one shader. When I do, nothing shows up and there is no error. I tried this both with my own shader loader and with Apple's GLSL shader builder, and the same thing happens. It happens even if I have textureCube() in the vertex shader and texture2D() in the fragment shader. They work fine by themselves, but as soon as they're called together, no matter in which order, nothing shows up. Answer 1: You need to bind both textures as …
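A hedged sketch of a fragment shader using both sampler types; the decisive part is on the application side, where each sampler must be given its own texture unit. Names are illustrative:

#version 120
uniform samplerCube envMap;     // expected on texture unit 0
uniform sampler2D   diffuseMap; // expected on texture unit 1
varying vec3 reflectDir;
varying vec2 uv;

void main()
{
    vec4 env     = textureCube(envMap, reflectDir);
    vec4 diffuse = texture2D(diffuseMap, uv);
    gl_FragColor = mix(diffuse, env, 0.3);
}

On the host: glActiveTexture(GL_TEXTURE0); glBindTexture(GL_TEXTURE_CUBE_MAP, cubeTex); glActiveTexture(GL_TEXTURE1); glBindTexture(GL_TEXTURE_2D, diffuseTex); then set the cube sampler uniform to 0 and the 2D sampler to 1 with glUniform1i. Leaving both samplers at the default unit 0 makes one unit serve two different targets, which is invalid and typically renders nothing while reporting no error.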

WebGL2 — How to store and retrieve 3D texture data needed by 3D grid of vertices to calculate new vertex positions

不问归期 submitted on 2019-12-11 01:03:50
Question: A 3D physics simulation needs access to neighboring vertices' positions and attributes in the shader to calculate a vertex's new position. The 2D version works, but I am having trouble porting the solution to 3D. Flip-flopping two 3D textures seems right: inputting sets of x, y and z coordinates for one texture and getting back vec4s containing position-velocity-acceleration data of neighboring points, to use to calculate new positions and velocities for each vertex. The 2D version uses 1 draw call with a …
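A hedged sketch of the read side in WebGL2 (GLSL ES 3.00): each vertex derives its lattice coordinate from gl_VertexID and uses texelFetch on a sampler3D to read its own state and a neighbour's. All names and the update rule are placeholders, not from the question:

#version 300 es
precision highp float;
precision highp sampler3D;

uniform sampler3D stateTex;    // RGBA32F texel per grid point, e.g. xyz = position
uniform ivec3 gridSize;

out vec4 newState;             // captured with transform feedback for the next pass

ivec3 coordFromIndex(int i)
{
    return ivec3(i % gridSize.x,
                 (i / gridSize.x) % gridSize.y,
                  i / (gridSize.x * gridSize.y));
}

void main()
{
    ivec3 c    = coordFromIndex(gl_VertexID);
    vec4 self  = texelFetch(stateTex, c, 0);
    vec4 right = texelFetch(stateTex, clamp(c + ivec3(1, 0, 0), ivec3(0), gridSize - 1), 0);

    newState = mix(self, right, 0.1);          // placeholder update rule
    gl_Position = vec4(0.0, 0.0, 0.0, 1.0);    // unused with RASTERIZER_DISCARD enabled
}

For the flip-flop write step, the updated state has to go either through transform feedback into a buffer or into the second 3D texture one layer at a time via gl.framebufferTextureLayer, since a colour attachment is always a single layer.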

Unity Post-processing PostProcessEffectRenderer shows in Editor but not in build

放肆的年华 submitted on 2019-12-11 00:58:35
Question: After adding an implementation of a PostProcessEffectRenderer to the Unity post-processing stack, the effect works perfectly in the Unity Editor but does not show in the built game. Changes to build quality have no effect; the effect does not show even at maximum quality settings, building for Windows x86_64. Grayscale.cs: using System; using UnityEngine; using UnityEngine.Rendering.PostProcessing; [Serializable] [PostProcess(typeof(GrayscaleRenderer), PostProcessEvent.AfterStack, "Custom/Grayscale" …

Transform to NDC, calculate and transform back to worldspace

谁都会走 submitted on 2019-12-11 00:06:36
Question: I have a problem moving world coordinates to NDC coordinates, calculating something with them, and moving them back, all inside the shader. The code looks like this: vec3 testFunc(vec3 pos, vec3 dir){ //pos and dir are in worldspace, convert to NDC vec4 NDC_dir = MVP * vec4(dir,0); vec4 NDC_pos = MVP * vec4(pos,1); NDC_dir /= NDC_dir.w; NDC_pos /= NDC_pos.w; //... do some calculations => get newPos in NDC //Transform newPos back to worldspace vec4 WS_newPos = inverse(MVP) * vec4(newPos,1); return WS …
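A hedged sketch of the round trip for a position, plus the usual fix for the direction: a direction has w = 0, so dividing it by its own w is meaningless; transform two points and take the difference instead. Names are illustrative, and passing inverse(MVP) in as a uniform avoids inverting the matrix per fragment:

uniform mat4 MVP;
uniform mat4 invMVP;    // inverse(MVP), ideally computed once on the CPU

vec3 worldToNdc(vec3 posWS)
{
    vec4 clip = MVP * vec4(posWS, 1.0);
    return clip.xyz / clip.w;          // perspective divide: clip -> NDC
}

vec3 ndcToWorld(vec3 posNDC)
{
    vec4 ws = invMVP * vec4(posNDC, 1.0);
    return ws.xyz / ws.w;              // undo the divide on the way back
}

vec3 dirToNdc(vec3 posWS, vec3 dirWS)
{
    // A direction is the difference of two transformed points.
    return worldToNdc(posWS + dirWS) - worldToNdc(posWS);
}

Keep in mind that NDC is non-linear after the perspective divide, so a computation done there will not generally give the same answer as the equivalent computation in world space.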

Fixing GLSL shaders for Nvidia and AMD

时间秒杀一切 submitted on 2019-12-10 23:00:28
Question: I am having problems getting my GLSL shaders to work on both AMD and Nvidia hardware. I am not looking for help fixing a particular shader, but for how to avoid these problems in general. Is it possible to check whether a shader will compile on AMD/Nvidia drivers without running the application on a machine with the respective hardware and actually trying it? I know that, in the end, testing is the only way to be sure, but during development I would like to at least avoid the obvious problems.
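Short of testing on the real hardware, two habits catch most of the divergence: run every shader through the Khronos reference compiler (glslangValidator) as a build step, and write strictly versioned GLSL rather than relying on NVIDIA's historically permissive front end. A small hedged example of the kind of code that behaves differently between vendors:

#version 110
// GLSL 1.10 has no implicit int -> float conversion, so the commented line is
// invalid by the spec. NVIDIA's compiler has often accepted it anyway, while
// AMD's stricter front end reports an error: the classic "works on NVIDIA,
// fails on AMD" symptom.
varying vec2 uv;

void main()
{
    // float scale = 2;      // accepted by some NVIDIA drivers, rejected by AMD
    float scale = 2.0;       // portable: explicit float literal
    gl_FragColor = vec4(uv * scale, 0.0, 1.0);
}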