glsl

Using both programmable and fixed pipeline functionality in OpenGL

五迷三道 submitted on 2019-12-10 15:37:24
Question: I have a vertex shader that transforms vertices to create a fisheye effect. Is it possible to use just the vertex shader and the fixed pipeline for the fragment portion? Basically, I have an application that doesn't use shaders. I want to apply a fisheye effect using a vertex shader to transform all vertices, and then leave it to the application to take care of lighting, texturing, etc. If this is not possible, is it possible to get a fisheye effect by messing with the contents of the …
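A minimal sketch of what such a fisheye vertex shader might look like, assuming old-style fixed-function inputs (the equidistant-projection mapping and all names here are illustrative, not from the question); note that replacing only the vertex stage while keeping fixed-function fragment processing was legal in legacy desktop GL, which answers the first half of the question:

```glsl
#version 120

void main()
{
    // Transform to eye space with the fixed-function modelview matrix.
    vec4 eye = gl_ModelViewMatrix * gl_Vertex;

    // Equidistant fisheye: the angle from the view axis maps linearly
    // to the distance from the image center.
    float d = length(eye.xyz);
    float theta = acos(-eye.z / max(d, 1e-6)); // angle to the -Z view axis
    float r = theta / radians(90.0);           // 180-degree field of view
    vec2 dir = (length(eye.xy) > 1e-6) ? normalize(eye.xy) : vec2(0.0);

    // Depth handling is simplified here; a real shader would map d into
    // the depth range expected by the rest of the pipeline.
    gl_Position = vec4(dir * r, -eye.z / d, 1.0);

    // Pass through fixed-function state so lighting/texturing still work.
    gl_FrontColor = gl_Color;
    gl_TexCoord[0] = gl_MultiTexCoord0;
}
```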

Opengl shader problems - weird light reflection artifacts

我怕爱的太早我们不能终老 submitted on 2019-12-10 14:56:31
Question: I've been wrestling with this for days. I think I've finally narrowed it down to a problem with the per-vertex tangents, but I'm not sure of the best way to fix it. The context is an iPhone app, OpenGL ES 2, using my own engine. My shader is a bump-map (normal-map) variety that uses the supplied per-vertex tangent to create a TBN matrix. The vertex shader transforms the light vectors and eye vectors to tangent space, passes them to the fragment shader, and calculates the lighting. But some geometry in my first …
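For reference, a typical per-vertex TBN construction in an OpenGL ES 2 vertex shader looks roughly like the following (all attribute and uniform names are illustrative). A common source of exactly these reflection artifacts is tangent handedness flipping across the mesh, which is why the tangent is often passed as a vec4 with the bitangent sign in w:

```glsl
attribute vec3 a_position;
attribute vec3 a_normal;
attribute vec4 a_tangent;    // xyz = tangent, w = handedness (+1 or -1)

uniform mat4 u_modelView;
uniform mat4 u_mvp;
uniform mat3 u_normalMatrix;
uniform vec3 u_lightPosEye;  // light position in eye space

varying vec3 v_lightTS;      // light vector in tangent space
varying vec3 v_eyeTS;        // eye vector in tangent space

void main()
{
    vec3 n = normalize(u_normalMatrix * a_normal);
    vec3 t = normalize(u_normalMatrix * a_tangent.xyz);
    // Re-orthogonalize the tangent and derive the bitangent using the
    // stored handedness, so mirrored UV islands get a correct basis.
    t = normalize(t - n * dot(n, t));
    vec3 b = cross(n, t) * a_tangent.w;

    vec3 posEye = vec3(u_modelView * vec4(a_position, 1.0));
    vec3 l = u_lightPosEye - posEye;
    vec3 e = -posEye;

    // Dotting against t, b, n transforms eye space -> tangent space.
    v_lightTS = vec3(dot(l, t), dot(l, b), dot(l, n));
    v_eyeTS   = vec3(dot(e, t), dot(e, b), dot(e, n));

    gl_Position = u_mvp * vec4(a_position, 1.0);
}
```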

OpenGL 3.+ glsl compatibility mess?

老子叫甜甜 submitted on 2019-12-10 14:44:08
Question: So, I googled a lot of OpenGL 3.+ tutorials, all incorporating shaders (GLSL 330 core). However, I do not have a graphics card supporting these newer GLSL implementations; I may have to update my driver, but even then I'm not sure whether my card is intrinsically able to support it. Currently my OpenGL version is 3.1, and on Windows with C++ I created a modern context with backwards compatibility. My GLSL version is 1.30 via the NVIDIA Cg compiler (full definition), and GLSL 1.30 -> version 130. The …
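For context, most GLSL 330 tutorial shaders can be back-ported to #version 130 (the level an OpenGL 3.1 context guarantees) with largely mechanical changes, since the in/out qualifiers already exist in 1.30; a sketch:

```glsl
#version 130
// layout(location = N) qualifiers from GLSL 330 tutorials are not
// available in 1.30 without extensions; bind attribute locations from
// the application with glBindAttribLocation before linking instead.
in vec3 position;
out vec3 v_color;

void main()
{
    v_color = position * 0.5 + 0.5;
    gl_Position = vec4(position, 1.0);
}
```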

GLSL: check if an extension is supported

狂风中的少年 submitted on 2019-12-10 14:34:28
Question: You can't use an unsupported extension; the driver will return a compilation error. But can you check the availability of some extension directly from GLSL code? Is there something like this? #version XXX core #if supported(EXT_some_extension) #extension EXT_some_extension : enable #endif … UPDATE: Regarding Nicol Bolas's answer — yes, that occurred to me too, but for some reason it is not working: #version 150 core #extension ARB_explicit_attrib_location : enable #ifdef ARB_explicit_attrib …
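The detail the update is most likely tripping over: the preprocessor macro that a supported extension defines carries the GL_ prefix (the GLSL spec says each supported extension defines a macro named after its full registry string). A sketch of the working pattern:

```glsl
#version 150 core
#extension GL_ARB_explicit_attrib_location : enable

// A supported extension defines a macro with its full name, including
// the GL_ prefix, so test for GL_ARB_..., not ARB_...:
#ifdef GL_ARB_explicit_attrib_location
layout(location = 0) in vec4 position;
#else
in vec4 position;   // location bound from the application instead
#endif

void main()
{
    gl_Position = position;
}
```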

OpenGL shimmering pixels artifact

限于喜欢 submitted on 2019-12-10 14:18:54
Question: I'm trying to implement ambient occlusion on a voxel-based mesh and get these flickering white pixels at the edges of faces. Here is my fragment shader: #version 120 varying vec4 color; varying vec4 normal; void main(void) { float light = normal.w + max(0.15*dot(normal.xyz, vec3(1,1,1)), 0.0); gl_FragColor = vec4(color.xyz * light, 1.0); } If I remove the light factor from the gl_FragColor vec4, the artifact disappears. The light value is calculated from the ambient occlusion value (normal.w) …
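One common cause of white-pixel shimmer at face edges is the light term overshooting 1.0 where interpolated normals misbehave; normalizing the interpolated normal and clamping the result is a cheap way to test that hypothesis (a sketch of that change, not necessarily the asker's actual fix):

```glsl
#version 120
varying vec4 color;
varying vec4 normal;

void main(void)
{
    // Interpolated normals are not unit length at edges; renormalize,
    // and clamp the light term so edge pixels cannot blow out to white.
    vec3 n = normalize(normal.xyz);
    float light = normal.w + max(0.15 * dot(n, normalize(vec3(1.0))), 0.0);
    gl_FragColor = vec4(color.xyz * clamp(light, 0.0, 1.0), 1.0);
}
```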

OpenGL shadow peter-panning

风流意气都作罢 submitted on 2019-12-10 14:13:23
Question: I'm adding shadows to a scene in OpenGL by doing two draw passes, one to a depth map and one to the normal framebuffer. Without a bias in the depth-map comparison, there is a lot of shadow acne. This is fixed by adding a bias to the depth-map check. However, this causes the shadow to 'detach' from the object when the light is moved to a different angle. I believe this effect is called peter-panning and is caused by a larger bias being needed at different angles. The usual fix for this …
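The usual fix the question is heading toward is a slope-scaled bias (small where the surface faces the light, larger at grazing angles), often combined with rendering back faces into the depth map so the bias can stay small. A sketch of the fragment-shader side, with illustrative names and constants:

```glsl
uniform sampler2D u_shadowMap;
varying vec4 v_shadowCoord;   // position in light clip space
varying vec3 v_normal;        // surface normal (world space)
uniform vec3 u_lightDir;      // direction toward the light

float shadowFactor()
{
    vec3 proj = v_shadowCoord.xyz / v_shadowCoord.w;
    proj = proj * 0.5 + 0.5;  // NDC -> [0,1] texture/depth range

    // Slope-scaled bias: grows as the surface turns away from the
    // light, so the constant offset that causes peter-panning can be
    // kept small for light-facing surfaces.
    float nDotL = max(dot(normalize(v_normal), normalize(u_lightDir)), 0.0);
    float bias = clamp(0.005 * tan(acos(nDotL)), 0.0, 0.01);

    float storedDepth = texture2D(u_shadowMap, proj.xy).r;
    return (proj.z - bias > storedDepth) ? 0.0 : 1.0;  // 0 = in shadow
}
```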

Using different texture types in same texture unit at the same time in shader

╄→гoц情女王★ submitted on 2019-12-10 13:12:04
Question: I came across a nasty problem in my program when I tried to use the same texture unit (number 0) for different texture types (i.e. a normal 2D texture and a cube map) in my shader. It appears that GL issues a GL_INVALID_OPERATION (0x0502) after the first glDrawArrays call. In my application code I load the textures into different texture targets: void setup_textures() { unsigned int width, height; int components; unsigned int format; float param[8]; vector<unsigned char> pngData; GLenum …
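The relevant GL rule: two samplers of different types (e.g. sampler2D and samplerCube) must not resolve to the same texture unit while the program is in use, or the draw call raises GL_INVALID_OPERATION, even though binding a 2D texture and a cube map to different *targets* of one unit is itself legal. The shader-side fix is simply to keep the samplers on separate units; a sketch with illustrative names:

```glsl
// Point each sampler at a *different* texture unit from the application:
//   glUniform1i(glGetUniformLocation(prog, "u_diffuse"), 0);  // unit 0
//   glUniform1i(glGetUniformLocation(prog, "u_envMap"),  1);  // unit 1
uniform sampler2D u_diffuse;
uniform samplerCube u_envMap;

varying vec2 v_uv;
varying vec3 v_reflectDir;

void main()
{
    vec4 base = texture2D(u_diffuse, v_uv);
    vec4 env  = textureCube(u_envMap, v_reflectDir);
    gl_FragColor = mix(base, env, 0.25);
}
```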

Cost of Branching on uniforms on modern GPUs

纵然是瞬间 submitted on 2019-12-10 12:44:50
Question: When using GLSL on modern (GL 3.3+) GPUs, what is the likely cost of branching on a uniform? In my engine I'm getting to the point where I have a lot of shaders, and several different quality presets for many of them. As it stands, I'm using uniforms with if() in the shaders to choose between quality presets. However, I'm worried that I might achieve better performance by recompiling the shaders and using #ifdef. The problem with that is the need to worry about tracking and resetting …
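The two alternatives the question weighs look like this side by side (a sketch with hypothetical helper functions). On most modern hardware a branch on a uniform is coherent across the whole draw, so there is no divergence penalty; the residual cost is typically that registers may be allocated for the more expensive path, which is what the #ifdef variant avoids:

```glsl
// Option A: one shader, runtime branch on a uniform. Every invocation
// takes the same path, so the branch itself is cheap.
uniform int u_quality;

vec3 shade(vec3 base)
{
    if (u_quality > 1)
        return expensiveLighting(base);   // hypothetical helper
    return cheapLighting(base);           // hypothetical helper
}

// Option B: compile-time variants. The application prepends a line like
// "#define HIGH_QUALITY" to the source when building the high-quality
// program, trading runtime flexibility for shader-count bookkeeping.
#ifdef HIGH_QUALITY
#define SHADE(base) expensiveLighting(base)
#else
#define SHADE(base) cheapLighting(base)
#endif
```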

Three.js - apply shader to blur a geometry

馋奶兔 submitted on 2019-12-10 12:17:45
Question: I've been learning Three.js over the past day or so; however, I'm struggling with shaders. I'm trying to blur a geometry I have. I tried using depth of field with the examples found on the Three.js site, but it made my foreground objects slightly blurry too. So I'm hoping to single out one object and blur just that. I have a mesh that I created with a LambertMaterial, basically like so: var material = new THREE.MeshLambertMaterial({ color: 0x5c5c5c, emissive: 0x000000, shading: THREE.FlatShading, …
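One way to blur a single object is to render it to its own render target and composite it back with a blur shader (e.g. via a THREE.ShaderMaterial). A sketch of the fragment shader for a separable Gaussian blur, horizontal pass (uniform names are illustrative and would be wired up from the Three.js side):

```glsl
// Horizontal pass of a separable 9-tap Gaussian blur; run a second
// pass with u_direction = vec2(0.0, 1.0) for the vertical blur.
uniform sampler2D u_texture;    // render target holding the object
uniform vec2 u_texelSize;       // 1.0 / render-target resolution
uniform vec2 u_direction;       // vec2(1.0, 0.0) for this pass

varying vec2 v_uv;

void main()
{
    float weights[5];
    weights[0] = 0.227027; weights[1] = 0.194594; weights[2] = 0.121621;
    weights[3] = 0.054054; weights[4] = 0.016216;

    vec4 sum = texture2D(u_texture, v_uv) * weights[0];
    for (int i = 1; i < 5; ++i) {
        vec2 off = u_direction * u_texelSize * float(i);
        sum += texture2D(u_texture, v_uv + off) * weights[i];
        sum += texture2D(u_texture, v_uv - off) * weights[i];
    }
    gl_FragColor = sum;
}
```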

Get the last frame color from GLSL

那年仲夏 submitted on 2019-12-10 12:09:12
Question: I want to process a texture in the fragment shader. However, the current frame should be based on information from the last frame, such as neighbor positions. So I need to write the current frame into some buffer/object and read it in the next loop. Can someone give me a direction for this requirement? Answer 1: Use framebuffer objects (FBOs). Create two FBOs into which you render alternately, each time binding the other one as the texture sourcing the data. Source: https://stackoverflow.com/questions/8291314/get-the-last
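On the shader side, the ping-pong scheme in the answer just means the fragment shader samples a texture holding the previous frame while rendering the new one into the other FBO; a sketch with illustrative names:

```glsl
// Each frame the application renders into FBO A while binding FBO B's
// color attachment here as u_prevFrame, then swaps A and B.
uniform sampler2D u_prevFrame;   // color attachment of last frame's FBO
uniform vec2 u_texelSize;        // 1.0 / resolution, to reach neighbors

varying vec2 v_uv;

void main()
{
    vec4 self  = texture2D(u_prevFrame, v_uv);
    vec4 left  = texture2D(u_prevFrame, v_uv - vec2(u_texelSize.x, 0.0));
    vec4 right = texture2D(u_prevFrame, v_uv + vec2(u_texelSize.x, 0.0));

    // Example update rule: relax toward the neighbor average, so each
    // frame depends on the previous frame's neighborhood.
    gl_FragColor = mix(self, 0.5 * (left + right), 0.1);
}
```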