GLSL

Deferred Shading and attenuation

Submitted by て烟熏妆下的殇ゞ on 2019-12-14 03:42:31
Question: I recently added deferred shading support to my engine; however, I ran into some attenuation issues. As you can see, when I render the light volume (a sphere), it doesn't blend nicely with the ambient part of the image! Here is how I declare my point light:

```cpp
PointLight pointlight;
pointlight.SetPosition(glm::vec3(0.0, 6.0, 0.0));
pointlight.SetIntensity(glm::vec3(1.0f, 1.0f, 1.0f));
```

And here is how I compute the light sphere radius:

```cpp
Attenuation attenuation = pointLights[i].GetAttenuation();
```
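A standard way to size a deferred light volume is to solve the attenuation equation for the distance at which the light's contribution falls below a visible threshold, and use that distance as the sphere radius. Below is a minimal sketch of that calculation; the parameter names are stand-ins, since the question does not show the `Attenuation` class in full:

```cpp
#include <algorithm>
#include <cmath>
#include <glm/glm.hpp>

// Solve quadratic*d^2 + linear*d + constant = maxChannel/threshold for d:
// beyond this distance the light contributes less than `threshold`
// (5/256 is a common cutoff for 8-bit output) and can be culled.
float ComputeLightRadius(float constant, float linear, float quadratic,
                         const glm::vec3& intensity)
{
    float maxChannel = std::max({ intensity.r, intensity.g, intensity.b });
    float threshold  = 5.0f / 256.0f;
    float disc = linear * linear
               - 4.0f * quadratic * (constant - maxChannel / threshold);
    return (-linear + std::sqrt(disc)) / (2.0f * quadratic);
}
```

If the radius is right but the sphere edge is still visible, one common cause is the ambient term being re-added inside each light-volume pass instead of being rendered exactly once in its own pass.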

OpenGL Fragment Shader not working correctly - Unable to draw any color other than white

Submitted by 拜拜、爱过 on 2019-12-14 03:28:45
Question: Basic description of the problem: I don't seem to be able to draw a triangle in any color other than white. Here is my fragment shader code:

```glsl
#version 330 core
out vec3 color;
void main() {
    color = vec3(1.0, 0.0, 0.0);
}
```

For the sake of clarity, I have not included any other code. My vertex shader works; I can see a white triangle on the screen. I am new to the programmable-pipeline way of using OpenGL. More details and main.cpp code: it has been suggested that the fault may be that my program
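A white triangle from a shader that writes red often indicates that the fragment shader failed to compile or the program failed to link, with the driver falling back to a default. Checking the logs is the first diagnostic step; a minimal sketch, assuming a current GL context and a loader such as GLEW:

```cpp
#include <cstdio>
#include <GL/glew.h>

// Returns true if the shader compiled; prints the driver's log otherwise.
bool CheckShaderCompiled(GLuint shader)
{
    GLint ok = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (ok != GL_TRUE) {
        char log[1024];
        glGetShaderInfoLog(shader, sizeof(log), nullptr, log);
        std::fprintf(stderr, "shader compile error: %s\n", log);
    }
    return ok == GL_TRUE;
}

// Same check for the link stage.
bool CheckProgramLinked(GLuint program)
{
    GLint ok = GL_FALSE;
    glGetProgramiv(program, GL_LINK_STATUS, &ok);
    if (ok != GL_TRUE) {
        char log[1024];
        glGetProgramInfoLog(program, sizeof(log), nullptr, log);
        std::fprintf(stderr, "program link error: %s\n", log);
    }
    return ok == GL_TRUE;
}
```

Also make sure glUseProgram(program) is called before the draw call; without it the shader never runs at all.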

Why are atomic counters and images referred to as uniforms when they are actually not uniform?

Submitted by 给你一囗甜甜゛ on 2019-12-14 02:48:13
Question: Atomic counters and images can be written to in shaders... so they are not constant (uniform). Why are they called uniforms then?

Answer 1: You are thinking of uniform from the wrong perspective. While it is true that uniforms are constant, their more important characteristic is that they provide... uniform variable storage across all invocations of a shader. uniform is, after all, nothing but a storage qualifier, the same as in or out. Both of the data types you mention belong to a special class GLSL
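The distinction is easiest to see in the declarations themselves: the uniform qualifier describes the handle, which is identical for every invocation, even though the storage behind it is mutable. A sketch of both declaration forms, shown as a GLSL source string (the names and binding points are my own):

```cpp
// Every invocation sees the same counter/image *binding* (that part is
// uniform), but the contents behind the binding are freely writable.
const char* kFragSrc = R"GLSL(
#version 430 core
layout(binding = 0, offset = 0) uniform atomic_uint hitCount;  // mutable value
layout(binding = 0, r32ui)      uniform uimage2D    histogram; // mutable texels
void main() {
    atomicCounterIncrement(hitCount);
    imageAtomicAdd(histogram, ivec2(gl_FragCoord.xy), 1u);
}
)GLSL";
```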

How can I read float data with glReadPixels

Submitted by ╄→гoц情女王★ on 2019-12-14 02:33:31
Question: I've been trying for a couple of days to read float data with glReadPixels. My C++ code:

```cpp
// expanded to whole-screen quad via vertex shader
glDrawArrays( GL_TRIANGLES, 0, 3 );
int size = width * height;
GLfloat* pixels = new GLfloat[ size ];
glReadPixels( 0, 0, width, height, GL_RED, GL_FLOAT, pixels );
pixelVector.resize( size );
for ( int i = 0; i < size; i++ ) {
    pixelVector[i] = (float) pixels[i];
}
```

and my shader code:

```glsl
out float data;
void main() {
    data = 0.02;
}
```

Strangely, I get 0.0196078
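That value is the giveaway: 0.02 × 255 rounds to 5, and 5/255 ≈ 0.0196078, so the shader output is being quantized to an 8-bit framebuffer before glReadPixels converts it back to float. Reading true floats requires rendering into a floating-point color attachment first. A minimal sketch, assuming GL 3.0+, a current context, and width/height in scope:

```cpp
#include <vector>
#include <GL/glew.h>

// Create an FBO with a single-channel 32-bit float color attachment,
// render into it, then read the texels back without 8-bit quantization.
GLuint fbo = 0, tex = 0;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, width, height, 0,
             GL_RED, GL_FLOAT, nullptr);   // float storage, not 8-bit
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);

// ... draw the fullscreen triangle here ...

std::vector<GLfloat> pixels(width * height);
glReadPixels(0, 0, width, height, GL_RED, GL_FLOAT, pixels.data());
glBindFramebuffer(GL_FRAMEBUFFER, 0);
```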

Passing uniform 4x4 matrix to vertex shader program

Submitted by 眉间皱痕 on 2019-12-14 00:19:47
Question: I am trying to learn OpenGL and am following this: http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/ I was following along up until the point where they started passing matrices to the vertex shader to translate the triangle they were drawing. This is the shader program where it starts to go wrong:

```glsl
#version 330 core
layout(location = 0) in vec3 vertexPosition_modelspace;
uniform mat4 MVP;
void main(){
    vec4 v = vec4(vertexPosition_modelspace, 1); // Transform an homogeneous
```
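On the application side, the pattern for feeding that uniform is to look its location up once after linking and upload the matrix each frame. A minimal sketch with GLM, assuming `program` is the linked shader program and a GL loader header is already included:

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

// Look the uniform up once (after glLinkProgram).
GLint mvpLoc = glGetUniformLocation(program, "MVP");

// Build a simple MVP; glm matrices are column-major, which matches
// OpenGL's default, so the transpose argument stays GL_FALSE.
glm::mat4 projection = glm::perspective(glm::radians(45.0f),
                                        4.0f / 3.0f, 0.1f, 100.0f);
glm::mat4 view  = glm::lookAt(glm::vec3(4, 3, 3),   // camera position
                              glm::vec3(0, 0, 0),   // look-at target
                              glm::vec3(0, 1, 0));  // up vector
glm::mat4 model = glm::mat4(1.0f);
glm::mat4 mvp   = projection * view * model;

glUseProgram(program);
glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, glm::value_ptr(mvp));
```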

GLSL array uniform with a dynamic length

Submitted by 拥有回忆 on 2019-12-13 20:22:30
Question: I'm trying to implement a "fog of war" feature in my strategy game. I've been reading some other Q&A on SO and figured that I need to use custom GLSL shaders. So I defined an array containing "vision points":

```javascript
let vision_points = [
    new THREE.Vector2(1500, 1500),
    new THREE.Vector2(1500, -1500),
    new THREE.Vector2(-1500, 1500),
    new THREE.Vector2(-1500, -1500)
]
```

and used a ShaderMaterial, passing in the vision points as a uniform. Since the array's length might be any value, I inject VPOINTSMAX
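GLSL requires array sizes to be compile-time constants, so the usual pattern (which the question's VPOINTSMAX injection hints at) is to compile with a generous maximum and pass the live count as a second uniform. A sketch of the shader side, shown as a source string; the u_vpoint_count and u_vision_radius uniforms and the vWorldPos varying are my own hypothetical names, and in three.js the same code would go into the ShaderMaterial:

```cpp
// Fragment shader with a compile-time maximum and a runtime count;
// only the first u_vpoint_count entries are consulted.
const char* kFogFrag = R"GLSL(
precision mediump float;
#define VPOINTSMAX 64            // injected/overridden at build time
uniform vec2  u_vpoints[VPOINTSMAX];
uniform int   u_vpoint_count;    // how many entries are actually valid
uniform float u_vision_radius;
varying vec2  vWorldPos;         // world-space XZ from the vertex shader

void main() {
    float fog = 1.0;             // fully fogged by default
    for (int i = 0; i < VPOINTSMAX; i++) {
        if (i >= u_vpoint_count) break; // WebGL1 needs constant loop bounds
        if (distance(vWorldPos, u_vpoints[i]) < u_vision_radius)
            fog = 0.0;           // inside someone's vision circle
    }
    gl_FragColor = vec4(0.0, 0.0, 0.0, fog * 0.7); // darken fogged areas
}
)GLSL";
```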

GLSL: How to bind thousands of buffers properly?

Submitted by 六月ゝ 毕业季﹏ on 2019-12-13 19:25:18
Question: I came up with an idea that requires binding thousands of buffers (atomic counter and shader storage ones) to one GLSL program. I first checked whether this makes any sense within the limits of OpenGL, and it seems possible for two reasons. On my laptop, GL_MAX_ATOMIC_COUNTER_BUFFER_BINDINGS and GL_MAX_SHADER_STORAGE_BUFFER_BINDINGS are both around 32k, so OpenGL is inclined to let me bind thousands of buffers for one pass. And OpenGL 4.4 comes up with:

```c
void BindBuffersBase(enum target, uint
```
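That entry point, glBindBuffersBase, binds a whole range of consecutive indexed binding points in one call, which is the right tool when the binding count is large. A minimal sketch, assuming GL 4.4 and that the buffer objects already exist:

```cpp
#include <vector>
#include <GL/glew.h>

// Bind bufs.size() SSBOs to consecutive indexed binding points
// [0, bufs.size()) with one call instead of many glBindBufferBase calls.
void BindManySSBOs(const std::vector<GLuint>& bufs)
{
    glBindBuffersBase(GL_SHADER_STORAGE_BUFFER, /*first=*/0,
                      static_cast<GLsizei>(bufs.size()), bufs.data());
}
```

Note that the binding-point count is only half the story: per-stage shader limits such as GL_MAX_COMPUTE_SHADER_STORAGE_BLOCKS are typically far smaller, so the number of blocks a single shader can declare may be the real constraint.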

WebGL textureCube bias causing seams

Submitted by 泪湿孤枕 on 2019-12-13 16:32:26
Question: I am experimenting with a DDS texture and cubemap mipmaps. When changing the bias in textureCube() I get really nasty normal artifacts. I have no idea what is causing this and can't find much reference on the bias parameter. Live demo (you need to switch to uv and turn off normal): http://dusanbosnjak.com/test/webGL/new/poredjenjeNM/poredjenjeNormalaBias.html (Screenshots omitted; they show the artifacts at bias 4.) Edit: also of note, when you orbit around at, say, bias 6, you can clearly see the cubemap looking more or less
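One plausible reading of seams that worsen with bias: pushing sampling into smaller mip levels means texels near a cube face's edge are filtered only within their own face, and adjacent faces disagree along the shared edge. In desktop OpenGL the usual remedy is seamless cube-map filtering; a minimal sketch (note this is the desktop-GL switch, since WebGL1 has no equivalent, while WebGL2 filters cubemaps seamlessly by default):

```cpp
#include <GL/glew.h>

// GL 3.2+: filter across cube-map face edges so neighboring faces
// blend at the seams instead of each face clamping to its own border.
glEnable(GL_TEXTURE_CUBE_MAP_SEAMLESS);
```

Under WebGL1 the common workaround is to pre-filter the mip chain offline with a tool that performs cube-edge fixup.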

GLSL - Test Fragment Values

Submitted by 给你一囗甜甜゛ on 2019-12-13 16:26:30
Question: Say you have a vec3 colourIn going from a vertex shader to a fragment shader. Is there a way to test a value and overwrite it if you want to? For example, to set any fragment whose blue value is larger than 0.5 to the colour white? In my Shader.frag I implemented this test:

```glsl
if (colourIn.b > 0.5) {   // or if (greaterThan(colourIn.b, 0.5))
    colourIn.b = 0.0;
}
```

It compiles and renders the scene, but I can't tell if it's worked because I'm colourblind (lol)... Have I got the theory right and implemented it
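Two details are worth flagging: shader inputs are read-only in GLSL, so colourIn should be copied into a local before modification (and greaterThan operates on vectors, not scalars); and zeroing the blue channel does the opposite of the stated goal, since white needs all three channels at 1.0. A sketch of the intended test, shown as a fragment-shader source string:

```cpp
const char* kTestFrag = R"GLSL(
#version 330 core
in  vec3 colourIn;   // inputs are read-only; copy before modifying
out vec4 fragColour;

void main() {
    vec3 c = colourIn;
    if (c.b > 0.5)           // scalar comparison: plain >, not greaterThan()
        c = vec3(1.0);       // white, per the stated goal
    fragColour = vec4(c, 1.0);
}
)GLSL";
```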

Trying to make alienrain in Python using the OpenGL function glMapBufferRange

Submitted by 瘦欲@ on 2019-12-13 16:18:47
Question: Just four little lines are causing a problem with the alien rain program that I ported from the OpenGL Superbible. It seems I am having issues trying to write to memory after using the function glMapBufferRange. Update: excellent code by Rabbid76 has solved the problem and provided valuable insight and explanation. Thank you. Required files: ktxloader.py, aliens.ktx. Source code of alienrain.py:

```python
#!/usr/bin/python3
import sys
import time
sys.path.append("./shared")
#from sbmloader import SBMObject #
```
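A common pitfall when porting this call is that glMapBufferRange returns a raw pointer (in PyOpenGL, a ctypes-style pointer that must be cast before it can be written through), not a writable buffer object. The C++ idiom the Superbible uses, which the Python port has to mirror, is a map/write/unmap sequence; a minimal sketch:

```cpp
#include <cstring>
#include <GL/glew.h>

// Map a uniform buffer for writing and copy data in; INVALIDATE lets the
// driver hand back fresh storage instead of stalling on in-flight GPU work.
void UploadViaMap(GLuint ubo, const void* src, GLsizeiptr size)
{
    glBindBuffer(GL_UNIFORM_BUFFER, ubo);
    void* dst = glMapBufferRange(GL_UNIFORM_BUFFER, 0, size,
                                 GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
    if (dst) {
        std::memcpy(dst, src, static_cast<size_t>(size));
        glUnmapBuffer(GL_UNIFORM_BUFFER);  // must unmap before the GPU reads it
    }
}
```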