glsl

Shader attribute mat4 not binding correctly (OpenGL ES 2.0, Android)

牧云@^-^@ Submitted on 2019-12-11 02:52:43

Question: I have the following shader:

    protected final static String vertexShaderCode =
        "attribute vec4 vPosition;" +
        "attribute vec2 texCoord;" +
        "attribute mat4 uMVPMatrix; \n" +
        "varying vec2 vTexCoord;" +
        "void main() {" +
        "  gl_Position = uMVPMatrix * vPosition;" +
        "  vTexCoord = texCoord;" +
        "}";

I want to pass the MVP matrix in as an attribute, but it does not seem to be bound correctly. I am using auto-assigned bindings. When I query the attribute locations after linking the program, as follows: …

Basic shadow mapping artifacts using OpenGL and GLSL

安稳与你 Submitted on 2019-12-11 02:45:20

Question: I've written a simple OpenGL test application for the basic shadow mapping technique. I have removed most artifacts except the one on the occluder's back face. This back face shows artifacts because I enable front-face culling during the first render pass (filling the shadow depth map); consequently I get self-shadowing z-fighting artifacts. Several tutorials say that to solve this kind of problem, the depth of the vertex position in light space needs to be biased by a very …

Max programs in GLSL ES

喜你入骨 Submitted on 2019-12-11 02:09:37

Question: What is the maximum number of programs that can be compiled in GLSL ES? Say I create 100 fragment shaders, each a different effect, compile all of them at runtime, and swap them dynamically with glUseProgram. I assume that every time I compile a new GLSL ES program it is kept somewhere on the GPU. Is there a maximum number of active compiled programs? Answer 1: There is no maximum limit. The only limitation is available memory or other resources, which is controlled by …

GLSL packing 4 float attributes into vec4

邮差的信 Submitted on 2019-12-11 01:44:44

Question: I have a question about the resource consumption of a float attribute in GLSL. Does it take as many resources as a vec4, or not? I ask because uniforms do (at least, they could: https://stackoverflow.com/a/20775024/1559666). If not, does it make any sense to pack 4 floats into one vec4 attribute? Answer 1: Yes, all vertex attributes require some multiple of a 4-component vector for storage. This means a float vertex attribute takes 1 slot, the same as a vec2, vec3 or vec4 would.

OpenGL degenerate GL_TRIANGLES sharing same vertices

若如初见. Submitted on 2019-12-11 01:25:53

Question: I send a vertex buffer plus index buffer of GL_TRIANGLES to the GPU via glDrawElements(). In the vertex shader I wanted to snap some vertices to the same coordinates to simplify a large mesh on the fly. I expected a major performance boost because many triangles collapse to the same point and become degenerate, but I don't get any fps gain. While testing I set my vertex shader to output gl_Position = vec4(0) to degenerate ALL triangles, but there is still no difference. Is there any flag to …

Compiling a shader and linking fail but verification of the shader succeeds

ⅰ亾dé卋堺 Submitted on 2019-12-11 01:17:16

Question: I'm rather confused about what my shaders are doing. I have a shader class which wraps the OpenGL parts of the shading for me. When I build my application in Code::Blocks and run it, the compile phase fails and the linking stage fails, but verification with GL_VALIDATE_STATUS succeeds and the shader actually works. When I run it outside the Code::Blocks IDE, the compile and linking stages succeed and so does verification. When run in the IDE, the program log and info log are empty, not even warnings, but …

GLSL textureCube and texture2D in same shader

让人想犯罪 __ Submitted on 2019-12-11 01:12:40

Question: I can't seem to have both texture2D() and textureCube() in one shader. When I do, nothing shows up and there is no error. I tried this both with my own shader loader and with the Apple GLSL shader builder, and the same thing happens. It happens even if textureCube() is in the vertex shader and texture2D() in the fragment shader. They work fine by themselves, but as soon as they're called together, in either order, nothing shows up. Answer 1: You need to bind both textures as …

OpenGL / GLSL - Using buffer objects for uniform array values

半城伤御伤魂 Submitted on 2019-12-11 00:44:42

Question: My (fragment) shader has a uniform array of 12 structs:

    struct LightSource {
        vec3 position;
        vec4 color;
        float dist;
    };
    uniform LightSource lightSources[12];

In my program I have 12 buffer objects, each containing the data for one light source. (They need to be separate buffers.) How can I bind these buffers to their respective positions inside the shader? I'm not even sure how to retrieve the location of the array: glGetUniformLocation(program, "lightSources"); glGetUniformLocation…

Transform to NDC, calculate and transform back to worldspace

谁都会走 Submitted on 2019-12-11 00:06:36

Question: I have a problem transforming world coordinates to NDC coordinates, doing a calculation with them, and transforming the result back, all inside the shader. The code looks like this:

    vec3 testFunc(vec3 pos, vec3 dir) {
        // pos and dir are in world space; convert to NDC
        vec4 NDC_dir = MVP * vec4(dir, 0);
        vec4 NDC_pos = MVP * vec4(pos, 1);
        NDC_dir /= NDC_dir.w;
        NDC_pos /= NDC_pos.w;
        // ... do some calculations => get newPos in NDC
        // transform newPos back to world space
        vec4 WS_newPos = inverse(MVP) * vec4(newPos, 1);
        return WS…

Why does this GLSL shader work fine with a GeForce but flickers strangely on an Intel HD 4000?

天涯浪子 Submitted on 2019-12-10 23:49:18

Question: Using the OpenGL 3.3 core profile, I'm rendering a full-screen "quad" (as a single oversized triangle) via gl.DrawArrays(gl.TRIANGLES, 0, 3) with the following shaders.

Vertex shader:

    #version 330 core
    #line 1
    vec4 vx_Quad_gl_Position () {
        const float extent = 3;
        const vec2 pos[3] = vec2[](vec2(-1, -1), vec2(extent, -1), vec2(-1, extent));
        return vec4(pos[gl_VertexID], 0, 1);
    }
    void main () {
        gl_Position = vx_Quad_gl_Position();
    }

Fragment shader:

    #version 330 core
    #line 1
    out vec3 out_Color; …