glsl

Problems using GLTexImage3D correctly

こ雲淡風輕ζ submitted on 2019-12-10 23:39:42
Question: This is how I give the Bitmaps to OpenGL (C#):

public static int Generate3DTexture( string[] names ) // file paths
{
    // basically merging the images into one vertical column of images
    MagickImageCollection allimages = new MagickImageCollection();
    foreach (string eachname in names)
        allimages.Add(new MagickImage(eachname));

    MagickImage template = new MagickImage(MagickColor.FromRgba(0, 0, 0, 0), MasterSize.Width, MasterSize.Height * names.Length);
    Point drawpnt = new Point(0, 0);
    foreach
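
A quick way to check that the stacked column actually arrives on the GPU as a 3D texture is to sample individual slices in a trivial fragment shader. A minimal GLSL sketch; volumeTex, u_layer and u_layerCount are illustrative names that do not appear in the question:

uniform sampler3D volumeTex;    // the texture produced by Generate3DTexture
uniform float u_layer;          // slice to display, 0 .. u_layerCount - 1
uniform float u_layerCount;     // number of images merged into the column

void main()
{
    // r runs 0..1 through the depth; sample the centre of the requested slice
    float r = (u_layer + 0.5) / u_layerCount;
    gl_FragColor = texture3D(volumeTex, vec3(gl_TexCoord[0].st, r));
}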

Fixing GLSL shaders for Nvidia and AMD

时间秒杀一切 submitted on 2019-12-10 23:00:28
Question: I am having problems getting my GLSL shaders to work on both AMD and Nvidia hardware. I am not looking for help fixing a particular shader, but for how to avoid these problems in general. Is it possible to check whether a shader will compile on AMD/Nvidia drivers without running the application on a machine with the respective hardware and actually trying it? I know that, in the end, testing is the only way to be sure, but during development I would like to at least avoid the obvious problems.
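
Nvidia's GLSL compiler has historically been more permissive than AMD's, so a shader can compile cleanly on one vendor and fail on the other. The sketch below (illustrative code, not from the question) shows two common laxities and their spec-conforming forms; a reference front end such as Khronos' glslangValidator flags the lax versions without needing either vendor's hardware:

uniform sampler2D tex;
varying vec2 uv;

void main()
{
    // Lax forms that some drivers accept anyway:
    //   float f = 1;                  // implicit int -> float (not allowed in GLSL 1.10 / ES)
    //   vec3 c = texture2D(tex, uv);  // vec4 assigned to vec3
    // Strict forms that compile everywhere:
    float f = 1.0;
    vec3 c = texture2D(tex, uv).rgb;
    gl_FragColor = vec4(c * f, 1.0);
}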

How to blur image using glsl shader without squares?

放肆的年华 submitted on 2019-12-10 22:56:34
Question: I want to blur an image with the Gaussian blur algorithm, and I use the following shaders.

Vertex shader:

attribute vec4 position;
attribute vec4 inputTextureCoordinate;

const int GAUSSIAN_SAMPLES = 9;

uniform float texelWidthOffset;
uniform float texelHeightOffset;

varying vec2 textureCoordinate;
varying vec2 blurCoordinates[GAUSSIAN_SAMPLES];

void main()
{
    gl_Position = position;
    textureCoordinate = inputTextureCoordinate.xy;

    // Calculate the positions for the blur
    int multiplier = 0;
    vec2 blurStep
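
The excerpt cuts off before the fragment stage. For context, the matching fragment shader for this kind of 9-tap separable blur usually looks like the sketch below; the sampler name and the weights are illustrative, not taken from the question:

precision mediump float;

const int GAUSSIAN_SAMPLES = 9;

uniform sampler2D inputImageTexture;

varying vec2 textureCoordinate;
varying vec2 blurCoordinates[GAUSSIAN_SAMPLES];

void main()
{
    // symmetric weights that sum to 1.0
    float weights[GAUSSIAN_SAMPLES];
    weights[0] = 0.05; weights[1] = 0.09; weights[2] = 0.12;
    weights[3] = 0.15; weights[4] = 0.18; weights[5] = 0.15;
    weights[6] = 0.12; weights[7] = 0.09; weights[8] = 0.05;

    vec4 sum = vec4(0.0);
    for (int i = 0; i < GAUSSIAN_SAMPLES; i++)
        sum += texture2D(inputImageTexture, blurCoordinates[i]) * weights[i];

    gl_FragColor = sum;
}

The blur is typically applied in two passes, once with texelWidthOffset set and once with texelHeightOffset set, rather than trying to cover both directions in a single pass.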

glGetUniformLocation returns -1 on Nvidia cards

妖精的绣舞 submitted on 2019-12-10 22:26:09
Question: I've been having an issue with my glGetUniformLocation calls. When building the project on the school computers running ATI graphics cards, the program functions flawlessly. However, on the school computers running Nvidia cards, the calls to glGetUniformLocation return -1.

// C++ side
glLinkProgram(ShaderIds[0]);
ExitOnGLError("ERROR: Could not link the shader program");
ModelMatrixUniformLocaion = glGetUniformLocation(ShaderIds[0], "ModelMatrix");
ViewMatrixUniformLocation =
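
One common cause of this symptom (worth ruling out, though it may not be the asker's exact bug) is that a declared uniform never contributes to the shader's output, so the compiler optimises it away and it stops being an "active" uniform; vendors optimise differently, which is why the same code can work on ATI and return -1 on Nvidia. Illustrative GLSL, not the asker's shader:

uniform mat4 ModelMatrix;       // name queried on the C++ side
uniform mat4 ViewMatrix;
uniform mat4 ProjectionMatrix;

attribute vec4 in_Position;     // hypothetical attribute name

void main()
{
    // Because ModelMatrix feeds gl_Position, it stays active and
    // glGetUniformLocation can find it after linking. A uniform that is
    // declared but never used would legitimately return -1.
    gl_Position = ProjectionMatrix * ViewMatrix * ModelMatrix * in_Position;
}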

Parallax mapping issue in GLSL, OpenGL

ぃ、小莉子 submitted on 2019-12-10 21:26:25
Question: My parallax mapping gives wrong results and I don't know what could be wrong. The "shadow" is in the wrong place. The light points from the viewer towards the cube. Shader program (based on dhpoware.com):

[vert]
varying vec3 lightDir;
varying vec3 viewDir;

attribute vec4 tangent;

void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_TexCoord[0] = gl_MultiTexCoord0;

    vec3 vertexPos = vec3(gl_ModelViewMatrix * gl_Vertex);
    vec3 n = normalize(gl_NormalMatrix * gl_Normal);
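
The excerpt stops before the tangent-space basis is built and before the fragment stage. For reference, the core of classic parallax mapping on the fragment side looks like the sketch below; heightMap, colorMap, u_scale and u_bias are illustrative names, and it assumes lightDir and viewDir arrive already transformed into tangent space:

varying vec3 lightDir;   // tangent-space light direction
varying vec3 viewDir;    // tangent-space view direction

uniform sampler2D colorMap;
uniform sampler2D heightMap;
uniform float u_scale;   // e.g. 0.04
uniform float u_bias;    // e.g. -0.02

void main()
{
    vec3 V = normalize(viewDir);

    // shift the texture coordinate along the tangent-space view vector,
    // proportionally to the sampled height
    float height = texture2D(heightMap, gl_TexCoord[0].st).r;
    vec2 texCoord = gl_TexCoord[0].st + (height * u_scale + u_bias) * V.xy;

    gl_FragColor = texture2D(colorMap, texCoord);
}

If the offset appears to move the wrong way, the usual suspects are the handedness of the tangent basis (the tangent's w component) or light/view vectors that were never actually transformed into tangent space.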

How does OpenGL decide which mip level to use?

大憨熊 submitted on 2019-12-10 21:18:41
Question: The question is fairly self-explanatory. I'm asking in terms of using texture coordinates that could have come from anywhere (a uniform, a varying, another texture fetch). Say, for example, I do a texture fetch on a mipmapped (or anisotropically filtered) texture, and I use the square of a varying which was set in the vertex shader. I assume that GLSL cannot determine the derivative of an arbitrarily complex function like this, so how does it know which mip level to use? Thanks.

Answer 1: It is
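
The answer is cut off, but the underlying mechanism is well documented: fragments are shaded in 2x2 quads, and the hardware estimates the derivative of the texture coordinate by differencing its value between neighbouring pixels in the quad, regardless of how that coordinate was computed. The same calculation can be written by hand in GLSL 1.30+ (names here are illustrative):

#version 130

uniform sampler2D tex;
uniform vec2 texSize;   // texture dimensions in texels
in vec2 anyCoord;       // could come from a varying, a uniform, another fetch...
out vec4 fragColor;

void main()
{
    // per-quad derivatives of the (arbitrary) coordinate, scaled to texels
    vec2 dx = dFdx(anyCoord) * texSize;
    vec2 dy = dFdy(anyCoord) * texSize;

    // roughly the LOD the sampler would select on its own
    float lod = 0.5 * log2(max(dot(dx, dx), dot(dy, dy)));

    fragColor = textureLod(tex, anyCoord, lod);   // force exactly that level
    // or: textureGrad(tex, anyCoord, dFdx(anyCoord), dFdy(anyCoord));
}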

GLSL shader compilation on Linux

夙愿已清 submitted on 2019-12-10 20:49:19
Question: I'm trying to get my cross-platform shader code to compile a default shader, which is nothing more than the basic shader program, like so.

Vertex program:

void main()
{
    //vec4 vertex = matModelView * a_vertex;
    gl_Position = ftransform(); //matProj * vertex;
    gl_TexCoord[0] = gl_MultiTexCoord0;
}

Fragment program:

uniform sampler2D colorMap;

void main()
{
    gl_FragColor = texture2D(colorMap, gl_TexCoord[0].st);
}

I am aware that the gl_-prefixed built-ins are deprecated, but this is just a default shader, so
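
The excerpt ends before the actual error, so one hedged observation: ftransform(), gl_TexCoord, gl_MultiTexCoord0 and gl_FragColor only exist in compatibility contexts, and a Linux driver (for example a core-profile Mesa context) may refuse them outright. If that turns out to be the issue, an equivalent default shader pair without the deprecated built-ins is sketched below; the attribute and uniform names are illustrative and must match whatever the application binds:

// Vertex program
#version 130
uniform mat4 matModelViewProj;
in vec4 a_vertex;
in vec2 a_texCoord;
out vec2 v_texCoord;

void main()
{
    gl_Position = matModelViewProj * a_vertex;
    v_texCoord  = a_texCoord;
}

// Fragment program (separate file)
#version 130
uniform sampler2D colorMap;
in vec2 v_texCoord;
out vec4 fragColor;

void main()
{
    fragColor = texture(colorMap, v_texCoord);
}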

Texture sampling: Calculation of BIAS value from the LOD value

烂漫一生 submitted on 2019-12-10 19:54:08
Question: In GL ES 2.0 the texture2DLod function is not available in the fragment shader, and I need to port a GLSL shader. In GL ES 2.0 I can only use texture2D(sampler2D sampler, vec2 coord, float bias). How do I calculate a bias value equivalent to a known LOD (level of detail) value?

//Example GLSL:
float lod = u_lod;
textureLod(sampler, (uInverseViewMatrix * vec4(r, 0.0)).xy, lod);

//I need GL ES 2.0:
float lod = u_lod;
float bias = ? // <----- calc bias from lod
texture2D(sampler, (uInverseViewMatrix
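
The bias argument is added to the LOD that the hardware computes implicitly from the screen-space derivatives, so to land on an absolute LOD you first have to estimate that implicit LOD and subtract it (or, where available, use the GL_EXT_shader_texture_lod extension, which provides texture2DLodEXT). A GLSL ES sketch relying on GL_OES_standard_derivatives; the uniform and varying names are illustrative:

#extension GL_OES_standard_derivatives : enable
precision highp float;

uniform sampler2D u_sampler;
uniform float u_lod;       // the absolute LOD you want
uniform vec2 u_texSize;    // texture size in texels

varying vec2 v_texCoord;   // whatever coordinate the original shader uses

float implicitLod(vec2 uv)
{
    // derivatives of the coordinate in texel units, as the hardware sees them
    vec2 dx = dFdx(uv) * u_texSize;
    vec2 dy = dFdy(uv) * u_texSize;
    return 0.5 * log2(max(dot(dx, dx), dot(dy, dy)));
}

void main()
{
    float bias = u_lod - implicitLod(v_texCoord);
    gl_FragColor = texture2D(u_sampler, v_texCoord, bias);
}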

GLSL fragment shader syntax error

本小妞迷上赌 submitted on 2019-12-10 19:26:07
Question: The following simple fragment shader fails, leaving me with an uninformative message in the log:

ERROR: 0:1: 'gl_Color' : syntax error syntax error

void main()
{
    vec4 myOutputColor(gl_Color);
    gl_FragColor = myOutputColor;
}

while the following one works:

void main()
{
    gl_FragColor = gl_Color;
}

This boggles my mind, as in Lighthouse3D's tutorial gl_Color is said to be a vec4. Why can't I assign it to another vec4?

Answer 1: Try a normal assignment, like this:

void main()
{
    vec4 myOutputColor =
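
The root cause, for context: GLSL has no C++-style constructor-initialisation of variables, so vec4 myOutputColor(gl_Color); is a syntax error; an initialiser must be written with =. The answer is cut off, but the fix it is heading towards presumably looks like this:

void main()
{
    vec4 myOutputColor = vec4(gl_Color);   // or simply: vec4 myOutputColor = gl_Color;
    gl_FragColor = myOutputColor;
}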

Render to a 3D texture in WebGL2

Deadly submitted on 2019-12-10 19:02:57
Question: I read here that it should be possible to render to a 3D texture in WebGL2 by using multiple render targets and attaching each layer of the 3D texture as a layer of the render target. However, I can't seem to get it to work: there are no errors, but the values in the texture don't change between reads and the texture stays empty. The texture has gl.RGBA8 as internal format, gl.RGBA as format, and a size of 64x64x64. What am I doing wrong? This is what I have tried so far (pseudo code):

this.fbo = gl
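
The pseudo code is cut off, but the GLSL side of the multiple-render-target approach is worth spelling out: a WebGL2 fragment shader cannot select a layer itself (there is no gl_Layer), so each layer is bound as a separate colour attachment with gl.framebufferTextureLayer and enabled via gl.drawBuffers, and the shader writes one output per attachment. A sketch in GLSL ES 3.00 with illustrative output names; with at most MAX_COLOR_ATTACHMENTS layers per pass, a 64-deep texture needs several passes:

#version 300 es
precision highp float;

layout(location = 0) out vec4 layer0;   // -> gl.COLOR_ATTACHMENT0, layer k
layout(location = 1) out vec4 layer1;   // -> gl.COLOR_ATTACHMENT1, layer k+1
layout(location = 2) out vec4 layer2;
layout(location = 3) out vec4 layer3;

void main()
{
    // write something recognisable into each attached layer
    layer0 = vec4(1.0, 0.0, 0.0, 1.0);
    layer1 = vec4(0.0, 1.0, 0.0, 1.0);
    layer2 = vec4(0.0, 0.0, 1.0, 1.0);
    layer3 = vec4(1.0, 1.0, 1.0, 1.0);
}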