shader

JOGL shader programming

空扰寡人 submitted on 2019-11-30 13:03:24
Question: I just started shader programming (GLSL) and created a few shaders with RenderMonkey. Now I want to use these shaders in my Java code. Are there any simple examples of how to do that?
Answer 1: I have found a very simple example:

int v = gl.glCreateShader(GL.GL_VERTEX_SHADER);
int f = gl.glCreateShader(GL.GL_FRAGMENT_SHADER);
BufferedReader brv = new BufferedReader(new FileReader("vertexshader.glsl"));
String vsrc = "";
String line;
while ((line = brv.readLine()) != null) {
    vsrc += line + "\n";
}
gl
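For context (not part of the quoted answer), the files such a loader reads are plain GLSL source. The file names and contents below are assumptions, just a minimal pass-through pair that code like the above could load; the Java side would continue by feeding the strings to glShaderSource, compiling with glCompileShader, and attaching both shaders to a program with glAttachShader/glLinkProgram.

// vertexshader.glsl -- minimal pass-through vertex shader (assumed contents)
void main(void) {
    gl_FrontColor = gl_Color;
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

// fragmentshader.glsl -- outputs the interpolated vertex color (assumed name and contents)
void main(void) {
    gl_FragColor = gl_Color;
}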

What is the most efficient way to implement a convolution filter within a pixel shader?

血红的双手。 submitted on 2019-11-30 10:48:49
Question: Implementing convolution in a pixel shader is somewhat costly because of the very high number of texture fetches. A direct way of implementing a convolution filter is to make N x N lookups per fragment using two for loops per fragment. A simple calculation says that a 1024x1024 image blurred with a 4x4 Gaussian kernel would need 1024 x 1024 x 4 x 4 = 16M lookups. What can one do about this? Can one use some optimization that would need fewer lookups? I am not interested in kernel-specific
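One standard optimization (sketched here, since the answer above is truncated): a Gaussian kernel is separable, so an N x N blur can be done as two 1D passes of N taps each, cutting the work from N^2 to 2N lookups per fragment. Below is a minimal GLSL fragment shader for the horizontal pass; the uniform names are assumptions.

// Horizontal pass of a separable 5-tap Gaussian blur (the vertical pass is identical with a y step).
uniform sampler2D image;      // source texture
uniform float texelWidth;     // 1.0 / texture width

void main(void) {
    vec2 uv = gl_TexCoord[0].xy;
    // Normalized binomial weights 1-4-6-4-1 over 16.
    float w[5];
    w[0] = 0.0625; w[1] = 0.25; w[2] = 0.375; w[3] = 0.25; w[4] = 0.0625;
    vec4 sum = vec4(0.0);
    for (int i = -2; i <= 2; ++i) {
        sum += w[i + 2] * texture2D(image, uv + vec2(float(i) * texelWidth, 0.0));
    }
    gl_FragColor = sum;
}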

Compute normals from displacement map in three.js r.58?

好久不见. submitted on 2019-11-30 10:22:08
I'm using the normal shader in three.js r.58, which I understand requires a normal map. However, I'm using a dynamic displacement map, so a pre-computed normal map won't work in this situation. All the examples I've found of lit displacement maps either use flat shading or pre-computed normal maps. Is it possible to calculate the normals dynamically based on the displaced vertices instead? Edit: I've posted a demo of a sphere with a displacement map showing flat normals: Here's a link to the github repo with all of my examples illustrating this problem, and the solutions I eventually found:
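One general approach (a sketch of the finite-difference idea, not the solution from the linked repo): sample the displacement map at neighbouring texels in the vertex shader and rebuild a normal from the height differences. All names below are assumptions, written in plain WebGL-style GLSL rather than the three.js normal shader.

// Displace the vertex and rebuild its normal from central differences of the height map.
attribute vec3 position;
attribute vec3 normal;
attribute vec2 uv;
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
uniform sampler2D displacementMap;   // height assumed in the red channel
uniform float displacementScale;
uniform vec2 texelSize;              // 1.0 / texture resolution
varying vec3 vNormal;

float height(vec2 coord) {
    return texture2D(displacementMap, coord).r * displacementScale;
}

void main(void) {
    float hL = height(uv - vec2(texelSize.x, 0.0));
    float hR = height(uv + vec2(texelSize.x, 0.0));
    float hD = height(uv - vec2(0.0, texelSize.y));
    float hU = height(uv + vec2(0.0, texelSize.y));
    // Tangent-space normal from the height gradient; transform it to match the lighting setup.
    vNormal = normalize(vec3(hL - hR, hD - hU, 2.0 * texelSize.x));
    vec3 displaced = position + normal * height(uv);
    gl_Position = projectionMatrix * modelViewMatrix * vec4(displaced, 1.0);
}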

Only first Compute Shader array element appears updated

余生颓废 submitted on 2019-11-30 09:51:38
Question: I am trying to send an array of integers to a compute shader, set an arbitrary value for each integer, and then read the array back on the CPU/host. The problem is that only the first element of my array gets updated. My array is initialized with all elements = 5 on the CPU, then I try to set all the values to 2 in the compute shader. C++ code:

this->numOfElements = std::vector<int> numOfElements; //num of elements for each voxel
//Set the reset grid program as current program
glUseProgram(this-
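For comparison (a sketch, since the code above is cut off), a compute shader that sets every element of the buffer to 2 typically looks like the following. The std430 layout and per-invocation indexing via gl_GlobalInvocationID are the details most often behind symptoms like this: under std140, an int array is padded to a 16-byte stride, so a tightly packed readback on the CPU only matches the first element. The block name and binding below are assumptions.

#version 430
// Each invocation writes one element of the SSBO.
layout(local_size_x = 64) in;

layout(std430, binding = 0) buffer Elements {
    int numOfElements[];
};

void main() {
    uint idx = gl_GlobalInvocationID.x;
    if (idx < uint(numOfElements.length())) {
        numOfElements[idx] = 2;   // overwrite the initial value (5) written by the host
    }
}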

Texture lookup in vertex shader behaves differently on iPad device vs iPad simulator - OpenGL ES 2.0

人盡茶涼 submitted on 2019-11-30 09:45:15
I have a vertex shader in which I do a texture lookup to determine gl_Position. I am using this as part of a GPU particle simulation system, where particle positions are stored in a texture. It seems that:

vec4 textureValue = texture2D(dataTexture, vec2(1.0, 1.0));

behaves differently on the simulator than on the iPad device. On the simulator, the texture lookup succeeds (the value at that location is 0.5, 0.5) and my particle appears there. However, on the iPad itself the texture lookup constantly returns 0.0, 0.0. I have tried textures of both the GL_FLOAT and GL_UNSIGNED_BYTE formats. Has
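One common explanation for this symptom (stated as an assumption, not a confirmed diagnosis of this particular setup): vertex texture fetch is optional in OpenGL ES 2.0, and a device that reports GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS as 0 will not sample textures in the vertex shader, even though the desktop GPU behind the simulator does. Where it is supported, the lookup is typically written with an explicit LOD and a texture using nearest filtering and no mipmaps, along these lines (names are assumptions):

// ES 2.0 vertex-shader texture fetch sketch.
attribute vec2 particleRef;        // texel coordinate holding this particle's position
uniform sampler2D dataTexture;     // particle positions packed into a texture
uniform mat4 mvpMatrix;

void main(void) {
    // Explicit-LOD lookup; the bound texture should use GL_NEAREST filtering and no mipmaps.
    vec4 particlePos = texture2DLod(dataTexture, particleRef, 0.0);
    gl_Position = mvpMatrix * vec4(particlePos.xyz, 1.0);
    gl_PointSize = 2.0;
}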

What kind of blurs can be implemented in pixel shaders?

巧了我就是萌 submitted on 2019-11-30 08:34:38
Question: Gaussian, box, radial, directional, motion blur, zoom blur, etc. I read that Gaussian blur can be broken down into passes that could be implemented in pixel shaders, but I couldn't find any samples. Is it right to assume that any effect that depends on pixels other than the current one can't be implemented in a pixel shader?
Answer 1: You can implement everything, as long as you are able to pass information to the shader. The trick, in these cases, is to perform multi-pass rendering. The final
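To make the multi-pass idea concrete (a sketch, not the remainder of the truncated answer): render the scene to a texture, then run a 1D blur shader twice, once with a horizontal step and once with a vertical step. The uniform names below are assumptions.

// One 1D pass of a 9-tap box blur. Run with direction = (1/width, 0) for the horizontal
// pass, then with direction = (0, 1/height) on the result for the vertical pass.
uniform sampler2D sceneTexture;
uniform vec2 direction;            // step between taps, in texture coordinates

void main(void) {
    vec2 uv = gl_TexCoord[0].xy;
    vec4 sum = vec4(0.0);
    for (int i = -4; i <= 4; ++i) {
        sum += texture2D(sceneTexture, uv + float(i) * direction);
    }
    gl_FragColor = sum / 9.0;
}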

How to Solve Rendering Artifact in Blinn/Loop's Resolution Independent Curve Rendering?

落花浮王杯 submitted于 replaced: 落花浮王杯 submitted on 2019-11-30 07:39:20
While implementing Blinn/Loop's curve-rendering algorithm, I realized there is a special case for the loop curve type. As described in their paper (subsection 4.4, pages 6-7), the curve should be divided into two, but I'm really confused about how to obtain the intersection point. Here's my rendering result: As stated in the paper, this artifact occurs when either td/sd or te/se lies within the interval [0, 1]. My source code:

...
case CURVE_TYPE_LOOP:
    td = d2 + sqrt(4.0 * d1 * d3 - 3.0 * d2 * d2);
    sd = 2.0 * d1;
    te = d2 - sqrt(4.0 * d1 * d3 - 3.0 * d2 * d2);
    se = 2.0 * d1;
    if ((td / sd > 0.0 && td / sd
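For the split itself, the usual approach (a sketch of the math only, not taken from the paper's code) is to subdivide the cubic with de Casteljau at t = td/sd (or te/se) whenever that ratio lies in (0, 1), and then render the two halves as separate loop segments whose problematic parameter value now sits at an endpoint.

// De Casteljau split of a cubic Bezier at parameter t. In practice this runs on the
// CPU while building the curve's triangles; GLSL syntax is used here only as pseudocode.
void splitCubic(in vec2 b[4], in float t, out vec2 left[4], out vec2 right[4]) {
    vec2 b01  = mix(b[0], b[1], t);
    vec2 b12  = mix(b[1], b[2], t);
    vec2 b23  = mix(b[2], b[3], t);
    vec2 b012 = mix(b01, b12, t);
    vec2 b123 = mix(b12, b23, t);
    vec2 mid  = mix(b012, b123, t);
    left[0]  = b[0]; left[1]  = b01;  left[2]  = b012; left[3]  = mid;
    right[0] = mid;  right[1] = b123; right[2] = b23;  right[3] = b[3];
}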

Help with Pixel Shader effect for brightness and contrast

一笑奈何 submitted on 2019-11-30 07:06:49
What is a simple pixel shader effect to apply brightness and contrast? I found this one, but it doesn't seem to be correct:

sampler2D input : register(s0);
float brightness : register(c0);
float contrast : register(c1);

float4 main(float2 uv : TEXCOORD) : COLOR
{
    float4 color = tex2D(input, uv);
    float4 result = color;
    result = color + brightness;
    result = result * (1.0 + contrast) / 1.0;
    return result;
}

Thanks!
Answer: Is this what you are looking for?

float Brightness : register(C0);
float Contrast : register(C1);
sampler2D Texture1Sampler : register(S0);
float4 main(float2 uv : TEXCOORD) : COLOR
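For reference, a commonly used formulation is to offset by brightness and then scale around mid-grey for contrast, so that 0.5 stays fixed. The question's shaders are HLSL; below is a GLSL sketch of the same math with assumed uniform names.

// Brightness as an additive offset, contrast as a scale around mid-grey.
uniform sampler2D inputImage;
uniform float brightness;   // e.g. -1.0 .. 1.0
uniform float contrast;     // e.g. -1.0 .. 1.0

void main(void) {
    vec4 color = texture2D(inputImage, gl_TexCoord[0].xy);
    vec3 rgb = color.rgb + brightness;              // brightness: shift all channels
    rgb = (rgb - 0.5) * (1.0 + contrast) + 0.5;     // contrast: scale around 0.5
    gl_FragColor = vec4(clamp(rgb, 0.0, 1.0), color.a);
}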

What is the relationship between gl_Color and gl_FrontColor in both vertex and fragment shaders

扶醉桌前 submitted on 2019-11-30 06:45:40
I have pass-through vertex and fragment shaders.

Vertex shader:
void main(void) {
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

Fragment shader:
void main(void) {
    gl_FragColor = gl_Color;
}

Those produce an empty rendering (black, not the background color the way glClearBuffer does). If I modify the vertex shader to set gl_FrontColor to gl_Color, it does render the untouched OpenGL buffer ... which is the expected behavior of pass-through shaders.

void main(void) {
    gl_FrontColor = gl_Color; //Added line
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = gl
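To spell out the relationship the title asks about: in the vertex shader, gl_Color is the per-vertex color attribute and gl_FrontColor/gl_BackColor are output varyings; in the fragment shader, gl_Color is the interpolated front- or back-facing color chosen from those outputs, so it is undefined unless the vertex shader writes gl_FrontColor. A minimal working pair, consistent with the snippets above (a sketch, not the asker's exact code):

// Vertex shader: gl_Color here is the per-vertex color attribute.
void main(void) {
    gl_FrontColor = gl_Color;      // feed the attribute into the front-color varying
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

// Fragment shader: gl_Color here is the interpolated gl_FrontColor (or gl_BackColor on back faces).
void main(void) {
    gl_FragColor = gl_Color;
}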

[Shader Notes] [NPR] Toon Rendering - Outline Rendering

白昼怎懂夜的黑 submitted on 2019-11-30 05:45:34
Contents
Preface
[NPR] Outline Rendering
Surface Angle Silhouette
Parameter & Texture Control
Procedural Geometry Silhouette
The z-bias and vertex-normal methods

Preface
This post draws on the article 《【NPR】漫谈轮廓线的渲染》 ("[NPR] A Discussion of Outline Rendering"). Link: https://blog.csdn.net/candycat1992/article/details/45577749

[NPR] Outline Rendering
Surface Angle Silhouette
The dot product of the view direction and the surface normal gives the silhouette information: the closer the result is to 0, the closer the point is to the silhouette. In practice this is usually approximated with a one-dimensional texture, sampled with the dot product of the view direction and the vertex normal (a GLSL sketch of this idea follows at the end of this post).
Two methods are used to implement this technique:
one uses a parameter _Outline to control the outline width;
the other uses a one-dimensional texture.

Parameter & Texture Control
The texture used is:
The outline produced by texture control is hard to tune: in some places the outline is very wide, while in others it is not captured at all.
The advantage of this method is that it is simple and fast; the result is obtained in a single pass, and texture filtering can be used to anti-alias the outline.
However, it also has many limitations: for example, it only works for certain models, and models like a cube are problematic. Although we can use a variable to control the outline width (when using a texture, this is the width of the black region in the texture)
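To make the surface-angle approach concrete, here is a minimal GLSL fragment-shader sketch (this is not the original post's Unity shader; the uniform and varying names, and storing the 1D ramp as a 1xN 2D texture, are assumptions for illustration):

// Surface-angle silhouette: darken fragments whose normal is nearly
// perpendicular to the view direction.
uniform sampler2D rampTex;   // 1D ramp stored as a 1xN 2D texture
uniform float _Outline;      // threshold controlling outline width
varying vec3 vNormal;        // interpolated surface normal (view space)
varying vec3 vViewDir;       // direction from the fragment toward the viewer

void main(void) {
    float edge = dot(normalize(vNormal), normalize(vViewDir)); // ~0 near the silhouette
    // Variant 1: a hard threshold driven by the _Outline parameter.
    float hardMask = step(_Outline, edge);                     // 0 on the outline, 1 elsewhere
    // Variant 2: sample the 1D ramp with the dot product instead.
    float rampMask = texture2D(rampTex, vec2(edge, 0.5)).r;
    vec3 baseColor = vec3(1.0);                                // stand-in for the shaded color
    gl_FragColor = vec4(baseColor * hardMask * rampMask, 1.0);
}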