glsl

GLSL - Using custom output attribute instead of gl_Position

让人想犯罪 __ submitted on 2019-11-27 13:46:19
I am currently learning OpenGL with shaders (3.3). There is one thing I can't seem to work out, though. I have read that using built-in variables like gl_Position and gl_FragCoord is deprecated in OpenGL 3+, so I wanted to use my own output variable. Instead of this:

    #version 330
    layout(location = 0) in vec2 i_position;
    out vec4 o_position;

    void main() {
        gl_Position = vec4(i_position, 0.0, 1.0);
    }

I wrote this:

    #version 330
    layout(location = 0) in vec2 i_position;
    out vec4 o_position;

    void main() {
        o_position = vec4(i_position, 0.0, 1.0);
    }

The shaders compile without problems in…
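For reference, gl_Position itself is not deprecated in the core profile: it is the mandatory clip-space output of the vertex stage, and a user-defined out variable cannot stand in for it. Below is a minimal sketch of a 3.3 vertex shader that keeps gl_Position and uses a custom out only for data handed to the next stage (the v_texcoord name is illustrative):

    #version 330 core
    layout(location = 0) in vec2 i_position;
    out vec2 v_texcoord;   // custom outs carry data to the fragment shader

    void main() {
        // gl_Position is still required: it is the clip-space position the
        // rasterizer consumes, and no user-defined variable can replace it.
        gl_Position = vec4(i_position, 0.0, 1.0);
        v_texcoord = i_position * 0.5 + 0.5;   // example payload for the fragment shader
    }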

Why do shaders have to be in html file for webgl program?

心不动则不痛 submitted on 2019-11-27 12:06:29
I have seen the following question, where someone asked how to remove shaders from HTML: WebGL - is there an alternative to embedding shaders in HTML? There are elaborate workarounds for loading a file containing the shader suggested in the answers to that question. In the tutorial I saw, the shader code is embedded directly in the HTML, and the JavaScript code refers to it using getElementById. But embedding the shader directly in the HTML is ugly for many reasons. Why can't I just refer to it externally using the src attribute?

    <script type="x-shader/x-fragment" id="shader-fs" src="util/fs">…
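The reason the trick fails is that a browser will not fetch the body of a script tag with an unrecognized type from its src attribute, so the source never arrives in the page; the usual alternative is to request the file yourself. A sketch of that approach using fetch, assuming an existing WebGL context gl (the helper name is illustrative):

    // Sketch: load shader source over HTTP instead of embedding it in the page.
    async function loadShaderSource(url) {
      const response = await fetch(url);
      if (!response.ok) throw new Error('Failed to load ' + url);
      return response.text();
    }

    // Usage: compile the fetched source as a fragment shader.
    loadShaderSource('util/fs').then((src) => {
      const shader = gl.createShader(gl.FRAGMENT_SHADER);
      gl.shaderSource(shader, src);
      gl.compileShader(shader);
    });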

Convention of faces in OpenGL cubemapping

送分小仙女□ submitted on 2019-11-27 11:42:39
Question: What is the convention OpenGL follows for cube maps? I followed this convention (found on a website) and used the corresponding GLenum to specify the six faces, GL_TEXTURE_CUBE_MAP_POSITIVE_X_EXT, but I always get the wrong Y, so I have to swap the positive-Y and negative-Y faces. Why?

             ________
            |        |
            | pos y  |
            |        |
     _______|________|_________________
    |       |        |        |        |
    | neg x | pos z  | pos x  | neg z  |
    |       |        |        |        |
    |_______|________|________|________|
            |        |
            | neg y  |
            |________|

Answer 1: "but I always get wrong Y, so I…"
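In the shader, a cube map is sampled with a direction vector, and the face is selected from the component with the largest magnitude; OpenGL's face orientations follow the RenderMan convention, which is why images authored top-down often appear flipped on the Y faces. A minimal sampling sketch (the uniform and varying names are illustrative):

    #version 330 core
    in vec3 v_direction;            // direction from the cube center, per fragment
    out vec4 fragColor;
    uniform samplerCube u_skybox;   // illustrative uniform name

    void main() {
        // The face is picked from the component of v_direction with the
        // largest magnitude; e.g. v_direction = (0, 1, 0) samples the +Y face.
        fragColor = texture(u_skybox, v_direction);
    }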

Should I calculate matrices on the GPU or on the CPU?

£可爱£侵袭症+ submitted on 2019-11-27 11:28:38
Question: Should I prefer to calculate matrices on the CPU or on the GPU? Say I have the matrices P * V * M: should I compute the product on the CPU so that I can send the final matrix to the GPU (GLSL), or should I send the three matrices separately to the GPU so that GLSL can compute the final matrix? In the latter case GLSL would have to calculate the MVP matrix for every vertex, so it is probably faster to precompute it on the CPU. But let's say that GLSL only has to calculate the MVP…
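The trade-off is easiest to see in the shader itself: if the three matrices arrive as separate uniforms, the two mat4 products are repeated for every vertex, whereas a premultiplied MVP costs one CPU-side multiply per draw call. A sketch of both options (the uniform names are illustrative):

    #version 330 core
    layout(location = 0) in vec3 a_position;

    // Option A: premultiply P * V * M once per draw call on the CPU.
    uniform mat4 u_mvp;

    // Option B: send the matrices separately; the GPU then repeats the
    // same mat4 * mat4 products for every vertex.
    // uniform mat4 u_projection, u_view, u_model;

    void main() {
        gl_Position = u_mvp * vec4(a_position, 1.0);
        // Option B equivalent:
        // gl_Position = u_projection * u_view * u_model * vec4(a_position, 1.0);
    }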

What can cause glDrawArrays to generate a GL_INVALID_OPERATION error?

喜欢而已 submitted on 2019-11-27 11:04:31
Question: I've been attempting to write a two-pass GPU implementation of the Marching Cubes algorithm, similar to the one detailed in the first chapter of GPU Gems 3, using OpenGL and GLSL. However, the call to glDrawArrays in my first pass consistently fails with GL_INVALID_OPERATION. I've looked up all the documentation I can find, and found these conditions under which glDrawArrays can throw that error: GL_INVALID_OPERATION is generated if a non-zero buffer object name is bound to an enabled…
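A common first step in narrowing this down is to drain any stale errors before the draw call and check again immediately after it, so an error raised by an earlier call is not misattributed. A sketch of that technique, shown here in WebGL-flavoured JavaScript for brevity (the question itself uses desktop GL, where the same pattern applies with glGetError):

    // Clear any pending errors first, so they are not blamed on the draw
    // call, then check again right after drawing.
    function checkedDrawArrays(gl, mode, first, count) {
      while (gl.getError() !== gl.NO_ERROR) { /* drain stale errors */ }
      gl.drawArrays(mode, first, count);
      const err = gl.getError();
      if (err === gl.INVALID_OPERATION) {
        console.error('drawArrays raised GL_INVALID_OPERATION; inspect the bound program, VAO and buffers');
      }
    }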

How can I improve this WebGL / GLSL image downsampling shader

吃可爱长大的小学妹 submitted on 2019-11-27 10:59:55
Question: I am using WebGL to resize images client-side very quickly within an app I am working on. I have written a GLSL shader that performs simple bilinear filtering on the images that I am downsizing. It works fine for the most part, but there are many occasions where the resize is huge, e.g. from a 2048x2048 image down to 110x110 in order to generate a thumbnail. In these instances the quality is poor and far too blurry. My current GLSL shader is as follows:

    uniform float textureSizeWidth;
    uniform …
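One common fix is to take more taps per output pixel than the four that a single bilinear fetch blends, since at a 2048-to-110 reduction each output pixel covers roughly 19x19 source texels. A box-filter sketch in WebGL-era GLSL, not the asker's shader (the uniform and varying names are illustrative):

    // Fragment shader: average a 4x4 grid of taps centered on the sample
    // point. A single bilinear lookup only blends 4 texels, far too few
    // for very large downscales.
    precision mediump float;
    uniform sampler2D u_texture;
    uniform vec2 u_texelSize;   // 1.0 / source texture size
    varying vec2 v_texCoord;

    void main() {
        vec4 sum = vec4(0.0);
        for (int x = 0; x < 4; x++) {
            for (int y = 0; y < 4; y++) {
                vec2 offset = (vec2(float(x), float(y)) - 1.5) * u_texelSize;
                sum += texture2D(u_texture, v_texCoord + offset);
            }
        }
        gl_FragColor = sum / 16.0;
    }

For extreme ratios, repeated half-size passes (or generating mipmaps and letting the sampler pick a level) usually give better quality than one very wide filter.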

How can I pass multiple textures to a single shader?

安稳与你 submitted on 2019-11-27 10:54:33
Question: I am using freeglut, GLEW and DevIL to render a textured teapot using a vertex and fragment shader. This is all working fine in OpenGL 2.0 and GLSL 1.2 on Ubuntu 14.04. Now I want to apply a bump map to the teapot. My lecturer evidently doesn't brew his own tea, and so doesn't know they're supposed to be smooth. Anyway, I found a nice-looking tutorial on old-school bump mapping that includes a fragment shader that begins:

    uniform sampler2D DecalTex;   // The texture
    uniform sampler2D BumpTex;    // …
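The key to passing multiple textures is that a sampler uniform holds a texture unit index, not a texture handle: bind one texture per unit, then set each sampler to its unit number. A sketch of the binding sequence, written in WebGL-flavoured JavaScript for brevity (the desktop calls glActiveTexture, glBindTexture and glUniform1i map one-for-one; the texture variable names are illustrative):

    // Bind one texture per texture unit, then point each sampler uniform
    // at the unit index (0, 1, ...), not at the texture object itself.
    gl.useProgram(program);

    gl.activeTexture(gl.TEXTURE0);
    gl.bindTexture(gl.TEXTURE_2D, decalTexture);
    gl.uniform1i(gl.getUniformLocation(program, 'DecalTex'), 0);

    gl.activeTexture(gl.TEXTURE1);
    gl.bindTexture(gl.TEXTURE_2D, bumpTexture);
    gl.uniform1i(gl.getUniformLocation(program, 'BumpTex'), 1);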

What's the best way to draw a fullscreen quad in OpenGL 3.2?

喜你入骨 submitted on 2019-11-27 10:54:10
Question: I'm doing ray casting in the fragment shader. I can think of a couple of ways to draw a fullscreen quad for this purpose: either draw a quad in clip space with the projection matrix set to the identity matrix, or use the geometry shader to turn a point into a triangle strip. The former uses immediate mode, which is deprecated in OpenGL 3.2. The latter I use out of novelty, but it still uses immediate mode to draw a point.

Answer 1: You can send two triangles creating a quad, with their vertex attributes set…
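A core-profile alternative that avoids immediate mode and vertex buffers entirely is the fullscreen-triangle trick: bind an empty VAO, call glDrawArrays(GL_TRIANGLES, 0, 3), and synthesize the positions from gl_VertexID. A sketch of the vertex shader:

    #version 150
    // No vertex inputs: positions come from gl_VertexID. IDs 0, 1, 2 map
    // to (-1,-1), (3,-1), (-1,3) - one oversized triangle that covers the
    // whole screen once clipped to the viewport.
    void main() {
        vec2 p = vec2((gl_VertexID << 1) & 2, gl_VertexID & 2);
        gl_Position = vec4(p * 2.0 - 1.0, 0.0, 1.0);
    }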

How can I improve the performance of my custom OpenGL ES 2.0 depth texture generation?

安稳与你 submitted on 2019-11-27 10:26:07
I have an open-source iOS application that uses custom OpenGL ES 2.0 shaders to display 3-D representations of molecular structures. It does this by using procedurally generated sphere and cylinder impostors drawn over rectangles, instead of building these same shapes from many vertices. The downside to this approach is that the depth value for each fragment of these impostor objects needs to be calculated in a fragment shader, to be used when objects overlap. Unfortunately, OpenGL ES 2.0 does not let you write to gl_FragDepth, so I've needed to output these values to a custom depth texture…
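Because ES 2.0 fragment shaders cannot write gl_FragDepth, a widely used workaround is to pack the computed depth into the RGBA8 color channels of the offscreen pass and unpack it when sampling later. A sketch of the classic pack/unpack pair (the function names are mine):

    // Pack a depth value in [0,1) into 8-bit RGBA channels - a standard
    // trick for ES 2.0, which lacks gl_FragDepth and often float targets.
    vec4 packDepth(float depth) {
        vec4 enc = fract(vec4(1.0, 255.0, 65025.0, 16581375.0) * depth);
        enc -= enc.yzww * vec4(1.0 / 255.0, 1.0 / 255.0, 1.0 / 255.0, 0.0);
        return enc;
    }

    // Recover the depth value when sampling the depth texture later.
    float unpackDepth(vec4 enc) {
        return dot(enc, vec4(1.0, 1.0 / 255.0, 1.0 / 65025.0, 1.0 / 16581375.0));
    }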

How to design a simple GLSL wrapper for shader use

喜你入骨 submitted on 2019-11-27 10:23:58
Question: UPDATE: Because I needed something right away, I've created a simple shader wrapper that does the sort of thing I need. You can find it here: ShaderManager on GitHub. Note that it's designed for Objective-C / iOS, so it may not be useful to everyone. If you have any suggestions for design improvements, please let me know! Original problem: I'm new to using GLSL shaders. I'm familiar enough with the GLSL language and the OpenGL interface, but I'm having trouble designing a simple API through…
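For a sense of the shape such a wrapper usually takes, the core jobs are compiling, linking, error reporting, and caching uniform locations behind names. A sketch in JavaScript rather than the asker's Objective-C, purely to illustrate the design (the class and method names are illustrative):

    // Minimal wrapper sketch: compile and link once, report errors eagerly,
    // and cache uniform locations so callers never touch raw GL handles.
    class ShaderProgram {
      constructor(gl, vertexSrc, fragmentSrc) {
        this.gl = gl;
        this.handle = gl.createProgram();
        gl.attachShader(this.handle, this.compile(gl.VERTEX_SHADER, vertexSrc));
        gl.attachShader(this.handle, this.compile(gl.FRAGMENT_SHADER, fragmentSrc));
        gl.linkProgram(this.handle);
        if (!gl.getProgramParameter(this.handle, gl.LINK_STATUS)) {
          throw new Error(gl.getProgramInfoLog(this.handle));
        }
        this.uniforms = {};   // name -> location cache
      }
      compile(type, src) {
        const gl = this.gl;
        const shader = gl.createShader(type);
        gl.shaderSource(shader, src);
        gl.compileShader(shader);
        if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
          throw new Error(gl.getShaderInfoLog(shader));
        }
        return shader;
      }
      use() { this.gl.useProgram(this.handle); }
      uniformLocation(name) {
        if (!(name in this.uniforms)) {
          this.uniforms[name] = this.gl.getUniformLocation(this.handle, name);
        }
        return this.uniforms[name];
      }
    }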