glsl

GLSL Problem: Multiple shaders in one program

I must have misunderstood something about shaders: since you can attach multiple shaders to one program, I thought you could simply attach more than one fragment shader, for example a crate texture rendered with color modulation and refraction. But apparently this is not the case, as you can have only one main function per program. How can I work around the main-function limit and allow any dynamic combination of multiple fragment shaders that live in the same program and are called one after another?

You can have a set of entry points pre-defined. Suppose you have a limited number…
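A minimal sketch of that "pre-defined entry points" idea: keep the single main() and have it call a fixed set of effect functions, toggled by uniforms. The function and uniform names below are invented for illustration, not taken from the original answer.

precision mediump float;

uniform sampler2D u_crateTexture;
uniform vec4 u_modulationColor;
uniform bool u_useModulation;
uniform bool u_useRefraction;
varying vec2 v_texCoord;

vec4 applyModulation(vec4 color) {
    return color * u_modulationColor;
}

vec4 applyRefraction(vec4 color) {
    // Placeholder "refraction": blend with a slightly offset texture lookup.
    return mix(color, texture2D(u_crateTexture, v_texCoord + vec2(0.01, 0.01)), 0.5);
}

void main() {
    vec4 color = texture2D(u_crateTexture, v_texCoord);
    if (u_useModulation) { color = applyModulation(color); }
    if (u_useRefraction) { color = applyRefraction(color); }
    gl_FragColor = color;
}

Each "shader" becomes a plain function, and the host program chooses a combination by setting the boolean uniforms instead of relinking the program.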

How does a GLSL sampler determine the minification, and thus the mipmap level, of a texture?

I am working with OpenGL ES (via WebGL), but I think this question is applicable to the full OpenGL profile as well. Suppose I create an OpenGL texture with full mipmap levels and set its TEXTURE_MIN_FILTER to NEAREST_MIPMAP_NEAREST. Also suppose that I have a fragment shader that samples this texture. The mipmap level is chosen based on the degree of minification of the texture, but how is the degree of minification determined? In my case, I am synthesizing (inside the shader) the texture…
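For intuition, the minification scale factor can be approximated inside the shader itself using screen-space derivatives of the texture coordinates. This is only a sketch: it assumes the OES_standard_derivatives extension (needed on ES 2.0 / WebGL 1) and made-up uniform names, and real drivers are free to approximate the level differently.

#extension GL_OES_standard_derivatives : enable
precision mediump float;

uniform sampler2D u_tex;
uniform vec2 u_texSize;      // texture dimensions in texels
varying vec2 v_texCoord;

float approximateLod(vec2 uv) {
    // How many texels the texture coordinate moves per screen pixel, in x and y.
    vec2 dx = dFdx(uv * u_texSize);
    vec2 dy = dFdy(uv * u_texSize);
    float rho = max(length(dx), length(dy));
    return log2(rho);        // roughly the level NEAREST_MIPMAP_NEAREST would round to
}

void main() {
    float lod = approximateLod(v_texCoord);
    gl_FragColor = vec4(vec3(lod * 0.25), 1.0);   // visualise the estimated mip level
}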

Render water-paint in iOS

I have been working with OpenGL in iOS, setting colors with glColor4f(r,g,b,a) and then drawing my own color on a white UIImageView. I basically have a brush, which is moved around by the user's touch, and it paints the color onto the canvas. But this color needs to look like water paint (a smudged color). Does anyone know how to get a watercolor effect like this app does, and how the background UIImageView has a texture on it? https://itunes.apple.com/us/app/hello-watercolor/id539414526?mt=8 Or check out the water paint in this: http://www.fiftythree.com/paper I created a bounty on…
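A purely speculative starting point (not how either linked app actually works), assuming the brush were drawn as an ES 2.0 point sprite rather than through the fixed-function glColor4f path above: give each stamp a soft radial falloff and very low opacity, so overlapping stamps accumulate into a smudged, watery wash. The uniform names are made up.

precision mediump float;

uniform vec4 u_brushColor;   // set per stroke instead of glColor4f
uniform float u_opacity;     // keep small, e.g. 0.05, and rely on alpha blending

void main() {
    vec2 p = gl_PointCoord * 2.0 - 1.0;          // -1..1 across the point sprite
    float d = length(p);
    float soft = 1.0 - smoothstep(0.3, 1.0, d);  // soft circular edge
    gl_FragColor = vec4(u_brushColor.rgb, u_brushColor.a * u_opacity * soft);
}

Standard alpha blending (source alpha, one-minus-source-alpha) would need to be enabled on the host side for the stamps to build up.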

What versions of GLSL can I use in OpenGL ES 2.0?

I can't seem to find a clear answer on this, despite hours of googling. Can someone just tell me what's going on? I get errors saying things like "version 140 is not supported." Is this my device (Kindle Fire) or GL ES 2.0? Do I need to add libraries or anything?

You actually don't have to add any libraries; 140 is far too new for the Kindle Fire. Either remove the version specification or decrement it until the shader compiles. You may need to fix some other errors in the shader as well, since the individual versions of the language do have some differences. You can also query GL_SHADING_LANGUAGE_VERSION…
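OpenGL ES 2.0 only accepts GLSL ES 1.00, so a shader for the Kindle Fire either omits the #version directive or states it explicitly as 100 (not 140, which is a desktop GLSL version). A minimal fragment shader in that dialect, with illustrative names:

#version 100
precision mediump float;    // ES requires a default float precision in fragment shaders

uniform sampler2D u_tex;
varying vec2 v_texCoord;

void main() {
    // ES 1.00 uses texture2D() and gl_FragColor rather than the newer texture()/out syntax.
    gl_FragColor = texture2D(u_tex, v_texCoord);
}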

Is it possible for a vertex attribute to be an array in GLSL-ES 2.0?

In GLSL ES it's possible to have arrays. For example, the GLSL ES Specification gives the following example of a uniform variable that's an array: uniform vec4 lightPosition[4]; Is it possible to have vertex attributes that are arrays? In other words, is the following legal according to the spec? attribute vec4 foo[3]; // three vec4s per vertex Is the answer (either yes or no) explicitly mentioned anywhere in the GLSL ES Specification? (I can't find it, but I haven't read every line of the…
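The GLSL ES 1.00 specification disallows attribute variables declared as arrays or structures, so the usual workaround is one attribute per element, gathered into a local array inside the shader. A sketch with invented names:

attribute vec4 a_position;
attribute vec4 a_foo0;       // stand-ins for the illegal "attribute vec4 foo[3]"
attribute vec4 a_foo1;
attribute vec4 a_foo2;

void main() {
    vec4 foo[3];
    foo[0] = a_foo0;
    foo[1] = a_foo1;
    foo[2] = a_foo2;
    // ... use foo[] exactly as if the array attribute had been allowed ...
    gl_Position = a_position;
}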

What does (gl_FragCoord.z / gl_FragCoord.w) represent?

I want the actual world-space distance, and I get the feeling from experimentation that (gl_FragCoord.z / gl_FragCoord.w) is the depth in world space? But I'm not too sure.

EDIT: I've just found where I had originally located this snippet of code. Apparently it is the actual depth from the camera?

This was asked (by the same person) and answered elsewhere. I'm paraphrasing and embellishing the answer here: As stated in section 15.2.2 of the OpenGL 4.3 core profile specification (PDF), gl_FragCoord.w is 1 / clip.w, where clip.w is the W component of the clip-space position (i.e. what you wrote to…
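The gist of that answer: for a standard perspective projection, clip.w is -eye.z, so gl_FragCoord.z / gl_FragCoord.w is window-space depth times the eye-space distance along the view axis. Since window-space depth approaches 1 over most of the range, the ratio approximates the camera-space depth, not a world-space quantity. A more explicit way to get the distance from the camera is to pass the eye-space position down yourself; this sketch uses the legacy built-in matrices for brevity, with invented names.

// --- vertex shader ---
varying vec3 v_eyePos;
void main() {
    vec4 eye = gl_ModelViewMatrix * gl_Vertex;
    v_eyePos = eye.xyz;
    gl_Position = gl_ProjectionMatrix * eye;
}

// --- fragment shader ---
varying vec3 v_eyePos;
void main() {
    float distanceFromCamera = length(v_eyePos);               // true eye-space distance
    gl_FragColor = vec4(vec3(distanceFromCamera * 0.01), 1.0); // arbitrary scale for display
}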

Convention of faces in OpenGL cubemapping

What is the convention OpenGL follows for cubemaps? I followed this convention (found on a website) and used the corresponding GLenum to specify the 6 faces, GL_TEXTURE_CUBE_MAP_POSITIVE_X_EXT and so on:

         ________
        |        |
        | pos y  |
        |        |
 _______|________|________ ________
|       |        |        |        |
| neg x | pos z  | pos x  | neg z  |
|       |        |        |        |
|_______|________|________|________|
        |        |
        | neg y  |
        |________|

But I always get the wrong Y, so I have to swap the Positive Y and Negative Y faces. Why?

Ah, yes, this is one of the oddest things about Cube…
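The answer being quoted concerns OpenGL inheriting the RenderMan cube-map convention, which does not match the usual right-handed GL world axes and often shows up as the Y faces looking swapped. One shader-side experiment (whether it is appropriate depends on how the face images were generated, since it also mirrors the lookup within the other faces) is to flip the Y component of the lookup direction instead of swapping the face images at upload time. Names below are illustrative.

precision mediump float;

uniform samplerCube u_envMap;
varying vec3 v_direction;

void main() {
    vec3 dir = vec3(v_direction.x, -v_direction.y, v_direction.z);
    gl_FragColor = textureCube(u_envMap, dir);
}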

GPGPU programming with OpenGL ES 2.0

I am trying to do some image processing on the GPU, e.g. median, blur, brightness, etc. The general idea is to do something like this framework from GPU Gems 1. I am able to write the GLSL fragment shader for processing the pixels, as I've been trying out different things in an effect-designer app. What I am not sure about, however, is how to do the other part of the task: I'd like to work on the image in image coordinates and then output the result to a texture. I am aware of the gl_FragCoord variable. As far as I understand it, it goes like this: I need to set up a view (an orthographic…
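The pixel-for-pixel part usually reduces to rendering a full-screen quad into a framebuffer object and letting gl_FragCoord index the source image: gl_FragCoord.xy holds window-space pixel centres, so dividing by the output resolution gives the matching texel when the quad exactly covers the viewport. A sketch of just the fragment-shader side, with invented uniform names:

precision mediump float;

uniform sampler2D u_sourceImage;
uniform vec2 u_resolution;     // width/height of the FBO colour attachment, in pixels

void main() {
    vec2 uv = gl_FragCoord.xy / u_resolution;   // 1:1 texel-to-pixel mapping
    vec4 pixel = texture2D(u_sourceImage, uv);
    // Example per-pixel operation: convert to luminance (a simple "brightness" pass).
    float luma = dot(pixel.rgb, vec3(0.299, 0.587, 0.114));
    gl_FragColor = vec4(vec3(luma), pixel.a);
}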

Applying part of a texture (sprite sheet / texture map) to a point sprite in iOS OpenGL ES 2.0

It seems this should be easy, but I'm having a lot of difficulty using part of a texture with a point sprite. I have googled around extensively and turned up various answers, but none of them deal with the specific issue I'm having. What I've learned so far:

- Basics of point sprite drawing
- How to deal with point sprites rendering as solid squares
- How to alter the orientation of a point sprite
- How to use multiple textures with a point sprite (getting closer here)
- That point sprites + sprite sheets have been done before, but only in OpenGL ES 2.0 (not 1.0)

Here is a diagram of what I'm…
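The usual ES 2.0 trick is that gl_PointCoord runs from 0 to 1 across the whole point sprite, so it can be remapped into one cell of the sprite sheet. A sketch with invented uniform names; in practice the cell offset would come from a vertex attribute via a varying, since each sprite is a single vertex.

precision mediump float;

uniform sampler2D u_spriteSheet;
uniform vec2 u_cellOffset;   // origin corner of the chosen cell, in 0..1 texture coords
uniform vec2 u_cellSize;     // size of one cell, e.g. vec2(0.25) for a 4x4 sheet

void main() {
    vec2 uv = u_cellOffset + gl_PointCoord * u_cellSize;
    gl_FragColor = texture2D(u_spriteSheet, uv);
}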

GLSL: How to get pixel x,y,z world position?

I want to adjust the colors depending on their xyz position in the world. I tried this in my fragment shader:

varying vec4 verpos;
void main() {
    vec4 c;
    c.x = verpos.x;
    c.y = verpos.y;
    c.z = verpos.z;
    c.w = 1.0;
    gl_FragColor = c;
}

But it seems that the colors change depending on my camera angle/position. How do I make the coordinates independent of my camera position/angle? Here's my vertex shader:

varying vec4 verpos;
void main() {
    gl_Position = ftransform();
    verpos = gl_ModelViewMatrix * gl_Vertex;
}

Edit2: changed the title, so I want world coords, not screen coords! Edit3: added my full code…
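The varying above is multiplied by gl_ModelViewMatrix, which includes the camera (view) transform, so it lands in eye space; that is why the colors follow the camera. Legacy OpenGL never exposes the model matrix on its own, so the application has to supply it as a uniform. A sketch, where u_modelMatrix is an assumed name set by the application to the object's model transform only:

// --- vertex shader ---
uniform mat4 u_modelMatrix;
varying vec3 v_worldPos;
void main() {
    gl_Position = ftransform();
    v_worldPos = (u_modelMatrix * gl_Vertex).xyz;
}

// --- fragment shader ---
varying vec3 v_worldPos;
void main() {
    gl_FragColor = vec4(v_worldPos, 1.0);   // now depends only on world position
}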