shader

What's the origin of this GLSL rand() one-liner?

≯℡__Kan透↙ submitted on 2019-11-29 19:04:24
I've seen this pseudo-random number generator for use in shaders referred to here and there around the web: float rand(vec2 co){ return fract(sin(dot(co.xy, vec2(12.9898, 78.233))) * 43758.5453); } It's variously called "canonical" or "a one-liner I found on the web somewhere". What's the origin of this function? Are the constant values as arbitrary as they seem, or is there some art to their selection? Is there any discussion of the merits of this function? EDIT: The oldest reference to this function that I've come across is this archive from Feb '08, the original page now being gone from…
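For reference, here is the same one-liner transcribed with the constants pulled out and commented. The variable names and the highp qualifier are my additions, not part of the original function; this is a sketch, not a claim about how the constants were chosen:

precision highp float; // sin() precision varies across GPUs and visibly changes the output

float rand( vec2 co ) {
    // Project the coordinate onto a fixed direction, take the sine,
    // scale by a large constant, and keep only the fractional part.
    float a  = 12.9898;
    float b  = 78.233;
    float c  = 43758.5453;
    float dt = dot( co, vec2( a, b ) );
    return fract( sin( dt ) * c );
}

The constants appear to be empirical; one frequently cited criticism is that on GPUs with low-precision sin() (common on mobile hardware) the output degrades into visible patterns.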

Using QPainter over OpenGL in QGLWidget when using shaders

让人想犯罪 __ submitted on 2019-11-29 18:22:53
Many of you Qt (4.6 specifically) users will be familiar with the Overpainting example supplied in the OpenGL tutorials. I'm trying to do something very similar, but using shaders for the pure OpenGL data instead of the old fixed-function pipeline.

// Set background and state.
makeCurrent();
qglClearColor( bgColour_ );
glEnable( GL_DEPTH_TEST );
glPolygonMode( GL_FRONT_AND_BACK, GL_LINE );
if ( smoothLines_ ) {
    glEnable( GL_BLEND );
    glBlendFunc( GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA );
…

Only first Compute Shader array element appears updated

你。 submitted on 2019-11-29 17:15:03
I'm trying to send an array of integers to a compute shader, set an arbitrary value on each element, and then read the array back on the CPU/host. The problem is that only the first element of my array gets updated. The array is initialized with all elements = 5 on the CPU, then I try to set all the values to 2 in the compute shader. C++ code:

std::vector<int> numOfElements; // num of elements for each voxel

// Set the reset grid program as current program
glUseProgram( this->resetGridProgHandle );

// Bind and fill the buffer
glBindBuffer( GL_SHADER_STORAGE_BUFFER, this->counterBufferHandle );
…
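The most common cause of this symptom is that only one invocation ever runs, or that every invocation writes to index 0. A minimal sketch of a compute shader that writes every element, assuming the buffer is bound at binding point 0 (the names are illustrative, not taken from the question):

#version 430
layout( local_size_x = 64 ) in; // 64 invocations per work group

layout( std430, binding = 0 ) buffer CounterBuffer {
    int counters[];
};

void main() {
    uint i = gl_GlobalInvocationID.x;        // unique index per invocation
    if ( i < uint( counters.length() ) )
        counters[i] = 2;                     // write this invocation's element
}

On the host side the dispatch has to cover the whole array, e.g. glDispatchCompute( ( numElements + 63 ) / 64, 1, 1 ), followed by glMemoryBarrier( GL_SHADER_STORAGE_BARRIER_BIT ) before reading back; dispatching ( 1, 1, 1 ) with local_size_x = 1 would update exactly one element.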

How does alpha blending work, mathematically, pixel-by-pixel?

回眸只為那壹抹淺笑 submitted on 2019-11-29 16:54:59
Seems like it's not as simple as RGB1*A1 + RGB2*A2: how are values clipped? Weighted? Etc. And is this a context-dependent question? Are there different algorithms that produce different results, or one standard implementation? I'm particularly interested in OpenGL-specific answers, but context from other environments is useful too. I don't know about OpenGL, but one pixel of opacity A is usually drawn onto another pixel like so:

result.r = background.r * (1 - A) + foreground.r * A
result.g = background.g * (1 - A) + foreground.g * A
result.b = background.b * (1 - A) + foreground.b * A

Repeat…
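In GLSL the same per-channel math collapses into a single mix(); a sketch, assuming non-premultiplied colors in [0, 1], in which case the result also stays in [0, 1] and nothing needs to be clipped:

vec3 blendOver( vec3 background, vec3 foreground, float alpha ) {
    // Per channel: background * (1.0 - alpha) + foreground * alpha
    return mix( background, foreground, alpha );
}

In fixed-function OpenGL blending the equivalent configuration is glBlendFunc( GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA ); with premultiplied alpha the source factor changes to GL_ONE, which produces different results for the same inputs, so there genuinely is more than one convention in use.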

Compute normals from displacement map in three.js r.58?

人走茶凉 submitted on 2019-11-29 15:39:36
I'm using the normal shader in three.js r.58, which I understand requires a normal map. However, I'm using a dynamic displacement map, so a pre-computed normal map won't work in this situation. All the examples I've found of lit displacement maps use either flat shading or pre-computed normal maps. Is it possible to calculate the normals dynamically, based on the displaced vertices, instead? Edit: I've posted a demo of a sphere with a displacement map showing flat normals. Here's a link to the…
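One common approach is to re-derive the normal in the shader by sampling the displacement map at neighboring texels and taking finite differences of the height. A sketch of the idea, assuming a tangent frame is available (for a sphere it can be derived analytically); the uniform and function names are mine, not three.js r.58 API:

uniform sampler2D displacementMap;
uniform float displacementScale;
uniform vec2 texelSize; // 1.0 / texture resolution, supplied from JavaScript

vec3 displacedNormal( vec2 uv, vec3 normal, vec3 tangent, vec3 bitangent ) {
    // Height differences between neighboring texels approximate the slope.
    float hL = texture2D( displacementMap, uv - vec2( texelSize.x, 0.0 ) ).r;
    float hR = texture2D( displacementMap, uv + vec2( texelSize.x, 0.0 ) ).r;
    float hD = texture2D( displacementMap, uv - vec2( 0.0, texelSize.y ) ).r;
    float hU = texture2D( displacementMap, uv + vec2( 0.0, texelSize.y ) ).r;
    // Tilt the base normal along the tangent frame by the slope in each direction.
    vec3 n = normal
           - tangent   * ( hR - hL ) * displacementScale
           - bitangent * ( hU - hD ) * displacementScale;
    return normalize( n );
}

Because the lookup happens at render time, it tracks a displacement map that changes every frame, which a baked normal map cannot.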

Texture lookup in vertex shader behaves differently on iPad device vs iPad simulator - OpenGL ES 2.0

♀尐吖头ヾ submitted on 2019-11-29 14:08:20
I have a vertex shader in which I do a texture lookup to determine gl_Position. I am using this as part of a GPU particle simulation system, where particle positions are stored in a texture. It seems that: vec4 textureValue = texture2D(dataTexture, vec2(1.0, 1.0)); behaves differently on the simulator than on the iPad device. On the simulator the texture lookup succeeds (the value at that location is 0.5, 0.5) and my particle appears there. However, on the iPad itself the texture lookup is…
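Two device/simulator differences are worth checking here, offered as a hedged sketch rather than a diagnosis: the vertex stage has no derivatives, so GLSL ES 1.00 provides texture2DLod for explicit-LOD lookups there, and vec2(1.0, 1.0) samples the very edge of the texture, where rounding to a texel can differ between the simulator's software renderer and the PowerVR hardware. Sampling texel centers with an explicit LOD avoids both issues (the index attribute and width uniform are my additions, not from the question):

attribute float particleIndex;   // hypothetical per-particle index
uniform sampler2D dataTexture;
uniform float dataTextureWidth;

void main() {
    // ( i + 0.5 ) / width addresses the CENTER of texel i, not a texel boundary.
    vec2 uv = vec2( ( particleIndex + 0.5 ) / dataTextureWidth, 0.5 );
    // Explicit LOD 0.0: the vertex shader cannot compute one implicitly.
    vec4 textureValue = texture2DLod( dataTexture, uv, 0.0 );
    gl_Position = textureValue; // then apply the usual projection transform
}

The texture should also use GL_NEAREST filtering and GL_CLAMP_TO_EDGE wrapping, since vertex textures on ES 2.0 hardware are typically sampled without mipmaps.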

Compute Shader (DX11)

笑着哭i submitted on 2019-11-29 10:45:00
After VS and PS, DX11 adds the Compute Shader, another stage that supports direct GPU computation. It shows that physics acceleration is no longer exclusive to NVIDIA cards, and that general-purpose GPU computing is steadily maturing. Noting the material below, to translate next time. // from the DX11 SDK and gamedev: Compute shaders are basically the same as any other shader, the pixel shader for example. Just like the pixel shader is invoked for each pixel, the compute shader is invoked for each "thread". A thread is a generic and independent execution entity that doesn't really require any sort of geometry. All you have to do now is dispatch a number of threads, and your shader will be executed for each of these threads. In DirectX, these threads are organized into "groups". You have X…
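The same thread/group structure exists in GLSL compute shaders, which may make the mapping easier to see; a sketch with the corresponding HLSL semantics noted in comments:

#version 430
// HLSL's [numthreads(8, 8, 1)] corresponds to:
layout( local_size_x = 8, local_size_y = 8, local_size_z = 1 ) in;

void main() {
    uvec3 group  = gl_WorkGroupID;        // HLSL: SV_GroupID
    uvec3 local  = gl_LocalInvocationID;  // HLSL: SV_GroupThreadID
    uvec3 global = gl_GlobalInvocationID; // HLSL: SV_DispatchThreadID
    // global == group * gl_WorkGroupSize + local: dispatching G groups of
    // 8 x 8 threads runs this body G * 64 times, each with a unique global id.
}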

three.js webgl custom shader sharing texture with new offset

一世执手 submitted on 2019-11-29 10:34:58
I am splitting a 1024 x 1024 texture across 32 x 32 tiles. I'm not sure if it's possible to share the texture with an offset, or whether I would need to create a new texture for each tile with its offset. To create the offset I am using a uniform value = 32 * i, updating the uniform in each iteration of the loop that creates the tiles, but all the tiles seem to get the same offset. Basically I want the image to appear as one image, not broken up into little tiles, but the current output shows the same x,y offset on all 32 tiles. I'm using the vertex shader with three.js r71... Would I need to create a new…
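On the shader side, one shared texture plus a per-tile UV offset uniform is enough; a sketch for a three.js ShaderMaterial vertex shader (the uniform names are illustrative, while uv, position, projectionMatrix, and modelViewMatrix are the built-ins three.js injects):

uniform vec2 tileOffset;  // this tile's offset in UV space, e.g. vec2( col, row ) / 32.0
uniform float tileScale;  // fraction of the texture per tile, e.g. 1.0 / 32.0
varying vec2 vUv;

void main() {
    // Map the tile's local 0..1 UVs into its window of the shared texture.
    vUv = uv * tileScale + tileOffset;
    gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}

One likely cause of every tile showing the same offset: if all tiles share a single material instance, they also share its uniforms object, so the last value written in the loop wins. Each tile needs its own material (or at least its own uniforms object) for a per-tile offset to stick.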

Calculate clipspace.w from clipspace.xyz and (inv) projection matrix

喜你入骨 submitted on 2019-11-29 10:34:46
I'm using a logarithmic depth algorithm, which results in someFunc(clipspace.z) being written to the depth buffer and no implicit perspective divide. I'm doing RTT / postprocessing, so later on, in a fragment shader, I want to recompute eyespace.xyz given ndc.xy (from the fragment coordinates) and clipspace.z (from someFuncInv() applied to the value stored in the depth buffer). Note that I do not have clipspace.w, and my stored value is not clipspace.z / clipspace.w (as it would be when using fixed…
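For a standard (non-oblique) perspective projection, clipspace.w can be recovered from clipspace.z and two entries of the projection matrix, because clip.z = P[2][2] * eye.z + P[3][2] and clip.w = P[2][3] * eye.z (with eye.w = 1 and GLSL's column-major indexing). A sketch under those assumptions; the function name is mine:

vec3 eyeFromDepth( vec2 ndcXY, float clipZ, mat4 proj, mat4 invProj ) {
    // Solve the z row of the projection for eye.z, then the w row gives clip.w.
    float eyeZ  = ( clipZ - proj[3][2] ) / proj[2][2];
    float clipW = proj[2][3] * eyeZ; // equals -eye.z for the usual GL projection
    // Rebuild the full clip-space position (ndc.xy * clip.w undoes the hardware's
    // perspective divide on x and y) and unproject it.
    vec4 clip = vec4( ndcXY * clipW, clipZ, clipW );
    vec4 eye  = invProj * clip;
    return eye.xyz / eye.w;
}

An oblique or otherwise nonstandard projection puts nonzero terms in the rows assumed zero here, in which case this two-entry shortcut no longer holds and the full inverse matrix has to carry the reconstruction.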

Images and mask in OpenGL ES 2.0

不想你离开。 submitted on 2019-11-29 09:30:03
I'm learning OpenGL ES 2.0 and I'd like to create an app to better understand how it works. The app has a set of filters that the user can apply to images (I know, nothing new :P). One of these filters takes two images and a mask, and mixes the two images, showing them through the mask (here is an image to better explain what I want to obtain). At the moment I'm really confused and I don't know where to start to create this effect. I can't understand whether I have to work with multiple textures and multiple framebuffers, or whether I can just work with a single shader. Do you have any hint to help me…
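This particular effect fits in a single fragment shader with three texture units bound at once, and no extra framebuffers; a sketch, with illustrative sampler names:

precision mediump float;

varying vec2 vTexCoord;
uniform sampler2D uImageA;
uniform sampler2D uImageB;
uniform sampler2D uMask;

void main() {
    vec4 a  = texture2D( uImageA, vTexCoord );
    vec4 b  = texture2D( uImageB, vTexCoord );
    // One channel of the mask drives the blend: where the mask is black
    // the output is image A, where it is white the output is image B.
    float m = texture2D( uMask, vTexCoord ).r;
    gl_FragColor = mix( a, b, m );
}

Multiple framebuffers only become necessary when an effect needs intermediate passes (for example, blurring one image before masking); for a straight masked mix, one pass is enough.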