glsl

GLSL memoryBarrierShared() usefulness?

Submitted by Anonymous (unverified) on 2019-12-03 01:38:01
Question: I am wondering about the usefulness of memoryBarrierShared. Indeed, when I look at the documentation for the barrier function, I read: For any given static instance of barrier in a compute shader, all invocations within a single work group must enter it before any are allowed to continue beyond it. This ensures that values written by one invocation prior to a given static instance of barrier can be safely read by other invocations after their call to the same static instance of barrier. Because invocations may execute in undefined order
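
A minimal compute-shader sketch of the pattern the question is about; the buffer layout and the names Data and partial are made up for illustration. memoryBarrierShared() makes the shared-memory writes visible to the other invocations, while barrier() ensures every invocation in the work group has reached that point before any of them reads a slot written by a neighbour.

#version 430
layout(local_size_x = 64) in;

layout(std430, binding = 0) buffer Data { float values[]; };

shared float partial[64];

void main() {
    uint i = gl_LocalInvocationID.x;
    // each invocation writes its own slot of shared memory
    partial[i] = values[gl_GlobalInvocationID.x];

    // make the shared writes visible to the rest of the work group ...
    memoryBarrierShared();
    // ... and wait until every invocation has reached this point
    barrier();

    // now it is safe to read a slot written by another invocation
    values[gl_GlobalInvocationID.x] = partial[(i + 1u) % 64u];
}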

0:1(10): error: GLSL 3.30 is not supported (Ubuntu 18.04, C++)

Submitted by Anonymous (unverified) on 2019-12-03 01:36:02
Question: I am trying to draw a triangle in a window with OpenGL and the GLFW library. Here is my complete code:

#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include <iostream>
using namespace std;

static unsigned int compileShader ( unsigned int type, const string& source ){
    unsigned int id = glCreateShader ( type );
    const char* src = source.c_str();
    glShaderSource ( id, 1, &src, nullptr );
    glCompileShader ( id );
    int result = 0;
    glGetShaderiv ( id, GL_COMPILE_STATUS, &result );
    if ( result == GL_FALSE ){
        int length = 0;
        glGetShaderiv ( id, GL_INFO_LOG
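
This error usually means the shader declares #version 330 while the context only exposes GLSL 1.30, which is what Mesa typically reports for a default (non-core-profile) context on Ubuntu; requesting a 3.3 core context through the windowing library's hints is the usual fix. Alternatively, the shaders can target the version the context actually reports. A sketch of trivial triangle shaders written against #version 130 (identifier names are illustrative, not from the question):

// vertex shader (GLSL 1.30, accepted by a GL 3.0 context)
#version 130
in vec4 position;
void main() {
    gl_Position = position;
}

// fragment shader
#version 130
out vec4 color;
void main() {
    color = vec4(1.0, 0.5, 0.2, 1.0);
}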

Efficient Bicubic filtering code in GLSL?

Submitted by 风流意气都作罢 on 2019-12-03 01:13:54
Question: I'm wondering if anyone has complete, working, and efficient code to do bicubic texture filtering in GLSL. There is this: http://www.codeproject.com/Articles/236394/Bi-Cubic-and-Bi-Linear-Interpolation-with-GLSL or https://github.com/visionworkbench/visionworkbench/blob/master/src/vw/GPU/Shaders/Interp/interpolation-bicubic.glsl but both do 16 texture reads where only 4 are necessary: https://groups.google.com/forum/#!topic/comp.graphics.api.opengl/kqrujgJfTxo However, the method above uses a
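
For reference, one commonly circulated formulation of the 4-fetch approach the linked thread describes: compute cubic B-spline weights, fold each pair of weights into an adjusted coordinate, and let the hardware's bilinear filter do the rest. It assumes the sampler uses GL_LINEAR filtering and is not taken from either of the linked implementations.

// requires GLSL 1.30+ for texture() and textureSize()
// cubic B-spline weights for a fractional position v
vec4 cubic(float v) {
    vec4 n = vec4(1.0, 2.0, 3.0, 4.0) - v;
    vec4 s = n * n * n;
    float x = s.x;
    float y = s.y - 4.0 * s.x;
    float z = s.z - 4.0 * s.y + 6.0 * s.x;
    float w = 6.0 - x - y - z;
    return vec4(x, y, z, w) * (1.0 / 6.0);
}

vec4 textureBicubic(sampler2D tex, vec2 texCoords) {
    vec2 texSize = vec2(textureSize(tex, 0));
    vec2 invTexSize = 1.0 / texSize;

    texCoords = texCoords * texSize - 0.5;
    vec2 fxy = fract(texCoords);
    texCoords -= fxy;

    vec4 xcubic = cubic(fxy.x);
    vec4 ycubic = cubic(fxy.y);

    // fold the four weights per axis into two offsets, so four bilinear
    // fetches cover the full 4x4 bicubic footprint
    vec4 c = texCoords.xxyy + vec2(-0.5, 1.5).xyxy;
    vec4 s = vec4(xcubic.xz + xcubic.yw, ycubic.xz + ycubic.yw);
    vec4 offset = (c + vec4(xcubic.yw, ycubic.yw) / s) * invTexSize.xxyy;

    vec4 sample0 = texture(tex, offset.xz);
    vec4 sample1 = texture(tex, offset.yz);
    vec4 sample2 = texture(tex, offset.xw);
    vec4 sample3 = texture(tex, offset.yw);

    float sx = s.x / (s.x + s.y);
    float sy = s.z / (s.z + s.w);

    return mix(mix(sample3, sample2, sx), mix(sample1, sample0, sx), sy);
}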

From RGB to HSV in OpenGL GLSL

Submitted by Anonymous (unverified) on 2019-12-03 01:06:02
Question: I need to convert from the RGB color space to HSV. I searched on the internet and found two different implementations, but they give me different results:

A:

precision mediump float;
vec3 rgb2hsv(float r, float g, float b) {
    float h = 0.0;
    float s = 0.0;
    float v = 0.0;
    float min = min( min(r, g), b );
    float max = max( max(r, g), b );
    v = max; // v
    float delta = max - min;
    if( max != 0.0 )
        s = delta / max; // s
    else {
        // r = g = b = 0
        // s = 0, v is undefined
        s = 0.0;
        h = -1.0;
        return vec3(h, s, v);
    }
    if( r == max )
        h = ( g - b ) / delta; // between
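
For comparison, a widely circulated branchless RGB-to-HSV conversion (a third implementation, not one of the two from the question). Note that it returns hue in the [0,1] range rather than in degrees or in sixths of a turn, which is one common reason different implementations appear to disagree:

vec3 rgb2hsv(vec3 c) {
    vec4 K = vec4(0.0, -1.0 / 3.0, 2.0 / 3.0, -1.0);
    vec4 p = mix(vec4(c.bg, K.wz), vec4(c.gb, K.xy), step(c.b, c.g));
    vec4 q = mix(vec4(p.xyw, c.r), vec4(c.r, p.yzx), step(p.x, c.r));

    float d = q.x - min(q.w, q.y);
    float e = 1.0e-10;   // avoids division by zero for greys
    return vec3(abs(q.z + (q.w - q.y) / (6.0 * d + e)), d / (q.x + e), q.x);
}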

How to use bit operations in GLSL 1.3 with OpenGL 2.1

Submitted by Anonymous (unverified) on 2019-12-03 00:53:01
Question: I'm trying to write a shader that uses many bit operations. In fact they are supported since GLSL 1.30, but I'm only on OpenGL 2.1. Is there any way to use bit operations with my OpenGL version? Answer 1: All SM3-compatible (~OpenGL 2.1) hardware supports limited integer functionality. This is usually done by emulating integers with floats and does not include bit operations. For bit operations, you need either GLSL 1.30 or EXT_gpu_shader4. If the reason you only have OpenGL 2.1 is that your driver is somewhat outdated, you may be lucky to
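
A sketch of what the extension route looks like at the shader level, assuming the driver actually exposes GL_EXT_gpu_shader4 on top of GLSL 1.20; the uniform name and the packing scheme are made up for illustration:

#version 120
#extension GL_EXT_gpu_shader4 : require

uniform int packedValue;   // illustrative: two 4-bit fields packed into one int

void main() {
    int low  =  packedValue       & 0xF;   // mask off the low nibble
    int high = (packedValue >> 4) & 0xF;   // shift, then mask
    gl_FragColor = vec4(float(low) / 15.0, float(high) / 15.0, 0.0, 1.0);
}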

How does the fragment shader know what variable to use for the color of a pixel?

Submitted by 帅比萌擦擦* on 2019-12-03 00:09:04
Question: I see a lot of different fragment shaders:

#version 130
out vec4 flatColor;
void main(void) {
    flatColor = vec4(0.0,1.0,0.0,0.5);
}

And they all use a different variable for the "out color" (in this case flatColor). So how does OpenGL know what you're trying to do? I'm guessing this works because flatColor is the only variable defined as out, but you're allowed to add more out variables, aren't you? Or would that just crash? Actually, as a test, I just ran this:

#version 330
in vec2 TexCoord0
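
A short sketch of how the binding works: with a single out variable, the linker assigns it color number 0 automatically, whatever it is called; with several, each one is tied to a draw buffer either with a layout qualifier (GLSL 3.30+) or with glBindFragDataLocation before linking. Variable names here are illustrative:

#version 330

layout(location = 0) out vec4 flatColor;    // written to color attachment / draw buffer 0
layout(location = 1) out vec4 extraOutput;  // only meaningful if a second draw buffer is bound

void main() {
    flatColor   = vec4(0.0, 1.0, 0.0, 0.5);
    extraOutput = vec4(1.0);
}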

THREE.js blur the frame buffer

Submitted by 妖精的绣舞 on 2019-12-02 21:30:21
I need to blur the frame buffer, and I don't know how to get the frame buffer using THREE.js. I want to blur the whole frame buffer rather than blur each texture in the scene, so I guess I should read the frame buffer and then blur it, rather than doing this in shaders. Here's what I have tried. Called at init time:

var renderTarget = new THREE.WebGLRenderTarget(512, 512, {
    wrapS: THREE.RepeatWrapping,
    wrapT: THREE.RepeatWrapping,
    minFilter: THREE.NearestFilter,
    magFilter: THREE.NearestFilter,
    format: THREE.RGBAFormat,
    type: THREE.FloatType,
    stencilBuffer: false,
    depthBuffer: true
});
renderTarget
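
One common approach is to render the scene into the WebGLRenderTarget first and then draw a full-screen quad whose fragment shader blurs that texture. A sketch of such a pass's fragment shader (GLSL ES 1.00, the dialect three.js ShaderMaterials use); the names tDiffuse, texelSize and vUv follow typical three.js conventions but are assumptions here, not part of the question's code:

precision mediump float;

uniform sampler2D tDiffuse;   // texture of the render target the scene was drawn into
uniform vec2 texelSize;       // 1.0 / render-target resolution
varying vec2 vUv;

void main() {
    vec4 sum = vec4(0.0);
    // simple 3x3 box blur around the current texel
    for (int x = -1; x <= 1; x++) {
        for (int y = -1; y <= 1; y++) {
            sum += texture2D(tDiffuse, vUv + vec2(float(x), float(y)) * texelSize);
        }
    }
    gl_FragColor = sum / 9.0;
}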

Textures in OpenGL ES 2.0 for Android

Submitted by 流过昼夜 on 2019-12-02 21:15:30
I'm new to OpenGL and I'm teaching myself by making a 2D game for Android with ES 2.0. I am starting off by creating a "Sprite" class that creates a plane and renders a texture onto it. To practice, I have two Sprite objects that are drawn alternating in the same place. I got this much working fine with ES 1.0, but now that I've switched to 2.0 I am getting a black screen with no errors. I'm exhausted trying to figure out what I'm doing wrong, but I have a strong feeling it has to do with my shaders. I'm going to dump all the relevant code here and hopefully somebody can give me an
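
For reference, a minimal GLSL ES 1.00 shader pair for drawing a textured quad. A black screen with no GL errors frequently comes from shaders that fail to compile or link silently, so checking glGetShaderInfoLog and glGetProgramInfoLog is worth doing before comparing against a sketch like this (attribute and uniform names are illustrative):

// vertex shader
attribute vec4 aPosition;
attribute vec2 aTexCoord;
uniform mat4 uMVPMatrix;
varying vec2 vTexCoord;

void main() {
    vTexCoord = aTexCoord;
    gl_Position = uMVPMatrix * aPosition;
}

// fragment shader
precision mediump float;
uniform sampler2D uTexture;
varying vec2 vTexCoord;

void main() {
    gl_FragColor = texture2D(uTexture, vTexCoord);
}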

What do the default GLSL shaders look like for version 330?

Submitted by 一笑奈何 on 2019-12-02 20:15:53
What do the default vertex, fragment and geometry GLSL shaders look like for version #330? I'll be using #version 330 (GLSL Version 3.30, NVIDIA via Cg compiler), because that is what my graphics card supports. By default shaders, I mean shaders that do the same exact thing as the graphics card does when the shader program is turned off. I can't find a good example for #version 330; I've been googling all day. I'm not sure if "default shader" is called something else, like trivial or basic, and if that is why I can't find it. Any recommendations for a book covering version 330, or a link to an easy
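
There is no literal built-in default shader in a 3.30 core profile, but a pass-through pair that reproduces the most visible parts of the fixed pipeline (transforming positions by one combined matrix and interpolating per-vertex color) looks roughly like the sketch below. Matrix and attribute names are made up, and lighting, texturing and the rest of fixed function are not covered; a geometry shader has no fixed-function equivalent, so simply omitting that stage is the "default".

// vertex shader
#version 330
layout(location = 0) in vec3 inPosition;
layout(location = 1) in vec4 inColor;
uniform mat4 uModelViewProjection;   // stands in for the old matrix stack

out vec4 vColor;

void main() {
    vColor = inColor;
    gl_Position = uModelViewProjection * vec4(inPosition, 1.0);
}

// fragment shader
#version 330
in vec4 vColor;
out vec4 fragColor;

void main() {
    fragColor = vColor;
}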

OpenGL ES write depth data to color

Submitted by a 夏天 on 2019-12-02 19:21:07
Question: I'm trying to implement DepthBuffer-like functionality using OpenGL ES on Android. In other words, I'm trying to get the 3D point on the surface that is rendered at point [x, y] on the user's device. In order to do that, I need to be able to read the distance of the fragment at that given point. Answer in different circumstances: When using normal OpenGL you could achieve this by creating a FrameBuffer and then attaching either a RenderBuffer or a Texture with a depth component to it. Both of those approaches
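
Since plain OpenGL ES 2.0 offers no guaranteed way to read the depth buffer back, one common workaround is an extra render pass that encodes gl_FragCoord.z into the RGBA color channels, which can then be read with glReadPixels. A sketch of such a fragment shader, using one commonly used packing scheme (not taken from the question or its answers):

precision highp float;

// spread a [0,1) depth value across four 8-bit channels;
// reconstruct with dot(rgba, vec4(1.0, 1.0/256.0, 1.0/65536.0, 1.0/16777216.0))
vec4 packDepth(float depth) {
    vec4 enc = fract(depth * vec4(1.0, 256.0, 65536.0, 16777216.0));
    enc -= enc.yzww * vec4(1.0 / 256.0, 1.0 / 256.0, 1.0 / 256.0, 0.0);
    return enc;
}

void main() {
    gl_FragColor = packDepth(gl_FragCoord.z);
}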