compute-shader

SSBO as bigger UBO?

落花浮王杯 submitted on 2020-01-20 04:56:06
Question: I am currently doing some rendering in OpenGL 4.3, using UBOs to store all my constant data on the GPU (stuff like material descriptions, matrices, ...). It works, but the small size of a UBO (64 kB on my implementation) forces me to switch buffers numerous times, slowing rendering down; I am looking for a similar way to store a few MB. After a little research I saw that SSBOs allow exactly that, but they also have unwanted 'features': they can be written from the shader and might be slower to read. Is
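A minimal GLSL sketch of the idea, assuming the material struct, binding point and index below (none of them come from the question): an SSBO declared readonly reads much like a UBO inside the shader, but its size limit (GL_MAX_SHADER_STORAGE_BLOCK_SIZE) is far beyond the 64 kB uniform block limit.

```glsl
#version 430
// Assumed material layout; any std430-compatible struct works the same way.
struct Material { vec4 diffuse; vec4 specular; };
layout(std430, binding = 2) readonly buffer Materials { Material materials[]; };

flat in int materialIndex;
out vec4 fragColor;

void main() {
    Material m = materials[materialIndex];
    fragColor = m.diffuse;  // shade with the material fetched from the SSBO
}
```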

Do DirectX Compute Shaders support 2D arrays in shared memory?

…衆ロ難τιáo~ submitted on 2020-01-15 10:18:06
Question: I want to use groupshared memory in a DirectX Compute Shader to reduce global memory bandwidth and hopefully improve performance. My input data is a Texture2D, and I can access it using 2D indexing like so: Input[threadID.xy] . I would like to have a 2D array of shared memory for caching portions of the input data, so I tried the obvious: groupshared float SharedInput[32, 32]; It won't compile. The error message says syntax error: unexpected token ','. Is there any way to have a 2D array of
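A minimal HLSL sketch of the bracket syntax that does compile; the texture declaration and thread-group size below are assumptions, not the asker's code. HLSL multi-dimensional arrays use repeated brackets rather than a comma.

```hlsl
Texture2D<float> Input : register(t0);   // assumed input declaration

groupshared float SharedInput[32][32];   // 2D groupshared array: [rows][columns]

[numthreads(32, 32, 1)]
void CSMain(uint3 gtid : SV_GroupThreadID, uint3 dtid : SV_DispatchThreadID)
{
    // Each thread caches one texel, then the group synchronizes before reuse.
    SharedInput[gtid.y][gtid.x] = Input[dtid.xy];
    GroupMemoryBarrierWithGroupSync();
    // ... work with SharedInput here ...
}
```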

DirectX 11 - Compute shader: Writing to an output resource

十年热恋 submitted on 2020-01-12 02:21:13
Question: I've just started using the compute shader stage in DirectX 11 and encountered some unwanted behaviour when writing to an output resource in the compute shader. I seem to get only zeroes as output, which, to my understanding, means that out-of-bounds reads have been performed in the compute shader (out-of-bounds writes result in no-ops). Creating the compute shader components / Input resources: First I create an ID3D11Buffer* for input data. This is passed as a resource when creating the SRV used
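For reference, a minimal sketch of the output side as it is usually set up in D3D11; the variable names, element count and float element type are assumptions, not the asker's code. The compute shader writes into a structured buffer through a UAV, and a staging copy lets the CPU read the result back to check for zeroes.

```cpp
D3D11_BUFFER_DESC desc = {};
desc.ByteWidth           = elementCount * sizeof(float);
desc.Usage               = D3D11_USAGE_DEFAULT;
desc.BindFlags           = D3D11_BIND_UNORDERED_ACCESS;
desc.MiscFlags           = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
desc.StructureByteStride = sizeof(float);
device->CreateBuffer(&desc, nullptr, &outputBuffer);

D3D11_UNORDERED_ACCESS_VIEW_DESC uavDesc = {};
uavDesc.Format             = DXGI_FORMAT_UNKNOWN;      // required for structured buffers
uavDesc.ViewDimension      = D3D11_UAV_DIMENSION_BUFFER;
uavDesc.Buffer.NumElements = elementCount;
device->CreateUnorderedAccessView(outputBuffer, &uavDesc, &outputUAV);

context->CSSetShaderResources(0, 1, &inputSRV);
context->CSSetUnorderedAccessViews(0, 1, &outputUAV, nullptr);
context->CSSetShader(computeShader, nullptr, 0);
context->Dispatch(groupCount, 1, 1);

// Copy into a D3D11_USAGE_STAGING buffer with CPU_ACCESS_READ, then Map() it
// to inspect the values on the CPU.
context->CopyResource(stagingBuffer, outputBuffer);
```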

Compute Shader not writing to buffer

本秂侑毒 submitted on 2020-01-07 02:36:31
Question: I'm looking for help calling a compute shader from Qt using QOpenGLFunctions_4_3_Core OpenGL functions. Specifically, my call to glDispatchCompute(1024, 1, 1); does not seem to have any effect on the buffer bound to it. How do you bind a buffer to a compute shader in Qt such that the results of the shader can be read back to the C++ side? I create my program and bind it with (Squircle.cpp): computeProgram_ = new QOpenGLShaderProgram(); computeProgram_->addShaderFromSourceFile(QOpenGLShader:
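A minimal sketch of the dispatch-and-readback sequence, assuming an SSBO handle ssbo, binding point 0, a results vector and a gl functions object, none of which are shown in the question. The buffer has to be bound with glBindBufferBase to the binding point declared in the shader, and a memory barrier is needed before reading the results back on the CPU.

```cpp
computeProgram_->bind();
gl->glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo);  // matches layout(std430, binding = 0)
gl->glDispatchCompute(1024, 1, 1);
gl->glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);       // make the shader writes visible
gl->glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
gl->glGetBufferSubData(GL_SHADER_STORAGE_BUFFER, 0,
                       results.size() * sizeof(float), results.data());
```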

How to calculate the number of specified colored pixels using GLSL?

人走茶凉 submitted on 2020-01-04 09:15:20
Question: I have a grayscale texture (8000*8000); the value of each pixel is an ID (actually, this ID is the ID of the triangle to which the fragment belongs; I want to use this method to calculate how many triangles, and which triangles, are visible in my scene). Now I need to count how many unique IDs there are and what they are. I want to implement this with GLSL and minimize the data transfer between GPU RAM and RAM. The initial idea I came up with is to use a shader storage buffer, bind it to an
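A minimal sketch of that idea, assuming the texture holds integer IDs, the triangle count is known, and the binding points shown below (none of these come from the question): one counter per triangle ID in an SSBO, filled with atomic adds and read back after the dispatch.

```glsl
#version 430
layout(local_size_x = 16, local_size_y = 16) in;

layout(binding = 0) uniform usampler2D idTexture;                 // 8000x8000 ID texture
layout(std430, binding = 1) buffer IdCounts { uint counts[]; };   // one slot per triangle ID

void main() {
    ivec2 p = ivec2(gl_GlobalInvocationID.xy);
    if (any(greaterThanEqual(p, textureSize(idTexture, 0)))) return;
    uint id = texelFetch(idTexture, p, 0).r;
    atomicAdd(counts[id], 1u);
}
```

After the dispatch, every non-zero entry in counts[] is a visible triangle ID, so only that comparatively small buffer needs to be transferred back to the CPU.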

Can one fragment access all texture pixel values in WebGL GLSL? (Not just its own TexCoord)

心已入冬 submitted on 2020-01-03 00:56:33
Question: Let's pretend I'm making a compute shader using WebGL and GLSL. In this shader, each fragment (or pixel) would like to look at every pixel on a texture, then decide on its own color. Normally a fragment samples its provided texture coordinate (UV value) from a few textures, but I want to sample effectively all UV values from a single texture for a single fragment. Is this possible? Answer 1: EDIT: I was able to sample from each pixel in a 128x128 texture, but moving to 256x256 causes Chrome to
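A minimal sketch in WebGL1-style GLSL ES, assuming a 128x128 texture and the uniform name below (both are assumptions). A fragment is free to sample any UV it likes, so it can loop over every texel, as long as the loop bounds are compile-time constants.

```glsl
precision highp float;
uniform sampler2D u_data;
const float SIZE = 128.0;

void main() {
    vec4 accum = vec4(0.0);
    for (float y = 0.0; y < SIZE; y += 1.0) {
        for (float x = 0.0; x < SIZE; x += 1.0) {
            // Sample the centre of texel (x, y), independent of this fragment's own UV.
            accum += texture2D(u_data, (vec2(x, y) + 0.5) / SIZE);
        }
    }
    gl_FragColor = accum / (SIZE * SIZE);  // e.g. the average of all texels
}
```

Note that the per-fragment cost grows with the square of the texture size, which can hit the browser's GPU watchdog on larger textures.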

D3D12 Use backbuffer surface as unordered access view (UAV)

风流意气都作罢 submitted on 2019-12-31 04:15:10
Question: I'm making a simple raytracer for a school project where a compute shader is supposed to be used to shade a triangle or some other primitive. For this I'd like to write to a backbuffer surface directly in the compute shader, then present the results immediately. I know for certain that this is possible in DX11, though I can't seem to get it to work in DX12. I couldn't gather that much information about this, but I found this gamedev thread discussing the exact same problem I try to figure out
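A minimal sketch of one common D3D12 workaround, assuming the d3dx12.h helpers and the resource and handle names below (none are from the question): swap-chain backbuffers generally cannot be created with UAV support, so the compute shader writes into an intermediate texture created with D3D12_RESOURCE_FLAG_ALLOW_UNORDERED_ACCESS, which is then copied into the backbuffer before Present().

```cpp
commandList->SetPipelineState(raytracePso.Get());
commandList->SetComputeRootSignature(rootSignature.Get());
commandList->SetComputeRootDescriptorTable(0, intermediateUavGpuHandle);
commandList->Dispatch(width / 8, height / 8, 1);

CD3DX12_RESOURCE_BARRIER barriers[2] = {
    CD3DX12_RESOURCE_BARRIER::Transition(intermediate.Get(),
        D3D12_RESOURCE_STATE_UNORDERED_ACCESS, D3D12_RESOURCE_STATE_COPY_SOURCE),
    CD3DX12_RESOURCE_BARRIER::Transition(backbuffer.Get(),
        D3D12_RESOURCE_STATE_PRESENT, D3D12_RESOURCE_STATE_COPY_DEST),
};
commandList->ResourceBarrier(2, barriers);
commandList->CopyResource(backbuffer.Get(), intermediate.Get());
// Transition the backbuffer back to D3D12_RESOURCE_STATE_PRESENT before presenting.
```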

imageStore in compute shader to depth texture

自古美人都是妖i submitted on 2019-12-25 08:17:37
Question: I can't for the life of me work out how to write to a depth texture using imageStore within a compute shader. I've checked what I'm doing against several examples (e.g. this and this), but I still can't spot the fault. I can write to the texture as a framebuffer, and when calling glTexImage2D(), but for some reason executing this compute shader doesn't affect the named texture (which I'm checking via rendering to screen). You can skip straight to the below accepted answer if the above applies
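A minimal sketch of one workaround, assuming the r32f format and image unit 0 below (neither comes from the question): depth-component textures are not among the formats supported for image load/store, so a common approach is to write into a GL_R32F texture bound as an image and copy or sample it into the depth path afterwards.

```glsl
#version 430
layout(local_size_x = 16, local_size_y = 16) in;
layout(binding = 0, r32f) uniform writeonly image2D depthOut;

void main() {
    ivec2 p = ivec2(gl_GlobalInvocationID.xy);
    imageStore(depthOut, p, vec4(0.5));  // only the .x component is stored for r32f
}
```

On the host side the texture would be bound with something like glBindImageTexture(0, tex, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_R32F).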

OpenGL Compute shader sync different work groups

时光毁灭记忆、已成空白 submitted on 2019-12-24 01:19:52
Question: If you have a compute shader where different work groups in the same dispatch are put in a continuous loop, and you want to signal them all to exit said loop by any one of them setting a flag, is this actually possible? I've tried using a flag in an SSBO marked both coherent and volatile to trigger their exit, which sometimes doesn't work on AMD, it seems. When one of the work groups wants to trigger all of them to exit, I simply set the flag from zero to one directly (as it doesn't matter as long
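A minimal sketch of the pattern, assuming the binding point and flag layout below (not from the question). Reading the flag through an atomic operation instead of a plain load is one way people try to make the signal visible across work groups; note, though, that the GL spec does not guarantee that all work groups of a dispatch run concurrently, so the pattern is inherently fragile.

```glsl
#version 430
layout(local_size_x = 64) in;
layout(std430, binding = 0) coherent volatile buffer Control { uint exitFlag; };

void main() {
    for (;;) {
        // ... one iteration of the actual work ...
        if (atomicOr(exitFlag, 0u) != 0u) break;   // atomic read of the shared flag
    }
    // Any invocation can signal the exit with: atomicExchange(exitFlag, 1u);
}
```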

Compute shader not writing to SSBO

a 夏天 submitted on 2019-12-23 02:36:07
Question: I'm writing a simple test compute shader that writes a value of 5.0 to every element in a buffer. The buffer's values are initialized to -1, so that I know whether creating the buffer or reading it back is the problem. class ComputeShaderWindow : public QOpenGLWindow { public: void initializeGL() { // Create the opengl functions object gl = context()->versionFunctions<QOpenGLFunctions_4_3_Core>(); m_compute_program = new QOpenGLShaderProgram(this); auto compute_shader_s = fs:
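A minimal sketch of the writer side, assuming binding point 0 and the local size below (the question's shader source is not shown): every invocation writes 5.0 into its own slot of the SSBO.

```glsl
#version 430
layout(local_size_x = 256) in;
layout(std430, binding = 0) buffer OutBuf { float data[]; };

void main() {
    uint i = gl_GlobalInvocationID.x;
    if (i < uint(data.length())) data[i] = 5.0;
}
```

After glDispatchCompute, a glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT) is still needed before mapping or reading the buffer back, otherwise the CPU may see the initial -1 values.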