OpenGL compute shader - strange results

Submitted on 2020-08-15 05:26:43

Question


I'm trying to implement a multipass compute shader for image processing. Each pass has an input image and an output image; the next pass's input image is the previous pass's output.

This is my first time using compute shaders in OpenGL, so there may be some problems with my setup. I'm using OpenCV's cv::Mat as the container for the read/copy operations.

Some parts of the code aren't related to the problem, so I didn't include them (e.g. loading the image and initializing the context).

Initialization:

//texture init
glGenTextures(1, &feedbackTexture_);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, feedbackTexture_);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glBindTexture(GL_TEXTURE_2D, 0);

glGenTextures(1, &resultTexture_);
glActiveTexture(GL_TEXTURE0+1);
glBindTexture(GL_TEXTURE_2D, resultTexture_);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glBindTexture(GL_TEXTURE_2D, 0);

// shader init
computeShaderID = glCreateShader(GL_COMPUTE_SHADER);
glShaderSource(computeShaderID, 1, &computeShaderSourcePtr, &computeShaderLength);
glCompileShader(computeShaderID);
programID = glCreateProgram();
glAttachShader(programID, computeShaderID);
glLinkProgram(programID);
glDeleteShader(computeShaderID);

Shader Code:

//shader code (simple invert)
#version 430
layout (local_size_x = 1, local_size_y = 1) in;

layout (location = 0, binding = 0, /*format*/ rgba32f) uniform readonly image2D inImage;
layout (location = 1, binding = 1, /*format*/ rgba32f) uniform writeonly image2D resultImage;

uniform writeonly image2D image;

void main()
{
    // Acquire the coordinates to the texel we are to process.
    ivec2 texelCoords = ivec2(gl_GlobalInvocationID.xy);

    // Read the pixel from the first texture.
    vec4 pixel = imageLoad(inImage, texelCoords);

    pixel.rgb = 1. - pixel.rgb;

    imageStore(resultImage, texelCoords, pixel);
}

Usage:

cv::Mat image = loadImage().clone();
cv::Mat result(image.rows,image.cols,image.type());
// These get the appropriate enums used by glTexImage2D
GLenum internalformat = GLUtils::getMatOpenGLImageFormat(image);
GLenum format = GLUtils::getMatOpenGLFormat(image);
GLenum type = GLUtils::getMatOpenGLType(image);

int dispatchX = 1;
int dispatchY = 1;

for ( int i = 0; i < shaderPasses_.size(); ++i)
{
    // Update textures
    glBindTexture(GL_TEXTURE_2D, feedbackTexture_);
    glTexImage2D(GL_TEXTURE_2D, 0, internalformat, result.cols, result.rows, 0, format, type, result.data);
    glBindTexture(GL_TEXTURE_2D, resultTexture_);
    glTexImage2D(GL_TEXTURE_2D, 0, internalformat, image.cols, image.rows, 0, format, type, 0);
    glBindTexture(GL_TEXTURE_2D, 0);

    glClear(GL_COLOR_BUFFER_BIT);
    std::shared_ptr<Shader> shaderPtr = shaderPasses_[i];
    // Enable shader
    shaderPtr->enable();
    {
        // Bind textures
        // location = 0, binding = 0
        glUniform1i(0,0);
        // binding = 0
        glBindImageTexture(0, feedbackTexture_, 0, GL_FALSE, 0, GL_READ_ONLY, internalformat);
        // location = 1, binding = 1
        glUniform1i(1,1);
        // binding = 1
        glBindImageTexture(1, resultTexture_, 0, GL_FALSE, 0, GL_WRITE_ONLY, internalformat);

        // Dispatch rendering
        glDispatchCompute((GLuint)image.cols/dispatchX,(GLuint)image.rows/dispatchY,1);
        // Barrier will synchronize
        glMemoryBarrier(GL_TEXTURE_UPDATE_BARRIER_BIT);
    }
    // disable shader
    shaderPtr->disable();

    // Here result is now the result of the last pass.
}

Sometimes I get strange results (glitchy textures, partially rendered textures), and the first pixel (at 0,0) is sometimes not written. Did I set everything up correctly, or am I missing something? This texture-based method also seems really slow; is there an alternative that would improve performance?

Edit 1: Changed the memory barrier flag.


Answer 1:


glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);

This is the wrong barrier. The barrier specifies how you intend to access the data *after* the incoherent accesses. If you're trying to read the texture back with glGetTexImage, you must use GL_TEXTURE_UPDATE_BARRIER_BIT.
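As a sketch of the intended ordering (argument lists elided, in the same style as the code below):

```
glDispatchCompute(...);                          // incoherent image stores happen here
glMemoryBarrier(GL_TEXTURE_UPDATE_BARRIER_BIT);  // chosen because the NEXT access is a texture read-back
glGetTexImage(...);                              // now safe to read the shader's results
```

If the next access were instead another shader pass reading via imageLoad, GL_SHADER_IMAGE_ACCESS_BARRIER_BIT would be the appropriate bit; the barrier always describes the downstream access, not the writes that just happened.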




Answer 2:


I'm not 100% sure this will fix your problem, but I don't see anything obviously wrong with the flags you use to initialize your texture settings. When I compared your code to my project, it was the order of the API calls that caught my attention. In your source you have this order:

glGenTextures(...);    // Generate
glActiveTexture(...);  // Set Active
glBindTexture(...);    // Bind Texture
glTexParameteri(...);  // Wrap Setting
glTexParameteri(...);  // Wrap Setting
glTexParameteri(...);  // Mipmap Setting
glTexParameteri(...);  // Mipmap Setting
glBindTexture(...);    // Bind / Unbind

and you repeat this for each texture, changing only the texture variable and the texture unit index.

I don't know if it will make a difference, but following the logical flow I have set up in my engine, try this order and see if it changes anything:

glGenTextures(...);    // Generate
glBindTexture(...);    // Bind Texture
glTexParameteri(...);  // Wrap Setting
glTexParameteri(...);  // Wrap Setting
glTexParameteri(...);  // Mipmap Setting
glTexParameteri(...);  // Mipmap Setting

glActiveTexture(...);  // Set Active
glBindTexture(...);    // Bind / Unbind

I'm not using compute shaders, but my engine has several classes that manage these responsibilities: an AssetStorage class that saves all assets, including image textures, into an in-memory database; a ShaderManager class (currently handling only vertex and fragment shaders) that reads and compiles the shader files, creates the shader programs, sets the attributes and uniforms, links the programs, and runs the shaders; and a Batch class plus a BatchManager class for rendering different types of primitives. Following the flow of logic through my solution, this is what I found in my code.

The AssetStorage class sets up the texture properties, making these API calls in this order inside its add() function when loading a texture into memory:

glGenTextures(...);
glBindTexture(...);
glTexParameteri(...);
glTexParameteri(...);
glTexParameteri(...);
glTexParameteri(...);

AssetStorage then calls these as well:

glPixelStorei(...);
glTexImage2D(...);

The function that adds a texture to AssetStorage finally returns a custom TextureInfo structure.

In my Batch class's render() function, the ShaderManager is asked to set the uniforms for texturing, then to set the texture, and then to set another uniform if the texture has an alpha channel. It is inside the ShaderManager's setTexture() function that glActiveTexture() and glBindTexture() are finally called.

So, in short: try moving your glActiveTexture() call so it sits between the last glTexParameteri() and the final glBindTexture() call, for both textures. I'd also place it after the glPixelStorei() and glTexImage2D() calls, since you want to make the texture active just as you are about to render with it.

As I mentioned, I'm not 100% sure this is the root cause of your problem, but I believe it's worth trying. Please let me know what happens; I'd like to know myself whether the ordering of these API calls has any effect. I would try it in my own solution, but I don't want to break classes that are currently working properly.

One note on your texture-setting flags, in the wrap/repeat section: you could try GL_REPEAT for the first two glTexParameteri() calls instead of GL_CLAMP_TO_EDGE and see what you get. You shouldn't need to worry about the mipmap settings in the last two glTexParameteri() calls, since it appears from your settings that you are not using mipmaps.




Answer 3:


I finally managed to solve this problem!

The problem lay in the cv::Mat constructor, in this line:

cv::Mat result(image.rows,image.cols,image.type());

It DOES allocate the data, but it does NOT initialize it; that's why I got these strange results. I was uploading garbage memory.

Using a function that both allocates AND initializes the data solves the issue, for example:

cv::Mat::zeros
cv::Mat::ones

(Note that cv::Mat::create, like the constructor, only allocates; it does not initialize the elements.)


Source: https://stackoverflow.com/questions/40308450/opengl-compute-shader-strange-results
