Question
I'm trying to do video processing using GLSL. I'm using OpenCV to open a video file and take each frame as a single image, and then I want to use each frame in a GLSL shader.
What is the best/ideal/smart solution to using video with GLSL?
Reading From Video
VideoCapture cap("movie.MOV");
Mat image;
bool success = cap.read(image);
if(!success)
{
printf("Could not grab a frame\n\7");
exit(0);
}
Image to Texture
GLuint tex;
glGenTextures(1, &tex);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, image.cols, image.rows, 0,
GL_BGR, GL_UNSIGNED_BYTE, image.data);
glUniform1i(glGetUniformLocation(shaderProgram, "Texture"), 0);
What needs to be in my render while loop?
Do I need to recompile/reattach/relink my shader every time? Or, once my shader is created and compiled and I call glUseProgram(shaderProgram), can I keep sending it new textures?
The loop I've been using to render a texture to the screen is below. How could I adapt this to work with video? Where would I need to make the calls to update the texture used by the shader?
while(!glfwWindowShouldClose(window))
{
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
glViewport(0,0,512,512);
glBindFramebuffer(GL_READ_FRAMEBUFFER, frameBuffer);
glReadBuffer(GL_COLOR_ATTACHMENT0);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, image.cols, image.rows, 0, 0, image.cols, image.rows, GL_COLOR_BUFFER_BIT, GL_LINEAR);
glfwSwapBuffers(window);
glfwPollEvents();
}
Answer 1:
Let's clarify a few things that need to happen before the loop:
- Set the pixel storage mode with glPixelStorei();
- Generate only one texture with glGenTextures(), because at every iteration of the loop its contents will be replaced with new data;
- Build the shader once: call glCreateShader() to create a shader object, glCompileShader() to compile it, glCreateProgram() to create a program, glAttachShader() to attach the shader object to the program, and finally glLinkProgram() to make everything ready to go.
That said, every iteration of the loop must:
- Clear the color and depth buffers;
- Load the modelview matrix with the identity matrix;
- Specify the location where the drawing is going to happen with glTranslatef();
- Retrieve a new frame from the video;
- Enable the appropriate texture target, bind it, and then transfer the frame to the GPU with glTexImage2D();
- Invoke glUseProgram() to activate your GLSL shader;
- Draw a 2D quad using GL_QUADS or whatever;
- Disable the program with glUseProgram(0);
- Disable the texture target with glDisable(GL_TEXTURE_YOUR_TEXTURE_TARGET).
This is more or less what needs to be done.
By the way: here's my OpenCV/OpenGL/Qt application that retrieves frames from the camera and displays them in a window. No shaders, though.
Good luck!
Answer 2:
You don't need a framebuffer to send textures to the shader. Once texture unit 0 is the active unit and the sampler2D uniform in your shader is set to 0, every call to glBindTexture() points that sampler at whichever texture you specify in the function parameter. So no, you don't need to relink or recompile your shader each time you want to change textures.
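For completeness, the shader side of this never changes between frames; the sampler just reads whatever is bound to its unit. A minimal fragment shader matching the "Texture" uniform from the question might look like this (a sketch; `TexCoord` is assumed to be interpolated in from the vertex shader):

```glsl
#version 150

uniform sampler2D Texture; // stays set to unit 0; glBindTexture() decides what it sees
in vec2 TexCoord;          // assumed passed through from the vertex shader
out vec4 outColor;

void main()
{
    // Samples whichever texture is currently bound to GL_TEXTURE0.
    outColor = texture(Texture, TexCoord);
}
```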
Source: https://stackoverflow.com/questions/25879045/how-to-input-video-frames-into-a-glsl-shader