I'm trying to render frames grabbed and converted from a video using ffmpeg to an OpenGL texture to be put on a quad. I've pretty much exhausted Google and not found an answer.
Is the texture initialized when you call glTexSubImage2D? You need to call glTexImage2D (not Sub) one time to initialize the texture object. Use NULL for the data pointer; OpenGL will then initialize a texture without copying data.
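A minimal sketch of that pattern (tex, width, height and frame_pixels are placeholder names, not from the question's code):

/* Once, at setup: allocate the texture storage. The NULL pointer means
   no pixel data is copied, only storage is created. */
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
             GL_BGR, GL_UNSIGNED_BYTE, NULL);

/* Every frame: only replace the contents of the already allocated texture. */
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_BGR, GL_UNSIGNED_BYTE, frame_pixels);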
EDIT
You're not supplying mipmap levels. So did you disable mipmapping?
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, linear_interpolation ? GL_LINEAR : GL_NEAREST);
EDIT 2
The image being upside down is no surprise, as most image formats have the origin in the upper left, while OpenGL places the texture image's origin in the lower left. The banding you see there looks like a wrong row stride.
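As an illustration only (linesize here stands for whatever row stride in bytes your decoder reports for the frame; it is not a variable from the code below):

/* If the frame's rows are padded, tell OpenGL the real row length in pixels. */
glPixelStorei(GL_UNPACK_ROW_LENGTH, linesize / 3);   /* 3 bytes per BGR pixel */

/* The upside-down image is usually fixed on the quad itself, e.g. by using
   t = 1.0 at the top vertices and t = 0.0 at the bottom vertices. */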
EDIT 3
I did this kind of stuff myself about a year ago. I wrote a small wrapper for ffmpeg, which I called aveasy: https://github.com/datenwolf/aveasy
And this is some code to put the data fetched using aveasy into OpenGL textures:
#include <stdlib.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <math.h>
#include <GL/glew.h>
#include "camera.h"
#include "aveasy.h"
#define CAM_DESIRED_WIDTH 640
#define CAM_DESIRED_HEIGHT 480
AVEasyInputContext *camera_av;
char const *camera_path = "/dev/video0";
GLuint camera_texture;
int open_camera(void)
{
    AVEasyInputContext *ctx;

    glGenTextures(1, &camera_texture);

    ctx = aveasy_input_open_v4l2(
        camera_path,
        CAM_DESIRED_WIDTH,
        CAM_DESIRED_HEIGHT,
        CODEC_ID_MJPEG,
        PIX_FMT_BGR24 );
    camera_av = ctx;

    if(!ctx) {
        return 0;
    }

    /* OpenGL-2 or later is assumed; OpenGL-2 supports NPOT textures. */
    glBindTexture(GL_TEXTURE_2D, camera_texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    /* Allocate the texture once, without copying any data (NULL pointer). */
    glTexImage2D(
        GL_TEXTURE_2D,
        0,
        GL_RGB,
        aveasy_input_width(ctx),
        aveasy_input_height(ctx),
        0,
        GL_BGR,
        GL_UNSIGNED_BYTE,
        NULL );

    return 1;
}
void update_camera(void)
{
    AVEasyInputContext *ctx = camera_av;
    void *buffer;

    /* Tightly packed BGR24 data: byte alignment of 1, no padding between rows. */
    glPixelStorei( GL_UNPACK_SWAP_BYTES, GL_FALSE );
    glPixelStorei( GL_UNPACK_LSB_FIRST, GL_TRUE );
    glPixelStorei( GL_UNPACK_ROW_LENGTH, 0 );
    glPixelStorei( GL_UNPACK_SKIP_PIXELS, 0 );
    glPixelStorei( GL_UNPACK_SKIP_ROWS, 0 );
    glPixelStorei( GL_UNPACK_ALIGNMENT, 1 );

    if(!ctx)
        return;

    if( !( buffer = aveasy_input_read_frame(ctx) ) )
        return;

    /* Update the already allocated texture with the new frame. */
    glBindTexture(GL_TEXTURE_2D, camera_texture);
    glTexSubImage2D(
        GL_TEXTURE_2D,
        0,
        0,
        0,
        aveasy_input_width(ctx),
        aveasy_input_height(ctx),
        GL_BGR,
        GL_UNSIGNED_BYTE,
        buffer );
}
void close_cameras(void)
{
    aveasy_input_close(camera_av);
    camera_av = 0;
}
I'm using this in a project and it works there, so this code is tested, sort of.
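For completeness, a rough sketch of how these functions could be driven from a render loop (window and GL context creation, and drawing the textured quad, are left out; running is a placeholder flag):

if(!open_camera())
    return 1;

while(running) {
    update_camera();   /* grab a frame and upload it into camera_texture */

    glBindTexture(GL_TEXTURE_2D, camera_texture);
    /* ... draw the textured quad and swap buffers ... */
}

close_cameras();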
The file being used is a .wmv at 854x480. Could this be the problem? The fact that I'm just telling it to go 512x256?
Yes!
The striped pattern is an obvious indication that you're mismatching data sizes (row size). (Since the colors are correct, RGB vs. BGR vs. BGRA and the number of components are correct.)
You're telling OpenGL that the texture you're uploading is 512x256 (which it isn't, AFAICT). Use the real dimensions (NPOT; your card ought to support non-power-of-two textures if it's not ancient).
Otherwise, resize/pad your data before uploading it as a 1024x512 texture.
Update
I'm more familiar with OpenGL than with the other functions you're calling.
sws_scale might do what you want (i.e. scaling the image down to a POT size). However, scaling each frame might be slow.
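If you do go the scaling route, a rough sketch with libswscale could look like this (src_pixels, dst_pixels and the sizes are placeholders; it assumes packed BGR24 data and the old PIX_FMT_* constant names used elsewhere in this thread):

#include <libswscale/swscale.h>

struct SwsContext *sws = sws_getContext(
    src_w, src_h, PIX_FMT_BGR24,     /* source: the decoded frame size  */
    1024,  512,   PIX_FMT_BGR24,     /* destination: a POT-sized buffer */
    SWS_BILINEAR, NULL, NULL, NULL);

const uint8_t *src_data[4] = { src_pixels, NULL, NULL, NULL };
int src_linesize[4]        = { src_w * 3, 0, 0, 0 };
uint8_t *dst_data[4]       = { dst_pixels, NULL, NULL, NULL };
int dst_linesize[4]        = { 1024 * 3, 0, 0, 0 };

sws_scale(sws, src_data, src_linesize, 0, src_h, dst_data, dst_linesize);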
I'd use padding instead (which means copying a small image (your video frame) into part of a big OpenGL texture).
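A minimal sketch of the padding approach for an 854x480 frame inside a 1024x512 texture (frame_pixels is a placeholder for your decoded frame data):

/* Once: allocate a 1024x512 texture without uploading any pixel data. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 1024, 512, 0,
             GL_BGR, GL_UNSIGNED_BYTE, NULL);

/* Per frame: copy the 854x480 frame into one corner of the texture. */
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 854, 480,
                GL_BGR, GL_UNSIGNED_BYTE, frame_pixels);

/* On the quad, only cover the used region with texture coordinates. */
float s_max = 854.0f / 1024.0f;
float t_max = 480.0f /  512.0f;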
Some other tips: