Decoding video directly into a texture in a separate thread

Submitted by 为君一笑 on 2019-12-08 13:13:20

Question


Is it possible to decode video using ffmpeg's capabilities directly into a texture, asynchronously? I need to output the video onto geometry.

There is the mpv video player, which can output video directly into a framebuffer and uses other close-to-the-metal features, but is there a minimalistic example suitable for embedded devices (OpenGL ES 2.0 or 3.0)?

It would be nice if the texture did not leave GPU memory for the whole frame time.


Answer 1:


I currently use sws_scale to trim the edges off frames from MPEG-TS streams, as some frames carry 16 or even 32 extra pixels at the edges that are only used during decoding. This padding isn't needed for most uses. Instead, I use sws_scale to copy directly into my own buffers.

ff->scale_context = sws_getContext(wid, hgt, ff->vid_ctx->pix_fmt,  // usually YUV420
                               wid, hgt, AV_PIX_FMT_YUV420P,        // trim edges and copy
                               SWS_FAST_BILINEAR, NULL, NULL, NULL);

// set up my own buffers to copy the frame into

uint8_t *data[4]    = { vframe->yframe, vframe->uframe, vframe->vframe, NULL };
int     linesize[4] = { ff->vid_ctx->width, ff->vid_ctx->width / 2, ff->vid_ctx->width / 2, 0 };

int ret = sws_scale(ff->scale_context,
          (const uint8_t **)frame->data, frame->linesize,
          0, ff->vid_ctx->height,
          data, linesize);

You will need to adjust if the frames are in another format.
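For reference, here is a minimal sketch of how the destination buffers above might be sized for YUV420P, where each chroma plane is half the width and half the height of the luma plane. The vframe struct and field names are illustrative, matching the data[]/linesize[] setup above, not a real API:

```c
#include <stdint.h>
#include <stdlib.h>

// Illustrative frame-buffer struct matching the data[]/linesize[] arrays above.
typedef struct {
    uint8_t *yframe;  // full-resolution luma plane: wid x hgt bytes
    uint8_t *uframe;  // quarter-size chroma plane: (wid/2) x (hgt/2) bytes
    uint8_t *vframe;  // quarter-size chroma plane: (wid/2) x (hgt/2) bytes
} vframe_t;

// Allocate YUV420P planes for a wid x hgt frame (dimensions assumed even).
static int vframe_alloc(vframe_t *vf, int wid, int hgt)
{
    vf->yframe = malloc((size_t)wid * hgt);
    vf->uframe = malloc((size_t)(wid / 2) * (hgt / 2));
    vf->vframe = malloc((size_t)(wid / 2) * (hgt / 2));
    return (vf->yframe && vf->uframe && vf->vframe) ? 0 : -1;
}
```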

The OpenGL ES shaders I use, which save a lot of overhead by doing the YUV-to-RGB conversion on the GPU:

// YUV shader (converts YUV planes to RGB on the fly)

static char vertexYUV[] = "attribute vec4 qt_Vertex; \
attribute vec2 qt_InUVCoords; \
varying vec2 qt_TexCoord0; \
 \
void main(void) \
{ \
    gl_Position = qt_Vertex; \
    gl_Position.z = 0.0;\
    qt_TexCoord0 = qt_InUVCoords; \
} \
";

static char fragmentYUV[] = "precision mediump float; \
uniform sampler2D qt_TextureY; \
uniform sampler2D qt_TextureU; \
uniform sampler2D qt_TextureV; \
varying vec2 qt_TexCoord0; \
void main(void) \
{ \
    float y = texture2D(qt_TextureY, qt_TexCoord0).r; \
    float u = texture2D(qt_TextureU, qt_TexCoord0).r - 0.5; \
    float v = texture2D(qt_TextureV, qt_TexCoord0).r - 0.5; \
    gl_FragColor = vec4( y + 1.403 * v, \
                         y - 0.344 * u - 0.714 * v, \
                         y + 1.770 * u, 1.0); \
}";

If you use NV12 format instead of YUV420, the U and V values are interleaved in a single plane, and you fetch both from one texture using either the "r, g" or "x, y" swizzle, whichever you prefer.
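An NV12 variant of the fragment shader above could look like the following sketch. It assumes the interleaved UV plane is uploaded as a two-channel GL_LUMINANCE_ALPHA texture on ES 2.0 (sampled via ".ra"; with GL_RG8 on ES 3.0 you would sample ".rg" instead), and "qt_TextureUV" is a made-up uniform name, not part of the original answer:

```c
// NV12 fragment shader sketch: one full-resolution Y texture plus one
// half-resolution interleaved UV texture. The conversion constants match
// the YUV420 shader above; only the chroma fetch changes.
static const char fragmentNV12[] = "precision mediump float; \
uniform sampler2D qt_TextureY; \
uniform sampler2D qt_TextureUV; \
varying vec2 qt_TexCoord0; \
void main(void) \
{ \
    float y = texture2D(qt_TextureY, qt_TexCoord0).r; \
    vec2 uv = texture2D(qt_TextureUV, qt_TexCoord0).ra - 0.5; \
    gl_FragColor = vec4( y + 1.403 * uv.y, \
                         y - 0.344 * uv.x - 0.714 * uv.y, \
                         y + 1.770 * uv.x, 1.0); \
}";
```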

Each YUV plane from your buffer is uploaded into its own texture ("qt_TextureY", "qt_TextureU" and "qt_TextureV").

As mentioned in the comments, FFmpeg will automatically use HW decoding if the build supports it.

Also, to shave CPU overhead, I separate all decoding streams into their own threads.
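That thread separation can be sketched with POSIX threads, one decode loop per stream. The decode_stream body below is a stand-in counter, not the real FFmpeg avcodec_send_packet / avcodec_receive_frame loop, and the context struct is hypothetical:

```c
#include <pthread.h>

// Stand-in per-stream context; a real one would hold the AVCodecContext,
// a packet queue, and an output frame queue for that stream.
typedef struct {
    int stream_index;
    int frames_decoded;  // written only by this stream's decode thread
} stream_ctx_t;

// One decode loop per stream; the loop body is a placeholder for the
// real decode-and-enqueue work.
static void *decode_stream(void *arg)
{
    stream_ctx_t *ctx = arg;
    for (int i = 0; i < 100; i++)
        ctx->frames_decoded++;  // pretend to decode one frame
    return NULL;
}

// Launch one thread per stream (up to 16 here) and wait for all of them.
static void decode_all(stream_ctx_t *streams, int nstreams)
{
    pthread_t tid[16];
    for (int i = 0; i < nstreams; i++)
        pthread_create(&tid[i], NULL, decode_stream, &streams[i]);
    for (int i = 0; i < nstreams; i++)
        pthread_join(tid[i], NULL);
}
```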

Good luck. Anything else, just ask.



Source: https://stackoverflow.com/questions/45158254/decoding-video-directly-into-a-texture-in-separate-thread
