android ffmpeg opengl es render movie


Question


I am trying to render video via the NDK in order to add some features that just aren't supported in the SDK. I am using FFmpeg to decode the video, can compile it via the NDK, and used this as a starting point. I have modified that example so that instead of using glDrawTexiOES to draw the texture, I set up some vertices and render the texture on top of them (the OpenGL ES way of rendering a quad).

Below is what I am doing to render, but creating the texture with glTexImage2D is slow. I want to know if there is any way to speed this up, or give the appearance of speeding it up, such as setting up some textures in the background and rendering pre-prepared textures. Or is there any other way to draw the video frames to the screen more quickly on Android? Currently I can only get about 12fps.

glClear(GL_COLOR_BUFFER_BIT);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glBindTexture(GL_TEXTURE_2D, textureConverted);

// this is slow: it reallocates the texture storage and uploads every frame
glTexImage2D(GL_TEXTURE_2D,     /* target */
             0,                 /* level */
             GL_RGBA,           /* internal format */
             textureWidth,      /* width */
             textureHeight,     /* height */
             0,                 /* border */
             GL_RGBA,           /* format */
             GL_UNSIGNED_BYTE,  /* type */
             pFrameConverted->data[0]);

glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
glVertexPointer(3, GL_FLOAT, 0, vertices);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, indices);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);

EDIT: I changed my code to initialize the texture with glTexImage2D only once and update it with glTexSubImage2D; it didn't make much of an improvement to the framerate.
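For reference, here is a minimal sketch of that allocate-once/update-per-frame pattern, using the texture and dimension names from the code above:

// One-time setup: allocate storage once, passing NULL so no pixels are uploaded.
glBindTexture(GL_TEXTURE_2D, textureConverted);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, textureWidth, textureHeight,
             0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

// Per frame: overwrite the existing storage instead of reallocating it.
glBindTexture(GL_TEXTURE_2D, textureConverted);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, textureWidth, textureHeight,
                GL_RGBA, GL_UNSIGNED_BYTE, pFrameConverted->data[0]);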

I then modified the code to write into a native Bitmap object from the NDK. With this approach I have a background thread that processes the next frames and populates the bitmap object on the native side. I think this has potential, but I need to speed up the conversion of the AVFrame object from FFmpeg into a native bitmap. Below is what I am currently using to convert, a brute-force approach. Is there any way to speed up or optimize this conversion?

// Copies an RGB24 AVFrame into an RGBA8888 Android bitmap, pixel by pixel.
static void fill_bitmap(AndroidBitmapInfo *info, void *pixels, AVFrame *pFrame)
{
    uint8_t *frameLine;

    int yy;
    for (yy = 0; yy < info->height; yy++) {
        uint8_t *line = (uint8_t *)pixels;
        frameLine = (uint8_t *)pFrame->data[0] + (yy * pFrame->linesize[0]);

        int xx;
        for (xx = 0; xx < info->width; xx++) {
            int out_offset = xx * 4;  // 4 bytes per destination pixel (RGBA)
            int in_offset = xx * 3;   // 3 bytes per source pixel (RGB)

            line[out_offset] = frameLine[in_offset];
            line[out_offset+1] = frameLine[in_offset+1];
            line[out_offset+2] = frameLine[in_offset+2];
            line[out_offset+3] = 0;
        }
        pixels = (char *)pixels + info->stride;  // advance one bitmap row
    }
}

Answer 1:


Yes, texture (and buffer, and shader, and framebuffer) creation is slow.

That's why you should create the texture only once. After it is created, you can modify its data by calling glTexSubImage2D.

And to make uploading texture data faster, create two textures. While you use one to display, upload frame data from ffmpeg to the second one. When you display the second one, upload data to the first one, and repeat from the beginning.
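A minimal sketch of that ping-pong scheme (the texture array and draw_quad are placeholders, not from the original code; textureWidth/textureHeight come from the question):

GLuint textures[2];
int current = 0;  // index of the texture shown this frame

void render_frame(const uint8_t *rgba) {
    int next = 1 - current;

    // Upload the new frame into the texture that is NOT being displayed.
    glBindTexture(GL_TEXTURE_2D, textures[next]);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, textureWidth, textureHeight,
                    GL_RGBA, GL_UNSIGNED_BYTE, rgba);

    // Draw with the texture uploaded last frame, then swap roles.
    glBindTexture(GL_TEXTURE_2D, textures[current]);
    draw_quad();  // hypothetical: the glDrawElements path from the question
    current = next;
}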

I think it will still not be very fast. You could try the jnigraphics library, which allows you to access a Bitmap object's pixels from the NDK. After that, you just display this Bitmap on screen on the Java side.
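A minimal sketch of that jnigraphics path, assuming the Java side passes an ARGB_8888 Bitmap and redraws it after the call (the JNI class/method name is made up for illustration; fill_bitmap and pFrameConverted refer to the question's code, and error handling is elided):

#include <jni.h>
#include <android/bitmap.h>      // link with -ljnigraphics
#include <libavcodec/avcodec.h>  // AVFrame

extern AVFrame *pFrameConverted;  // the converted frame from the question (assumed)

JNIEXPORT void JNICALL
Java_com_example_Player_renderFrame(JNIEnv *env, jobject thiz, jobject bitmap)
{
    AndroidBitmapInfo info;
    void *pixels;

    if (AndroidBitmap_getInfo(env, bitmap, &info) < 0) return;
    if (AndroidBitmap_lockPixels(env, bitmap, &pixels) < 0) return;

    fill_bitmap(&info, pixels, pFrameConverted);  // the question's converter

    AndroidBitmap_unlockPixels(env, bitmap);
    // On the Java side, invalidate the View that draws this Bitmap.
}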




Answer 2:


A couple of minor additions will solve your problem. First convert your AVFrame to RGB with swscale, then apply it directly to your texture, i.e.:

AVPicture *pFrameConverted;
struct SwsContext *img_convert_ctx = NULL;

void init() {
    pFrameConverted = (AVPicture *)avcodec_alloc_frame();
    avpicture_alloc(pFrameConverted, AV_PIX_FMT_RGB565, videoWidth, videoHeight);
    // Source and destination sizes match, so swscale picks an
    // unscaled converter on its own.
    img_convert_ctx = sws_getCachedContext(img_convert_ctx,
                    videoWidth,
                    videoHeight,
                    pCodecCtx->pix_fmt,
                    videoWidth,
                    videoHeight,
                    AV_PIX_FMT_RGB565,
                    SWS_FAST_BILINEAR,
                    NULL, NULL, NULL);
}

void render(AVFrame *pFrame) {
    sws_scale(img_convert_ctx, (const uint8_t * const *)pFrame->data,
              pFrame->linesize, 0, pFrame->height,
              pFrameConverted->data, pFrameConverted->linesize);
    glClear(GL_COLOR_BUFFER_BIT);
    // RGB565 pixels are uploaded as GL_RGB / GL_UNSIGNED_SHORT_5_6_5.
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, videoWidth, videoHeight,
                    GL_RGB, GL_UNSIGNED_SHORT_5_6_5, pFrameConverted->data[0]);
    glDrawTexiOES(0, 0, 0, videoWidth, videoHeight);
}
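A note on the design choice (my addition, not the answer's): RGB565 is two bytes per pixel, so it halves the data uploaded each frame compared with RGBA8888, at the cost of color depth. In OpenGL ES the format/type passed to glTexSubImage2D must match the texture's original allocation, so the texture itself should also have been created with GL_RGB / GL_UNSIGNED_SHORT_5_6_5.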



Answer 3:


Maybe you can use jnigraphics, as in https://github.com/havlenapetr/FFMpeg/commits/debug. But if you get YUV data after decoding a frame, you have to convert it to RGB565 first, which is too slow. Using Android's MediaPlayer is a good idea.




Answer 4:


Yes, you can optimize this code:

static void fill_bitmap(AndroidBitmapInfo *info, void *pixels, AVFrame *pFrame)
{
    uint8_t *frameLine;

    int yy;
    for (yy = 0; yy < info->height; yy++) {
        uint8_t *line = (uint8_t *)pixels;
        frameLine = (uint8_t *)pFrame->data[0] + (yy * pFrame->linesize[0]);

        int xx;
        for (xx = 0; xx < info->width; xx++) {
            int out_offset = xx * 4;
            int in_offset = xx * 3;

            line[out_offset] = frameLine[in_offset];
            line[out_offset+1] = frameLine[in_offset+1];
            line[out_offset+2] = frameLine[in_offset+2];
            line[out_offset+3] = 0;
        }
        pixels = (char *)pixels + info->stride;
    }
}

to be something like:

static void fill_bitmap(AndroidBitmapInfo *info, void *pixels, AVFrame *pFrame)
{
    uint8_t *frameLine = (uint8_t *)pFrame->data[0];

    int yy;
    for (yy = 0; yy < info->height; yy++) {
        uint8_t *line = (uint8_t *)pixels;

        int xx;
        int out_offset = 0;
        int in_offset = 0;

        for (xx = 0; xx < info->width; xx++) {
            line[out_offset] = frameLine[in_offset];
            line[out_offset+1] = frameLine[in_offset+1];
            line[out_offset+2] = frameLine[in_offset+2];
            line[out_offset+3] = 0;

            out_offset += 4;  // advance one RGBA pixel
            in_offset += 3;   // advance one RGB pixel
        }
        pixels = (char *)pixels + info->stride;

        frameLine += pFrame->linesize[0];  // row pointer carried across rows, no per-row multiply
    }
}

That will save you some cycles.
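Going a step further, here is a sketch of the same loop rewritten with pointer increments instead of per-pixel index arithmetic; same behavior, assuming packed RGB24 input as above (the function name is made up for illustration):

#include <stdint.h>
#include <android/bitmap.h>      // AndroidBitmapInfo
#include <libavcodec/avcodec.h>  // AVFrame

static void fill_bitmap_ptr(AndroidBitmapInfo *info, void *pixels, AVFrame *pFrame)
{
    const uint8_t *src_row = pFrame->data[0];
    uint8_t *dst_row = (uint8_t *)pixels;

    int yy;
    for (yy = 0; yy < info->height; yy++) {
        const uint8_t *src = src_row;
        uint8_t *dst = dst_row;

        int xx;
        for (xx = 0; xx < info->width; xx++) {
            *dst++ = *src++;  // R
            *dst++ = *src++;  // G
            *dst++ = *src++;  // B
            *dst++ = 0;       // A (unused)
        }
        src_row += pFrame->linesize[0];  // next source row
        dst_row += info->stride;         // next bitmap row
    }
}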



Source: https://stackoverflow.com/questions/8867616/android-ffmpeg-opengl-es-render-movie
