Writing alpha channel into decoded ffmpeg frame

Asked by 跟風遠走 on 2019-12-12 03:38:43

Question


I am looking for a fast way to add my own alpha channel to a decoded ffmpeg frame.

I have an AVI file with RGB information, and I have a synchronized video stream describing the transparency alpha channel (grayscale). While decoding the AVI file using ffmpeg, I want to convert the output frame to RGBA, while adding my own alpha information. In the end, I would obtain a semi-transparent video stream.

Is there an optimized function, maybe in libswscale or libswresample, that does this better than just iterating over the pixels?

Basically, I would like to be able to write the function below, if only something like sws_scale_and_add_alpha existed:

void* FFmpegLib_nextFrame_withAlpha(void* _handle, uint8_t* my_alpha_channel)
{
    FFmpegLibHandle* handle = (FFmpegLibHandle*)_handle;
    AVPacket        packet;
    int             frameFinished;

    while(av_read_frame(handle->pFormatCtx, &packet) >= 0) {
        // Is this a packet from the video stream?
        if(packet.stream_index==handle->videoStream) {
            // Decode video frame
            avcodec_decode_video2(handle->pCodecCtx, handle->pFrame, &frameFinished, &packet);
            // Did we get a video frame?
            if(frameFinished) {
                sws_scale_and_add_alpha
                (
                    handle->sws_ctx,
                    (uint8_t const * const *)handle->pFrame->data,
                    handle->pFrame->linesize,
                    0,
                    handle->pCodecCtx->height,
                    handle->pFrameARGB->data,
                    handle->pFrameARGB->linesize,
                    my_alpha_channel
                );

                return handle->pFrameARGB->data;
            }
        }
    }

    return NULL;
}

Answer 1:


I can think of two ways to do this. First, for merging an alpha channel on the command line, ffmpeg provides the alphamerge filter. You can do the same thing from C, though it takes more code (there is a video filter example in the ffmpeg source).

The second is to code it ourselves against the AVFrame structure. The data field of AVFrame holds the pixel data, and we need to pack our alpha channel into it.

First, convert the decoded frame to packed ARGB as usual:

// pFrameARGB must already be allocated with pix_fmt AV_PIX_FMT_ARGB
sws_scale(sws_ctx, (const uint8_t * const *)pFrame->data, pFrame->linesize,
          0, height, pFrameARGB->data, pFrameARGB->linesize);

AVFrame.data is a multi-dimensional array containing the different planes. Here we have a packed ARGB image, not a planar one, so data[0] contains all the pixels we need.

// C++ example; easy to convert to plain C
uint8_t *p = pFrameARGB->data[0];
for (int i = 0; i < width * height; i++) {
    int row = i / width;
    int col = i % width;
    // linesize[0] is the byte stride of the packed plane and may be
    // larger than width * 4 because of alignment padding
    int offset = row * pFrameARGB->linesize[0] + col * 4;
    p[offset] = my_alpha_channel[i]; // alpha is the first byte of each ARGB pixel
}
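The stride handling above can be exercised on its own; here is a minimal standalone sketch that writes an alpha plane into a padded packed-ARGB buffer (no ffmpeg required; the function name is illustrative):

```cpp
#include <cstdint>
#include <cstring>

// Write an alpha plane into the first byte of every 4-byte ARGB pixel,
// respecting a linesize (byte stride) that may exceed width * 4 because
// of row padding.
void write_alpha_packed_argb(uint8_t *data, int linesize,
                             const uint8_t *alpha, int width, int height)
{
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            data[y * linesize + x * 4] = alpha[y * width + x];
        }
    }
}
```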


Source: https://stackoverflow.com/questions/38951088/writing-alpha-channel-into-decoded-ffmpeg-frame
