How to convert RGB to YUV420p for the ffmpeg encoder?

Asked by Anonymous (unverified) on 2019-12-03 02:32:02

Question:

I want to make an .avi video file from bitmap images using C++ code. I wrote the following:

//Get RGB array data from bmp file
uint8_t* rgb24Data = new uint8_t[3*imgWidth*imgHeight];
hBitmap = (HBITMAP) LoadImage( NULL, _T("myfile.bmp"), IMAGE_BITMAP, 0, 0, LR_LOADFROMFILE);
GetDIBits(hdc, hBitmap, 0, imgHeight, rgb24Data, (BITMAPINFO*)&bmi, DIB_RGB_COLORS);

/* Allocate the encoded raw picture. */
AVPicture dst_picture;
avpicture_alloc(&dst_picture, AV_PIX_FMT_YUV420P, imgWidth, imgHeight);

/* Convert rgb24Data to YUV420p and store it into array dst_picture.data */
RGB24toYUV420P(imgWidth, imgHeight, rgb24Data, dst_picture.data); //How to implement this function?

//code for encoding frame dst_picture here

My problem is how to implement the RGB24toYUV420P() function: it should convert the RGB24 data in rgb24Data to YUV420p and store it in dst_picture.data for the ffmpeg encoder.
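
In other words, a hand-written RGB24toYUV420P() has to produce one Y sample per pixel plus one U and one V sample per 2x2 pixel block. Below is a minimal sketch of such a function, assuming the byte order really is R,G,B, that width and height are even, and that the planes in dst_picture.data are tightly packed (linesizes of width, width/2 and width/2). The answers that follow use libswscale instead, which handles strides and colour conversion for you.

/* Sketch only: naive BT.601 (studio range) RGB24 -> YUV420p conversion. */
void RGB24toYUV420P(int width, int height, const uint8_t *rgb, uint8_t *dst[3])
{
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            const uint8_t *p = rgb + 3 * (y * width + x);
            int r = p[0], g = p[1], b = p[2];
            /* One luma sample per pixel. */
            dst[0][y * width + x] = ((66 * r + 129 * g + 25 * b + 128) >> 8) + 16;
            /* One chroma pair per 2x2 block, taken from its top-left pixel
               (averaging the four pixels would be slightly more accurate). */
            if (y % 2 == 0 && x % 2 == 0) {
                int i = (y / 2) * (width / 2) + x / 2;
                dst[1][i] = ((-38 * r - 74 * g + 112 * b + 128) >> 8) + 128;
                dst[2][i] = ((112 * r - 94 * g - 18 * b + 128) >> 8) + 128;
            }
        }
    }
}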

Answer 1:

You can use libswscale (the SwScale API) for this.

Something like this:

#include <libswscale/swscale.h>

SwsContext * ctx = sws_getContext(imgWidth, imgHeight,
                                  AV_PIX_FMT_RGB24, imgWidth, imgHeight,
                                  AV_PIX_FMT_YUV420P, 0, 0, 0, 0);
uint8_t * inData[1]    = { rgb24Data };     // RGB24 has one plane
int      inLinesize[1] = { 3 * imgWidth };  // RGB stride
sws_scale(ctx, inData, inLinesize, 0, imgHeight,
          dst_picture.data, dst_picture.linesize);

Note that you should create an instance of the SwsContext object only once, not for each frame.
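
For example, reusing the names from the question (the frame loop and numFrames are just placeholders for however you drive the encoder), the intended lifetime looks like this:

// Create the conversion context once...
SwsContext * ctx = sws_getContext(imgWidth, imgHeight, AV_PIX_FMT_RGB24,
                                  imgWidth, imgHeight, AV_PIX_FMT_YUV420P,
                                  0, 0, 0, 0);
for (int i = 0; i < numFrames; i++) {
    // ...fill rgb24Data with the current frame, then reuse the same context...
    uint8_t * inData[1]    = { rgb24Data };
    int      inLinesize[1] = { 3 * imgWidth };
    sws_scale(ctx, inData, inLinesize, 0, imgHeight,
              dst_picture.data, dst_picture.linesize);
    // ...encode dst_picture here...
}
// ...and free it once at the end.
sws_freeContext(ctx);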



Answer 2:

Runnable example on FFmpeg 2.7.6

This answer got me on the right path, but:

  • the API has changed slightly since then: SwsContext * must be struct SwsContext * instead
  • I wanted a minimal runnable example to test it out

The example synthesizes and encodes some colorful frames generated by generate_rgb.

ffmpeg_encoder_set_frame_yuv_from_rgb does the RGB24 to YUV conversion.

Preview of generated output.

#include <stdio.h>
#include <stdlib.h>

#include <libavcodec/avcodec.h>
#include <libavutil/imgutils.h>
#include <libavutil/opt.h>
#include <libswscale/swscale.h>

static AVCodecContext *c = NULL;
static AVFrame *frame;
static AVPacket pkt;
static FILE *file;
struct SwsContext *sws_context = NULL;

/* Convert one RGB24 input frame to the YUV420P frame that gets encoded. */
static void ffmpeg_encoder_set_frame_yuv_from_rgb(uint8_t *rgb) {
    const int in_linesize[1] = { 3 * c->width };
    sws_context = sws_getCachedContext(sws_context,
            c->width, c->height, AV_PIX_FMT_RGB24,
            c->width, c->height, AV_PIX_FMT_YUV420P,
            0, 0, 0, 0);
    sws_scale(sws_context, (const uint8_t * const *)&rgb, in_linesize, 0,
            c->height, frame->data, frame->linesize);
}

/* Synthesize a colourful RGB24 test frame. */
uint8_t* generate_rgb(int width, int height, int pts, uint8_t *rgb) {
    int x, y, cur;
    rgb = realloc(rgb, 3 * sizeof(uint8_t) * height * width);
    for (y = 0; y < height; y++) {
        for (x = 0; x < width; x++) {
            cur = 3 * (y * width + x);
            rgb[cur + 0] = 0;
            rgb[cur + 1] = 0;
            rgb[cur + 2] = 0;
            /* Solid colour per quadrant, flipped every 25 frames
               (the exact colour choices here are just an example). */
            if ((frame->pts / 25) % 2 == 0) {
                if (y < height / 2) {
                    if (x < width / 2) {
                        /* Keep black. */
                    } else {
                        rgb[cur + 0] = 255;
                    }
                } else {
                    if (x < width / 2) {
                        rgb[cur + 1] = 255;
                    } else {
                        rgb[cur + 2] = 255;
                    }
                }
            } else {
                if (y < height / 2) {
                    rgb[cur + 0] = 255;
                    if (x < width / 2) {
                        rgb[cur + 1] = 255;
                    } else {
                        rgb[cur + 2] = 255;
                    }
                } else {
                    if (x < width / 2) {
                        rgb[cur + 1] = 255;
                        rgb[cur + 2] = 255;
                    } else {
                        rgb[cur + 0] = 255;
                        rgb[cur + 1] = 255;
                        rgb[cur + 2] = 255;
                    }
                }
            }
        }
    }
    return rgb;
}

/* Allocate resources and write header data to the output file. */
void ffmpeg_encoder_start(const char *filename, int codec_id, int fps, int width, int height) {
    AVCodec *codec;
    int ret;
    codec = avcodec_find_encoder(codec_id);
    if (!codec) {
        fprintf(stderr, "Codec not found\n");
        exit(1);
    }
    c = avcodec_alloc_context3(codec);
    if (!c) {
        fprintf(stderr, "Could not allocate video codec context\n");
        exit(1);
    }
    c->bit_rate = 400000;
    c->width = width;
    c->height = height;
    c->time_base.num = 1;
    c->time_base.den = fps;
    c->gop_size = 10;
    c->max_b_frames = 1;
    c->pix_fmt = AV_PIX_FMT_YUV420P;
    if (codec_id == AV_CODEC_ID_H264)
        av_opt_set(c->priv_data, "preset", "slow", 0);
    if (avcodec_open2(c, codec, NULL) < 0) {
        fprintf(stderr, "Could not open codec\n");
        exit(1);
    }
    file = fopen(filename, "wb");
    if (!file) {
        fprintf(stderr, "Could not open %s\n", filename);
        exit(1);
    }
    frame = av_frame_alloc();
    if (!frame) {
        fprintf(stderr, "Could not allocate video frame\n");
        exit(1);
    }
    frame->format = c->pix_fmt;
    frame->width  = c->width;
    frame->height = c->height;
    ret = av_image_alloc(frame->data, frame->linesize, c->width, c->height, c->pix_fmt, 32);
    if (ret < 0) {
        fprintf(stderr, "Could not allocate raw picture buffer\n");
        exit(1);
    }
}

/* Write trailing data to the output file and free the resources allocated
by ffmpeg_encoder_start. */
void ffmpeg_encoder_finish(void) {
    uint8_t endcode[] = { 0, 0, 1, 0xb7 };
    int got_output, ret;
    /* Flush the frames still buffered inside the encoder. */
    do {
        fflush(stdout);
        ret = avcodec_encode_video2(c, &pkt, NULL, &got_output);
        if (ret < 0) {
            fprintf(stderr, "Error encoding frame\n");
            exit(1);
        }
        if (got_output) {
            fwrite(pkt.data, 1, pkt.size, file);
            av_free_packet(&pkt);
        }
    } while (got_output);
    fwrite(endcode, 1, sizeof(endcode), file);
    fclose(file);
    avcodec_close(c);
    av_free(c);
    av_freep(&frame->data[0]);
    av_frame_free(&frame);
}

/* Encode one frame from an RGB24 input and save it to the output file.
Must be called after ffmpeg_encoder_start, and ffmpeg_encoder_finish must be
called after the last call to this function. */
void ffmpeg_encoder_encode_frame(uint8_t *rgb) {
    int ret, got_output;
    ffmpeg_encoder_set_frame_yuv_from_rgb(rgb);
    av_init_packet(&pkt);
    pkt.data = NULL;
    pkt.size = 0;
    ret = avcodec_encode_video2(c, &pkt, frame, &got_output);
    if (ret < 0) {
        fprintf(stderr, "Error encoding frame\n");
        exit(1);
    }
    if (got_output) {
        fwrite(pkt.data, 1, pkt.size, file);
        av_free_packet(&pkt);
    }
}

/* Represents the main loop of an application that produces one RGB frame per
iteration and hands it to the encoder. Resolution, frame rate and frame count
below are just example values. */
static void encode_example(const char *filename, int codec_id) {
    int pts;
    int width = 320;
    int height = 240;
    uint8_t *rgb = NULL;
    ffmpeg_encoder_start(filename, codec_id, 25, width, height);
    for (pts = 0; pts < 100; pts++) {
        frame->pts = pts;
        rgb = generate_rgb(width, height, pts, rgb);
        ffmpeg_encoder_encode_frame(rgb);
    }
    ffmpeg_encoder_finish();
    free(rgb);
}

int main(void) {
    avcodec_register_all();
    encode_example("tmp.h264", AV_CODEC_ID_H264);
    encode_example("tmp.mpg", AV_CODEC_ID_MPEG1VIDEO);
    /* TODO: is this encoded correctly? Possible to view it without container? */
    /*encode_example("tmp.vp8", AV_CODEC_ID_VP8);*/
    return 0;
}

Tested on Ubuntu 15.10. Code on GitHub.


