avcodec

ffmpeg avcodec_encode_video2 hangs when using Quick Sync h264_qsv encoder

Submitted by 被刻印的时光 ゝ on 2020-06-14 06:33:47
Question: When I use the mpeg4 or h264 encoders, I can successfully encode images into a valid AVI file using the API for ffmpeg 3.1.0. However, when I use the Quick Sync encoder (h264_qsv), avcodec_encode_video2 hangs some of the time. I found that with 1920x1080 images it was rare for avcodec_encode_video2 to hang, while with 256x256 images it was very likely to hang. I have created the test code below that demonstrates the hang of avcodec
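
For context, a minimal sketch of the avcodec_encode_video2() call the question is about, under ffmpeg 3.1's API; the encode_one() helper and the file output are illustrative, not part of the question:

    #include <stdio.h>
    #include <libavcodec/avcodec.h>

    /* Encode one frame (or flush with frame == NULL) and write any
     * produced packet. Assumes ctx was already opened for "h264_qsv". */
    static int encode_one(AVCodecContext *ctx, AVFrame *frame, FILE *out)
    {
        AVPacket pkt;
        int got_packet = 0;

        av_init_packet(&pkt);
        pkt.data = NULL;   /* let the encoder allocate the packet buffer */
        pkt.size = 0;

        int ret = avcodec_encode_video2(ctx, &pkt, frame, &got_packet);
        if (ret < 0)
            return ret;    /* errors, not hangs, are reported here */

        if (got_packet) {
            fwrite(pkt.data, 1, pkt.size, out);
            av_packet_unref(&pkt);
        }
        return 0;
    }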

How to fill audio AVFrame (ffmpeg) with the data obtained from CMSampleBufferRef (AVFoundation)?

Submitted by 那年仲夏 on 2020-01-13 09:44:07
Question: I am writing a program for streaming live audio and video from a webcam to an rtmp-server. I work on Mac OS X 10.8, so I use the AVFoundation framework to obtain audio and video frames from input devices. These frames come into the delegate: -(void) captureOutput:(AVCaptureOutput*)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection*)connection , where sampleBuffer contains audio or video data. When I receive audio data in the sampleBuffer, I'm trying to convert this data into an AVFrame and encode the AVFrame with libavcodec: aframe = avcodec_alloc_frame();
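
A sketch of the bridging step, assuming interleaved signed 16-bit PCM; the helper name and the S16 sample format are assumptions, not from the question:

    #include <CoreMedia/CoreMedia.h>
    #include <libavcodec/avcodec.h>

    /* Sketch: wrap the PCM bytes of a CMSampleBuffer in an AVFrame without
     * copying. The caller must CFRelease(*blockOut) only after the frame
     * has been encoded, because the frame's data planes point into it. */
    static AVFrame *frame_from_samplebuffer(CMSampleBufferRef sampleBuffer,
                                            int channels,
                                            CMBlockBufferRef *blockOut)
    {
        AudioBufferList abl;

        /* Get direct access to the raw audio held by the sample buffer. */
        if (CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
                sampleBuffer, NULL, &abl, sizeof(abl),
                NULL, NULL, 0, blockOut) != noErr)
            return NULL;

        AVFrame *aframe = av_frame_alloc();  /* avcodec_alloc_frame() on old ffmpeg */
        aframe->nb_samples = (int)CMSampleBufferGetNumSamples(sampleBuffer);
        aframe->format     = AV_SAMPLE_FMT_S16;
        aframe->channels   = channels;

        /* Point the frame's data planes at the PCM bytes (no copy). */
        avcodec_fill_audio_frame(aframe, channels, AV_SAMPLE_FMT_S16,
                                 abl.mBuffers[0].mData,
                                 (int)abl.mBuffers[0].mDataByteSize, 0);
        return aframe;
    }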

ffmpeg::avcodec_encode_video setting PTS h264

Submitted by 江枫思渺然 on 2019-12-18 12:26:40
Question: I'm trying to encode video as H264 using libavcodec. ffmpeg::avcodec_encode_video(codec,output,size,avframe); returns an error that I don't have the avframe->pts value set correctly. I have tried setting it to 0, 1, AV_NOPTS_VALUE, and 90kHz * framenumber, but I still get the error non-strictly-monotonic PTS. The ffmpeg.c example sets packet.pts with ffmpeg::av_rescale_q(), but this is only called after you have encoded the frame! When used with the MP4V codec, avcodec_encode_video() sets the
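
For what it's worth, a common pattern (a sketch, not from the question) is to set the encoder's time_base to 1/fps when opening it and give each input frame one tick, which keeps the PTS strictly monotonic before the encode call:

    #include <libavcodec/avcodec.h>

    /* Sketch: encode one frame with a strictly increasing PTS, using the
     * old avcodec_encode_video() API from the question. Assumes
     * ctx->time_base was set to (AVRational){1, fps} before opening. */
    static int encode_with_pts(AVCodecContext *ctx, uint8_t *output,
                               int size, AVFrame *avframe,
                               int64_t *frame_index)
    {
        avframe->pts = (*frame_index)++;  /* 0, 1, 2, ... in codec time base */
        return avcodec_encode_video(ctx, output, size, avframe);
    }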

GOP size for realtime video stream

Submitted by ◇◆丶佛笑我妖孽 on 2019-12-13 16:49:29
Question: I'm working on a kind of rich remote desktop system, with a video stream of the desktop encoded using avcodec/x264. I have to set the GOP size for the stream manually, and so far I have been using a size of fps/2. But I've just read the following on Wikipedia: This structure [Group Of Pictures] suggests a problem because the fourth frame (a P-frame) is needed in order to predict the second and the third (B-frames). So we need to transmit the P-frame before the B-frames and it will delay the
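
Separately from GOP size, the B-frame delay the Wikipedia passage describes can be removed outright for a realtime stream; a sketch of a low-latency x264 setup (the specific values are assumptions, not from the question):

    #include <libavcodec/avcodec.h>
    #include <libavutil/opt.h>

    /* Sketch: low-latency configuration for the x264 encoder. */
    static void configure_low_latency(AVCodecContext *ctx, int fps)
    {
        ctx->time_base    = (AVRational){1, fps};
        ctx->gop_size     = fps / 2;   /* keyframe every half second, as in the question */
        ctx->max_b_frames = 0;         /* no B-frames: no frame waits on a later one */

        /* x264's zerolatency tune also disables B-frames and lookahead. */
        av_opt_set(ctx->priv_data, "tune", "zerolatency", 0);
    }

With max_b_frames at 0, every frame can be sent as soon as it is encoded, so the reordering delay the quoted passage worries about never arises.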

avcodec YUV to RGB

Submitted by 本秂侑毒 on 2019-12-04 01:36:05
Question: I'm trying to convert a YUV frame to RGB using libswscale. Here is my code:

    AVFrame *RGBFrame;
    SwsContext *ConversionContext;
    ConversionContext = sws_getCachedContext(NULL, FrameWidth, FrameHeight, AV_PIX_FMT_YUV420P,
                                             FrameWidth, FrameHeight, AV_PIX_FMT_RGB24,
                                             SWS_BILINEAR, 0, 0, 0);
    RGBFrame = av_frame_alloc();
    avpicture_fill((AVPicture *)RGBFrame, &FillVect[0], AV_PIX_FMT_RGB24, FrameWidth, FrameHeight);
    sws_scale(ConversionContext, VideoFrame->data, VideoFrame->linesize, 0,
              VideoFrame->height, RGBFrame->data, RGBFrame->linesize);

My program segfaults in the sws_scale function.
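
A frequent cause of that segfault, consistent with the snippet though not confirmed by the excerpt, is FillVect being smaller than the RGB24 image, so sws_scale writes past the end of the destination buffer. A sketch that sizes the buffer correctly (names follow the question):

    #include <libavcodec/avcodec.h>   /* avpicture_get_size, avpicture_fill */
    #include <libavutil/mem.h>

    /* Sketch: allocate exactly the number of bytes sws_scale will write
     * for RGB24 output, then attach the buffer to the destination frame. */
    static AVFrame *make_rgb_frame(int FrameWidth, int FrameHeight)
    {
        /* FrameWidth * FrameHeight * 3 bytes for AV_PIX_FMT_RGB24 */
        int rgb_size = avpicture_get_size(AV_PIX_FMT_RGB24, FrameWidth, FrameHeight);
        uint8_t *buffer = av_malloc(rgb_size);

        AVFrame *RGBFrame = av_frame_alloc();
        avpicture_fill((AVPicture *)RGBFrame, buffer, AV_PIX_FMT_RGB24,
                       FrameWidth, FrameHeight);
        return RGBFrame;
    }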

Understanding PTS and DTS in video frames

Submitted by 痴心易碎 on 2019-11-28 15:55:14
Question: I had fps issues when transcoding from avi to mp4 (x264). Eventually the problem was in the PTS and DTS values, so lines 12-15 were added before the av_interleaved_write_frame function:

1. AVFormatContext* outContainer = NULL;
2. avformat_alloc_output_context2(&outContainer, NULL, "mp4", "c:\\test.mp4");
3. AVCodec *encoder = avcodec_find_encoder(AV_CODEC_ID_H264);
4. AVStream *outStream = avformat_new_stream(outContainer, encoder);
5. // outStream->codec initiation
6. // ...
7. avformat_write_header(outContainer, NULL);
8. // reading and decoding packet
9. // ...
10. avcodec_encode_video2(outStream-
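
The excerpt is cut off before the added lines 12-15, but the standard fix for this symptom is to rescale each packet's timestamps from the encoder's time base to the output stream's time base before muxing; a sketch (not the poster's exact lines):

    #include <libavformat/avformat.h>
    #include <libavutil/mathematics.h>

    /* Sketch: convert pkt's pts/dts into the stream time base before
     * av_interleaved_write_frame(). enc is the encoder's codec context. */
    static int rescale_and_write(AVFormatContext *outContainer,
                                 AVStream *outStream,
                                 AVCodecContext *enc, AVPacket *pkt)
    {
        pkt->pts = av_rescale_q(pkt->pts, enc->time_base, outStream->time_base);
        pkt->dts = av_rescale_q(pkt->dts, enc->time_base, outStream->time_base);
        pkt->stream_index = outStream->index;
        return av_interleaved_write_frame(outContainer, pkt);
    }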
