libavcodec

Rotating a video during encoding with ffmpeg and libav API results in half of video corrupted

Submitted by 喜欢而已 on 2020-05-17 05:50:12
Question: I'm using the C API for ffmpeg/libav to rotate a vertically filmed iPhone video during the encoding step. There are other questions asking how to do a similar thing, but they all use the CLI tool. So far I was able to figure out how to use an AVFilter to rotate the video, based on this example: https://github.com/FFmpeg/FFmpeg/blob/master/doc/examples/filtering_video.c The problem is that half of the output file is corrupt. Here is the code for my encoding logic. It's written with
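A minimal sketch of the filter-graph setup for this kind of rotation, modeled on the filtering_video.c example the question links to; dec_ctx and time_base are assumed to come from the caller's decoding code (illustrative names, not taken from the question's own code). One common cause of a half-corrupted output here is that "transpose" swaps the frame dimensions, so the encoder has to be opened with width and height exchanged, otherwise it keeps using the old frame geometry and linesizes.

    /* Hedged sketch: build a buffer -> transpose -> buffersink graph. */
    #include <stdio.h>
    #include <libavcodec/avcodec.h>
    #include <libavfilter/avfilter.h>
    #include <libavfilter/buffersrc.h>
    #include <libavfilter/buffersink.h>

    static int init_rotate_graph(AVFilterGraph **graph,
                                 AVFilterContext **src, AVFilterContext **sink,
                                 AVCodecContext *dec_ctx, AVRational time_base)
    {
        char args[256];
        AVFilterInOut *outputs = avfilter_inout_alloc();
        AVFilterInOut *inputs  = avfilter_inout_alloc();
        *graph = avfilter_graph_alloc();

        /* describe the frames the graph will receive from the decoder */
        snprintf(args, sizeof(args),
                 "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
                 dec_ctx->width, dec_ctx->height, dec_ctx->pix_fmt,
                 time_base.num, time_base.den,
                 dec_ctx->sample_aspect_ratio.num, dec_ctx->sample_aspect_ratio.den);

        avfilter_graph_create_filter(src, avfilter_get_by_name("buffer"),
                                     "in", args, NULL, *graph);
        avfilter_graph_create_filter(sink, avfilter_get_by_name("buffersink"),
                                     "out", NULL, NULL, *graph);

        outputs->name = av_strdup("in");   outputs->filter_ctx = *src;
        outputs->pad_idx = 0;              outputs->next = NULL;
        inputs->name  = av_strdup("out");  inputs->filter_ctx  = *sink;
        inputs->pad_idx = 0;               inputs->next = NULL;

        /* "transpose=clock" rotates 90 degrees clockwise; output frames are
         * height x width, so the encoder must be opened with swapped dimensions */
        int ret = avfilter_graph_parse_ptr(*graph, "transpose=clock",
                                           &inputs, &outputs, NULL);
        if (ret >= 0)
            ret = avfilter_graph_config(*graph, NULL);
        avfilter_inout_free(&inputs);
        avfilter_inout_free(&outputs);
        return ret;
    }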

ffmpeg API h264 encoded video does not play on all platforms

Submitted by 此生再无相见时 on 2020-01-22 12:56:34
Question: Edit: In the previous version I used a very old ffmpeg API; I now use the newest libraries. The problem has only changed slightly, from "Main" to "High". I am using the ffmpeg C API to create an MP4 video in C++. I want the resulting video to use the "Constrained Baseline" profile, so that it can be played on as many platforms as possible, especially mobile, but I get the "High" profile every time, even though I hard-coded the codec profile to FF_PROFILE_H264_CONSTRAINED
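For what it's worth, with libx264 the profile usually has to be passed through the encoder's private options as well, and the pixel format must stay 8-bit 4:2:0; anything else (4:2:2, 4:4:4, 10-bit) silently promotes the stream to High. A minimal sketch, assuming codec_ctx is the AVCodecContext about to be opened:

    #include <libavcodec/avcodec.h>
    #include <libavutil/opt.h>

    static int force_baseline(AVCodecContext *codec_ctx, const AVCodec *codec)
    {
        codec_ctx->profile = FF_PROFILE_H264_CONSTRAINED_BASELINE;
        codec_ctx->pix_fmt = AV_PIX_FMT_YUV420P;          /* 8-bit 4:2:0 only */
        /* the private option is what actually reaches x264 */
        av_opt_set(codec_ctx->priv_data, "profile", "baseline", 0);
        /* baseline has no B-frames; make that explicit as well */
        codec_ctx->max_b_frames = 0;
        return avcodec_open2(codec_ctx, codec, NULL);
    }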

FFMPEG with QT memory leak

Submitted by 会有一股神秘感。 on 2020-01-06 14:29:06
Question: Let me start with a code clip: QByteArray ba; ba.resize(500000); int encsize = avcodec_encode_video(context, (uint8_t*)ba.data(), 500000, frame.unownedPointer()); What I'm doing is encoding the data from frame and putting it into the buffer owned by the QByteArray. If I comment out the avcodec_encode_video line, the memory leak goes away. unownedPointer() looks like this: if (this->frame != NULL) return this->frame; this->frame = avcodec_alloc_frame(); uchar *data = this->img.bits(); frame-
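A hedged sketch of the encode loop with the current send/receive API (the question's avcodec_encode_video and avcodec_alloc_frame have long since been removed); the relevant point for a leak like this is that every packet the encoder hands back, and every frame or context allocated for it, has to be released again. Names such as enc and out are illustrative, not the question's own.

    #include <stdio.h>
    #include <libavcodec/avcodec.h>

    static int encode_one(AVCodecContext *enc, AVFrame *frame, FILE *out)
    {
        AVPacket *pkt = av_packet_alloc();
        int ret = avcodec_send_frame(enc, frame);      /* frame == NULL flushes */
        while (ret >= 0) {
            ret = avcodec_receive_packet(enc, pkt);
            if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF || ret < 0)
                break;
            fwrite(pkt->data, 1, pkt->size, out);
            av_packet_unref(pkt);                      /* without this, it leaks */
        }
        av_packet_free(&pkt);
        return (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) ? 0 : ret;
    }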

RGB-frame encoding - FFmpeg/libav

Submitted by 隐身守侯 on 2020-01-01 03:37:13
Question: I am learning video encoding and decoding in FFmpeg. I tried the code sample on this page (only the video encoding and decoding part). There the dummy image being created is in YCbCr format. How do I achieve similar encoding by creating RGB frames? I am stuck on: firstly, how to create this RGB dummy frame? Secondly, how to encode it? Which codec to use? Most of them work with YUV420p only... EDIT: I have a YCbCr encoder and decoder as given on this page. The thing is, I have RGB frame
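Most mainstream encoders indeed accept only YUV pixel formats, so the usual route is to keep producing RGB frames and convert each one with libswscale just before handing it to the encoder. A minimal sketch, assuming rgb points to a packed RGB24 buffer of the given size (the helper name is made up for illustration):

    #include <libswscale/swscale.h>
    #include <libavutil/frame.h>

    static AVFrame *rgb_to_yuv420p(const uint8_t *rgb, int width, int height)
    {
        AVFrame *yuv = av_frame_alloc();
        yuv->format = AV_PIX_FMT_YUV420P;
        yuv->width  = width;
        yuv->height = height;
        av_frame_get_buffer(yuv, 0);

        const uint8_t *src[4] = { rgb, NULL, NULL, NULL };
        int src_linesize[4]   = { 3 * width, 0, 0, 0 };   /* packed RGB24 rows */

        struct SwsContext *sws = sws_getContext(width, height, AV_PIX_FMT_RGB24,
                                                width, height, AV_PIX_FMT_YUV420P,
                                                SWS_BILINEAR, NULL, NULL, NULL);
        sws_scale(sws, src, src_linesize, 0, height, yuv->data, yuv->linesize);
        sws_freeContext(sws);
        return yuv;   /* feed this to the encoder, then av_frame_free() it */
    }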

Decoding opus using libavcodec from FFmpeg

Submitted by 删除回忆录丶 on 2019-12-23 19:23:12
Question: I am trying to decode Opus using libavcodec. I am able to do it using the libopus library alone, but I am trying to achieve the same with libavcodec, and I am trying to figure out why it's not working in my case. I have an RTP stream and am trying to decode it. The decoded result is the same as the input: the decoded frame should normally contain PCM values, but instead I'm receiving the Opus frame that I actually sent. Please help me. av_register_all(); avcodec_register_all(); AVCodec *codec; AVCodecContext *c =
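A minimal sketch of an Opus decode path with libavcodec, assuming the RTP payload has already been depacketized so that data/size is exactly one raw Opus frame (no RTP header), and assuming an FFmpeg recent enough for the AVChannelLayout API; if the decode calls fail and the surrounding code falls back to copying the input buffer, the output will of course look identical to the input, so every return value needs to be checked.

    #include <string.h>
    #include <libavcodec/avcodec.h>

    static AVCodecContext *open_opus_decoder(void)
    {
        const AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_OPUS);
        AVCodecContext *dec  = avcodec_alloc_context3(codec);
        dec->sample_rate = 48000;                      /* Opus always decodes at 48 kHz */
        av_channel_layout_default(&dec->ch_layout, 2); /* assumption: stereo stream */
        if (avcodec_open2(dec, codec, NULL) < 0) {
            avcodec_free_context(&dec);
            return NULL;
        }
        return dec;
    }

    static int decode_opus_packet(AVCodecContext *dec, const uint8_t *data,
                                  int size, AVFrame *frame)
    {
        AVPacket *pkt = av_packet_alloc();
        int ret = av_new_packet(pkt, size);
        if (ret >= 0) {
            memcpy(pkt->data, data, size);
            ret = avcodec_send_packet(dec, pkt);
            if (ret >= 0)
                ret = avcodec_receive_frame(dec, frame);  /* PCM lands in frame->data */
        }
        av_packet_free(&pkt);
        return ret;
    }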

Broken output from libavcodec/swscale, depending on resolution

Submitted by 偶尔善良 on 2019-12-23 17:37:55
Question: I am writing video conference software. I have an H.264 stream decoded with libavcodec into IYUV and then rendered into a window with VMR9 in windowless mode; I use a DirectShow graph to do so. To avoid unnecessary conversion to RGB and back (see link), I convert the IYUV video to YUY2 before passing it to VMR9, using libswscale. I noticed that with a video resolution of 848x480 the output video is broken, so I investigated further and found that for some resolutions the video is always broken. To
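One frequent cause of resolution-dependent corruption in this kind of conversion is a stride mismatch: decoders and renderers often pad each row to an alignment boundary, so the number of bytes per row is not simply width (or width*2 for YUY2). A minimal sketch, assuming the destination stride is obtained from the DirectShow/VMR9 side rather than computed from the width, and that sws was created with sws_getContext(w, h, AV_PIX_FMT_YUV420P, w, h, AV_PIX_FMT_YUYV422, ...); function and parameter names are illustrative.

    #include <libswscale/swscale.h>
    #include <libavutil/frame.h>

    static void iyuv_to_yuy2(struct SwsContext *sws, const AVFrame *src,
                             uint8_t *dst, int dst_stride /* bytes per row */)
    {
        uint8_t *dst_planes[4] = { dst, NULL, NULL, NULL };
        int dst_linesize[4]    = { dst_stride, 0, 0, 0 };  /* not src->width * 2 */

        /* source strides come from the decoder's linesizes, which may be padded */
        sws_scale(sws, (const uint8_t *const *)src->data, src->linesize,
                  0, src->height, dst_planes, dst_linesize);
    }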

How to encode resampled PCM-audio to AAC using ffmpeg-API when input pcm samples count not equal 1024

Submitted by 眉间皱痕 on 2019-12-23 09:42:46
Question: I am working on capturing and streaming audio to an RTMP server at the moment. I work under macOS (in Xcode), so for capturing audio sample buffers I use the AVFoundation framework. But for encoding and streaming I need to use the ffmpeg API and the libfaac encoder, so the output format must be AAC (to support stream playback on iOS devices). And I am faced with this problem: the audio-capturing device (in my case a Logitech camera) gives me sample buffers with 512 LPCM samples, and I can select the input sample rate from
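The usual way to bridge a 512-sample capture unit to the encoder's 1024-sample AAC frame is an AVAudioFifo: write whatever the capture callback delivers, and only drain full frame_size chunks. A minimal sketch, assuming converted/nb_samples come out of the resampler, the fifo was allocated with av_audio_fifo_alloc() in the encoder's sample format, a recent FFmpeg with the AVChannelLayout API, and a hypothetical write_packet callback standing in for the RTMP muxing step:

    #include <libavutil/audio_fifo.h>
    #include <libavcodec/avcodec.h>

    static int queue_and_encode(AVAudioFifo *fifo, AVCodecContext *enc,
                                uint8_t **converted, int nb_samples,
                                int (*write_packet)(AVPacket *pkt, void *opaque),
                                void *opaque)
    {
        av_audio_fifo_write(fifo, (void **)converted, nb_samples);

        /* only pull out complete encoder frames (1024 samples for AAC) */
        while (av_audio_fifo_size(fifo) >= enc->frame_size) {
            AVFrame *frame = av_frame_alloc();
            frame->nb_samples  = enc->frame_size;
            frame->format      = enc->sample_fmt;
            frame->sample_rate = enc->sample_rate;
            av_channel_layout_copy(&frame->ch_layout, &enc->ch_layout);
            av_frame_get_buffer(frame, 0);

            av_audio_fifo_read(fifo, (void **)frame->data, enc->frame_size);

            AVPacket *pkt = av_packet_alloc();
            if (avcodec_send_frame(enc, frame) >= 0)
                while (avcodec_receive_packet(enc, pkt) >= 0) {
                    write_packet(pkt, opaque);   /* hypothetical: mux/send to RTMP */
                    av_packet_unref(pkt);
                }
            av_packet_free(&pkt);
            av_frame_free(&frame);
        }
        return 0;
    }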

How to disable libav autorotate display matrix

Submitted by 为君一笑 on 2019-12-23 01:58:30
Question: I have a video taken from my mobile in portrait mode. Here is the dumped info about the video:

    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'video.MOV':
      Metadata:
        major_brand     : qt
        minor_version   : 0
        compatible_brands: qt
        creation_time   : 2017-05-04 02:21:37
      Duration: 00:00:06.91, start: 0.000023, bitrate: 4700 kb/s
        Stream #0:0(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 90 kb/s (default)
        Metadata:
          creation_time   : 2017-05-04 02:21:37
          handler_name    : Core Media Data Handler
        Stream
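The rotation of such a portrait clip is carried as display-matrix side data on the video stream rather than in the pixels, so at the API level the application can read it and then decide whether to honor it; this is the rough equivalent of the CLI's -noautorotate / -autorotate switch. A minimal sketch, assuming an already opened AVFormatContext and its video AVStream:

    #include <libavformat/avformat.h>
    #include <libavutil/display.h>

    static double stream_rotation_degrees(const AVStream *st)
    {
        /* the 90/180/270 degree "portrait" rotation lives in this side data */
        const uint8_t *matrix =
            av_stream_get_side_data(st, AV_PKT_DATA_DISPLAYMATRIX, NULL);
        if (!matrix)
            return 0.0;                           /* no rotation metadata present */
        return av_display_rotation_get((const int32_t *)matrix);
    }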

How to convert YUV420P image to JPEG using ffmpeg's libraries?

Submitted by 南笙酒味 on 2019-12-20 04:15:28
Question: I'm trying to convert a YUV420P image (AV_PIX_FMT_YUV420P) to a JPEG using ffmpeg's libavformat and libavcodec. This is my code so far: AVFormatContext* pFormatCtx; AVOutputFormat* fmt; AVStream* video_st; AVCodecContext* pCodecCtx; AVCodec* pCodec; uint8_t* picture_buf; AVFrame* picture; AVPacket pkt; int y_size; int got_picture=0; int size; int ret=0; FILE *in_file = NULL; //YUV source int in_w = 720, in_h = 576; //YUV's width and height const char* out_file = "encoded_pic.jpg"; //Output
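A single JPEG can be produced with the MJPEG encoder alone, without the AVFormatContext/AVStream plumbing from the code above, by writing the encoded packet straight to a file. A minimal sketch, assuming frame already holds the YUV420P picture; note that the encoder is opened with the full-range AV_PIX_FMT_YUVJ420P, which has the same memory layout as YUV420P and differs only in the range tag:

    #include <stdio.h>
    #include <libavcodec/avcodec.h>

    static int write_jpeg(AVFrame *frame, const char *path)
    {
        const AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_MJPEG);
        AVCodecContext *ctx  = avcodec_alloc_context3(codec);
        AVPacket *pkt        = av_packet_alloc();
        int ret;

        ctx->width     = frame->width;
        ctx->height    = frame->height;
        ctx->pix_fmt   = AV_PIX_FMT_YUVJ420P;      /* full-range 4:2:0 for JPEG */
        ctx->time_base = (AVRational){1, 25};
        frame->format  = AV_PIX_FMT_YUVJ420P;      /* same layout as YUV420P */

        ret = avcodec_open2(ctx, codec, NULL);
        if (ret >= 0)
            ret = avcodec_send_frame(ctx, frame);
        if (ret >= 0)
            avcodec_send_frame(ctx, NULL);         /* flush: only one picture */
        if (ret >= 0)
            ret = avcodec_receive_packet(ctx, pkt);
        if (ret >= 0) {
            FILE *f = fopen(path, "wb");
            fwrite(pkt->data, 1, pkt->size, f);
            fclose(f);
        }
        av_packet_free(&pkt);
        avcodec_free_context(&ctx);
        return ret;
    }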

FFMpeg copy streams without transcode

Submitted by 旧街凉风 on 2019-12-19 03:58:27
Question: I'm trying to copy all streams from several files into one file without transcoding the streams. Something you usually do with the ffmpeg utility by: ffmpeg -i "file_with_audio.mp4" -i "file_with_video.mp4" -c copy -shortest file_with_audio_and_video.mp4 This is the code: int ffmpegOpenInputFile(const char* filename, AVFormatContext **ic) { int ret; unsigned int i; *ic = avformat_alloc_context(); if (!(*ic)) return -1; // Couldn't allocate input context if((ret = avformat_open_input(ic, filename, NULL
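A minimal sketch of the stream-copy part, shown for a single already opened input for brevity (merging two inputs additionally needs a stream-index mapping, and -shortest has to be emulated by hand); out is assumed to have been created with avformat_alloc_output_context2 and its pb opened with avio_open. The key point is that only codec parameters and packets are copied, and no decoder or encoder is ever opened.

    #include <libavformat/avformat.h>

    static int remux_all_streams(AVFormatContext *in, AVFormatContext *out)
    {
        AVPacket *pkt = av_packet_alloc();
        int ret;

        /* 1. mirror every input stream on the output, copying codec parameters
         *    (this is the API equivalent of -c copy) */
        for (unsigned i = 0; i < in->nb_streams; i++) {
            AVStream *os = avformat_new_stream(out, NULL);
            if (!os) { av_packet_free(&pkt); return AVERROR(ENOMEM); }
            avcodec_parameters_copy(os->codecpar, in->streams[i]->codecpar);
            os->codecpar->codec_tag = 0;           /* let the muxer pick the tag */
        }

        if ((ret = avformat_write_header(out, NULL)) < 0) {
            av_packet_free(&pkt);
            return ret;
        }

        /* 2. shovel packets across, rescaling timestamps between time bases */
        while (av_read_frame(in, pkt) >= 0) {
            AVStream *is = in->streams[pkt->stream_index];
            AVStream *os = out->streams[pkt->stream_index];
            av_packet_rescale_ts(pkt, is->time_base, os->time_base);
            pkt->pos = -1;
            if (av_interleaved_write_frame(out, pkt) < 0)  /* unrefs the packet */
                break;
        }
        av_packet_free(&pkt);
        return av_write_trailer(out);
    }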