libav

Libav build for Android [closed]

Submitted by 微笑、不失礼 on 2020-01-01 09:13:26
Question: [Closed 7 years ago: it's difficult to tell what is being asked here; the question is too ambiguous, vague, incomplete, or broad to be reasonably answered in its current form.] Has anyone succeeded in compiling Libav for Android? I am currently looking for documentation. Thanks! Answer 1: FFmpeg compiled with the Android NDK (go to this link). Android's built-in codec support is too limited, so we need FFmpeg.

Webm (VP8 / Opus) file read and write back

Submitted by 十年热恋 on 2019-12-25 16:54:37
Question: I am trying to develop a WebRTC simulator in C/C++. For media handling, I plan to use libav. I am thinking of the steps below to realize media exchange between two WebRTC simulators, say A and B:
1. Read media at A from an input WebM file using the av_read_frame API. I assume I will get the encoded media (audio/video) data, am I correct here?
2. Send the encoded media data to simulator B over a UDP socket.
3. Simulator B receives the media data on the UDP socket as RTP packets.
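Regarding step 1: yes, av_read_frame returns demuxed but still-encoded packets. A minimal read loop, sketched under the assumption that "input.webm" is a placeholder path and error handling is mostly omitted:

    #include <libavformat/avformat.h>

    // Sketch: open a WebM file and pull the encoded packets out of it.
    int read_webm_packets(void)
    {
        AVFormatContext *fmt_ctx = NULL;
        if (avformat_open_input(&fmt_ctx, "input.webm", NULL, NULL) < 0)
            return -1;
        if (avformat_find_stream_info(fmt_ctx, NULL) < 0)
            return -1;

        AVPacket *pkt = av_packet_alloc();
        while (av_read_frame(fmt_ctx, pkt) >= 0) {
            // pkt->data / pkt->size hold the encoded VP8 or Opus payload for
            // stream pkt->stream_index; this is what would be packetized
            // into RTP and sent to the peer over UDP.
            av_packet_unref(pkt);
        }

        av_packet_free(&pkt);
        avformat_close_input(&fmt_ctx);
        return 0;
    }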

What to do when last_pts > current_pts using ffmpeg libs (C++)

Submitted by ぐ巨炮叔叔 on 2019-12-25 03:17:19
Question: I'm having a hard time figuring out where to read up on this. I'm building a simple recorder to learn about this video compression universe and I'm facing some weird behavior. First I need to explain the scenario; it's very simple: every time I call av_read_frame(input_context, input_packet) I save the pts into the last_pts variable. What's bothering me is the fact that on about 10% of my calls to av_read_frame I get input_packet.pts < last_pts, resulting in an error message
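One likely explanation, offered as a general observation rather than a diagnosis of this particular recorder: av_read_frame interleaves packets from all streams and returns them roughly in decoding order, so pts can legitimately go backwards across streams and, when B-frames are present, even within a single stream; it is dts, tracked per stream, that should stay monotonic. A per-stream check might look like this sketch:

    #include <libavformat/avformat.h>

    // Sketch: track the last dts per stream instead of a single last_pts.
    // 'last_dts' is assumed to have one slot per stream, pre-filled with
    // AV_NOPTS_VALUE.
    static void check_packet_order(const AVPacket *pkt, int64_t *last_dts)
    {
        int64_t *last = &last_dts[pkt->stream_index];
        if (pkt->dts == AV_NOPTS_VALUE)
            return;
        if (*last != AV_NOPTS_VALUE && pkt->dts < *last) {
            // Genuinely non-monotonic input for this stream; a muxer
            // would reject such a packet.
        }
        *last = pkt->dts;
    }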

Error including <libavformat/avformat.h> in an FFmpeg project on a Mac using clang

Submitted by 跟風遠走 on 2019-12-24 21:42:35
Question: I'm having trouble running the remuxing.c example code. I get the following error. I have confirmed that the files can be found in /usr/local/include. I am running macOS Sierra 10.12.6.

$ cc -v playground/remuxing.c
Apple LLVM version 9.0.0 (clang-900.0.39.2)
Target: x86_64-apple-darwin16.7.0
Thread model: posix
InstalledDir: /Library/Developer/CommandLineTools/usr/bin
 "/Library/Developer/CommandLineTools/usr/bin/clang" -cc1 -triple x86_64-apple-macosx10.12.0 -Wdeprecated-objc-isa-usage
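Without the rest of the output it is hard to say whether the failure is at the include or the link stage, but the usual invocation against a /usr/local install points the compiler at the headers and libraries explicitly; a guess at the command, not taken from the question:

    cc playground/remuxing.c -I/usr/local/include -L/usr/local/lib -lavformat -lavcodec -lavutil -o remuxing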

How to disable libav autorotate display matrix

Submitted by 为君一笑 on 2019-12-23 01:58:30
Question: I have a video taken with my mobile in portrait mode. Here is the dumped info about the video:

Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'video.MOV':
  Metadata:
    major_brand     : qt
    minor_version   : 0
    compatible_brands: qt
    creation_time   : 2017-05-04 02:21:37
  Duration: 00:00:06.91, start: 0.000023, bitrate: 4700 kb/s
    Stream #0:0(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 90 kb/s (default)
    Metadata:
      creation_time   : 2017-05-04 02:21:37
      handler_name    : Core Media Data Handler
    Stream
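For context, the rotation that players auto-apply comes from a display matrix stored as per-stream side data; whether it is honored is up to the consuming application (the ffmpeg command-line tools, for instance, have a -noautorotate option to ignore it). A sketch of reading it with the API available in most releases (av_stream_get_side_data is deprecated in the very newest versions but still present):

    #include <libavformat/avformat.h>
    #include <libavutil/display.h>

    // Sketch: return the rotation (in degrees, counterclockwise) that the
    // container asks players to apply, or 0 if no display matrix is present.
    static double stream_rotation(AVStream *st)
    {
        uint8_t *sd = av_stream_get_side_data(st, AV_PKT_DATA_DISPLAYMATRIX, NULL);
        if (!sd)
            return 0.0;
        return av_display_rotation_get((const int32_t *)sd);
    }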

Muxing AVPackets into mp4 file - revisited

Submitted by 纵饮孤独 on 2019-12-23 00:29:11
Question: I'm referring to this thread here: Muxing AVPackets into mp4 file. The question over there is mainly the same as mine, and the first answer looks very promising. The source code (some kind of pseudocode) that the user pogorskiy provides seems to do exactly what I need:

AVOutputFormat *outFmt = av_guess_format("mp4", NULL, NULL);
AVFormatContext *outFmtCtx = NULL;
avformat_alloc_output_context2(&outFmtCtx, outFmt, NULL, NULL);
AVStream *outStrm = av_new_stream(outFmtCtx, 0);
AVCodec *codec =
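Note that av_new_stream was removed from libavformat long ago; a rough sketch of the same setup with the current API, assuming the codec parameters of the packets to be written are available (e.g. copied from an input stream or an encoder) and with error handling mostly omitted:

    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>

    // Sketch: create an mp4 output ready to receive already-encoded packets.
    // 'in_codecpar' and 'in_time_base' describe the source of those packets.
    static AVFormatContext *open_mp4_output(const char *filename,
                                            const AVCodecParameters *in_codecpar,
                                            AVRational in_time_base)
    {
        AVFormatContext *oc = NULL;
        if (avformat_alloc_output_context2(&oc, NULL, "mp4", filename) < 0)
            return NULL;

        AVStream *st = avformat_new_stream(oc, NULL);   // replaces av_new_stream
        if (!st)
            return NULL;
        avcodec_parameters_copy(st->codecpar, in_codecpar);
        st->time_base = in_time_base;                   // the muxer may adjust this

        if (!(oc->oformat->flags & AVFMT_NOFILE) &&
            avio_open(&oc->pb, filename, AVIO_FLAG_WRITE) < 0)
            return NULL;

        if (avformat_write_header(oc, NULL) < 0)        // after this, feed packets
            return NULL;                                // with av_interleaved_write_frame
        return oc;
    }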

H264 Video Streaming over RTMP on iOS

Submitted by 假装没事ソ on 2019-12-22 10:05:22
Question: With a bit of digging, I have found a library that extracts NAL units from an .mp4 file while it is being written. I'm attempting to packetize this information to FLV over RTMP using libavformat and libavcodec. I set up a video stream using:

-(void)setupVideoStream {
    int ret = 0;
    videoCodec = avcodec_find_decoder(STREAM_VIDEO_CODEC);
    if (videoCodec == nil) {
        NSLog(@"Could not find encoder %i", STREAM_VIDEO_CODEC);
        return;
    }
    videoStream = avformat_new_stream(oc, videoCodec);
    videoCodecContext =
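As a general libav note (not a fix for the code above): when you only mux pre-encoded H.264 you need neither a decoder nor an encoder; the output stream just needs codec parameters, and the FLV muxer expects the SPS/PPS as AVCC-style extradata rather than as packets. A plain-C sketch using the newer AVCodecParameters API, where the AVCDecoderConfigurationRecord buffer is an assumed input:

    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>
    #include <libavutil/mem.h>
    #include <string.h>

    // Sketch: add an H.264 stream to an existing FLV/RTMP output context 'oc'.
    // 'avcc' / 'avcc_size' are assumed to hold an AVCDecoderConfigurationRecord
    // built from the stream's SPS and PPS.
    static AVStream *add_h264_stream(AVFormatContext *oc, int width, int height,
                                     const uint8_t *avcc, int avcc_size)
    {
        AVStream *st = avformat_new_stream(oc, NULL);
        if (!st)
            return NULL;

        st->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
        st->codecpar->codec_id   = AV_CODEC_ID_H264;
        st->codecpar->width      = width;
        st->codecpar->height     = height;

        st->codecpar->extradata =
            av_mallocz(avcc_size + AV_INPUT_BUFFER_PADDING_SIZE);
        if (!st->codecpar->extradata)
            return NULL;
        memcpy(st->codecpar->extradata, avcc, avcc_size);
        st->codecpar->extradata_size = avcc_size;
        return st;
    }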

h264 annexb bitstream to flv mux ffmpeg library

Submitted by 你。 on 2019-12-21 05:15:40
Question: I have an IP camera which gives an H.264 Annex B bitstream through SDK calls. I want to pack this video stream into an FLV container. So far I've learned the following: I have to convert H.264 Annex B to H.264 AVCC; for this I'll have to replace the NAL start code (0x00000001) with the size of the NALU (in big-endian format). My question is: what do I do with SPS and PPS? Should I write them (av_interleaved_write_frame) as they are after replacing the start code, or do I not write these frames at all? I read
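For what it's worth: with AVCC framing the SPS and PPS normally go into the stream's extradata (the AVCDecoderConfigurationRecord) rather than being written with av_interleaved_write_frame, and the in-band NAL units get a length prefix instead of a start code. A sketch of that start-code replacement, assuming 4-byte start codes and one NAL unit per call (the helper name is made up):

    #include <stdint.h>
    #include <string.h>

    // Sketch: convert one Annex B NAL unit (prefixed with 00 00 00 01) to
    // AVCC framing, i.e. a 4-byte big-endian length followed by the payload.
    // Returns the number of bytes written to 'dst', or -1 on error.
    static int annexb_nal_to_avcc(const uint8_t *nal, int nal_size,
                                  uint8_t *dst, int dst_size)
    {
        if (nal_size < 5 || nal[0] != 0 || nal[1] != 0 || nal[2] != 0 || nal[3] != 1)
            return -1;                              // not a 4-byte start code

        int payload = nal_size - 4;                 // NAL data without start code
        if (dst_size < payload + 4)
            return -1;

        dst[0] = (uint8_t)((payload >> 24) & 0xff); // big-endian length prefix
        dst[1] = (uint8_t)((payload >> 16) & 0xff);
        dst[2] = (uint8_t)((payload >> 8) & 0xff);
        dst[3] = (uint8_t)(payload & 0xff);
        memcpy(dst + 4, nal + 4, payload);
        return payload + 4;
    }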

What's wrong with my use of timestamps/timebases for frame seeking/reading using libav (ffmpeg)?

Submitted by 不打扰是莪最后的温柔 on 2019-12-20 19:56:49
Question: I want to grab a frame from a video at a specific time using libav, for use as a thumbnail. What I'm using is the following code. It compiles and works fine (in regards to retrieving a picture at all), yet I'm having a hard time getting it to retrieve the right picture. I simply can't get my head around the far-from-clear logic behind libav's apparent use of multiple time bases per video, specifically figuring out which functions expect/return which type of time base. The docs were of
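The convention that usually resolves this: av_seek_frame and packet/frame timestamps are expressed in the owning stream's time_base, while "seconds"-style values go through AV_TIME_BASE / AV_TIME_BASE_Q, and av_rescale_q converts between the two. A minimal seeking sketch under those assumptions (no error handling; the subsequent decode-until-pts step is only hinted at):

    #include <libavformat/avformat.h>
    #include <libavutil/mathematics.h>

    // Sketch: seek 'fmt_ctx' to 'seconds' on video stream index 'vstream'.
    static int seek_to_seconds(AVFormatContext *fmt_ctx, int vstream, double seconds)
    {
        AVRational tb = fmt_ctx->streams[vstream]->time_base;
        // Convert seconds -> stream time_base units.
        int64_t ts = av_rescale_q((int64_t)(seconds * AV_TIME_BASE),
                                  AV_TIME_BASE_Q, tb);
        // Land on the keyframe at or before the requested timestamp, then
        // decode forward until a frame with pts >= ts comes out.
        return av_seek_frame(fmt_ctx, vstream, ts, AVSEEK_FLAG_BACKWARD);
    }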

How can I turn libavformat error messages off

Submitted by 感情迁移 on 2019-12-18 22:14:17
Question: By default, libavformat writes error messages to stderr, like: "Estimating duration from bitrate, this may be inaccurate". How can I turn this off? Or, better yet, pipe it to my own logging function? Edit: Redirecting stderr somewhere else is not acceptable, since I need it for other logging purposes; I just want libavformat to not write to it. Answer 1: Looking through the code, it appears you can change the behavior by writing your own callback function for av_log. From the
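A minimal sketch of the two options the answer points at, silencing the log entirely or routing it through your own callback (the logger hook here is a placeholder):

    #include <stdarg.h>
    #include <stdio.h>
    #include <libavutil/log.h>

    // Sketch: forward libav log messages to our own sink instead of stderr.
    static void my_log_callback(void *avcl, int level, const char *fmt, va_list vl)
    {
        (void)avcl;
        if (level > av_log_get_level())
            return;                         // respect the configured verbosity
        char line[1024];
        vsnprintf(line, sizeof(line), fmt, vl);
        // ... hand 'line' to your own logging facility here ...
    }

    void configure_libav_logging(void)
    {
        // To silence libav entirely:
        //     av_log_set_level(AV_LOG_QUIET);
        // To pipe messages into your own logger instead:
        av_log_set_level(AV_LOG_WARNING);
        av_log_set_callback(my_log_callback);
    }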