libav

Calculate PTS before frame encoding in FFmpeg

可紊 · Submitted on 2019-12-18 11:08:45
Question: How do I calculate the correct PTS value for a frame before encoding with the FFmpeg C API? For encoding I'm using the function avcodec_encode_video2 and then writing the result with av_interleaved_write_frame. I found some formulas, but none of them works. In the doxygen example they use:

    frame->pts = 0;
    for (;;) {
        // encode & write frame
        // ...
        frame->pts += av_rescale_q(1, video_st->codec->time_base, video_st->time_base);
    }

This blog says that the formula must be: (1 / FPS) * sample rate * frame number
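The doxygen approach from the question can be modelled numerically. Below is a minimal Python sketch of what av_rescale_q(1, codec_time_base, stream_time_base) computes; the 25 fps codec time base (1/25) and MPEG-TS-style stream time base (1/90000) are illustrative assumptions, not values from the question:

```python
from fractions import Fraction

def rescale_q(a, bq, cq):
    # Simplified model of av_rescale_q: rescale a from time base bq
    # to time base cq, i.e. a * bq / cq, rounded to an integer.
    return round(a * bq / cq)

codec_tb = Fraction(1, 25)      # codec time_base: 1/fps for 25 fps
stream_tb = Fraction(1, 90000)  # MPEG-TS style stream time_base

step = rescale_q(1, codec_tb, stream_tb)  # ticks per frame in stream units
pts = 0
for frame_number in range(3):
    # frame->pts would be set here, before avcodec_encode_video2()
    pts += step
```

With these time bases each frame advances the PTS by 90000 / 25 = 3600 stream ticks, which is exactly what the doxygen loop accumulates.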

iPhone SDK 4.3 libav compiling problem

风格不统一 · Submitted on 2019-12-18 10:31:09
Question: I'm facing a strange problem. I installed iPhone SDK 4.3 and Xcode 4, and now I can't compile libav from ffmpeg for the ARMv6 architecture. This is my script to compile it (it works fine with iPhone SDK 4.2):

    ./configure \
        --disable-doc --disable-ffmpeg --disable-ffplay --disable-ffserver --enable-cross-compile \
        --enable-encoder=rawvideo \
        --enable-decoder=h264 \
        --enable-decoder=mpeg4 \
        --enable-encoder=mjpeg \
        --enable-muxer=rawvideo \
        --enable-demuxer=h264 \
        --enable-parser=h264 \
        --enable

FFmpeg transcoding on Lambda results in unusable (static) audio

余生长醉 · Submitted on 2019-12-18 04:23:35
Question: I'd like to move toward serverless for audio transcoding routines in AWS. I've been trying to set up a Lambda function to do just that: execute a static FFmpeg binary and re-upload the resulting audio file. The static binary I'm using is here. The Lambda function I'm using, in Python, looks like this:

    import boto3
    s3client = boto3.client('s3')
    s3resource = boto3.client('s3')
    import json
    import subprocess
    from io import BytesIO
    import os
    os.system("cp -ra ./bin/ffmpeg /tmp/")
    os.system("chmod -R
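The copy-to-/tmp and invoke steps the question describes can be sketched as below, using subprocess instead of os.system so a failing transcode raises instead of silently producing garbage. The paths, handler name, and the sample-rate/channel flags are illustrative assumptions, not taken from the question:

```python
import os
import shutil
import subprocess

def ffmpeg_cmd(src, dst, rate=44100, channels=2):
    # Build an explicit transcode command; pinning the sample rate and
    # channel count rules out one common cause of static-only output.
    return ["/tmp/ffmpeg", "-y", "-i", src,
            "-ar", str(rate), "-ac", str(channels), dst]

def handler(event, context):
    # Lambda only allows writes under /tmp, so the bundled static
    # binary must be copied there and marked executable first.
    if not os.path.exists("/tmp/ffmpeg"):
        shutil.copy("./bin/ffmpeg", "/tmp/ffmpeg")
        os.chmod("/tmp/ffmpeg", 0o755)
    subprocess.check_call(ffmpeg_cmd("/tmp/in.mp3", "/tmp/out.mp3"))
```

Writing the output to a concrete file under /tmp (rather than piping bytes through memory) also makes it easy to inspect the intermediate result when the uploaded audio is unusable.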

FFmpeg: building example C codes

不打扰是莪最后的温柔 · Submitted on 2019-12-17 21:34:26
Question: I have configured and compiled the FFmpeg library using this guide: https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu Now I am trying to build the example C programs provided by FFmpeg here: https://github.com/FFmpeg/FFmpeg/tree/master/doc/examples However, when I run make install-examples or make install (as suggested by /example/README), I receive this kind of message:

    make: *** No rule to make target '/doc/examples/README', needed by 'install-examples'.  Stop.

I thought this may be due to the

What are the differences and similarities between ffmpeg, libav, and avconv?

社会主义新天地 · Submitted on 2019-12-16 19:56:52
Question: When I run ffmpeg on Ubuntu, it shows:

    $ ffmpeg
    ffmpeg version v0.8, Copyright (c) 2000-2011 the Libav developers
      built on Feb 28 2012 13:27:36 with gcc 4.6.1
    This program is not developed anymore and is only provided for compatibility. Use avconv instead (see Changelog for the list of incompatible changes).

Or it shows (depending on the Ubuntu version):

    $ ffmpeg
    ffmpeg version 0.8.5-6:0.8.5-0ubuntu0.12.10.1, Copyright (c) 2000-2012 the Libav developers
      built on Jan 24 2013 14:49:20 with gcc

avconv tool executed from Java run time stops encoding after 8 minutes

与世无争的帅哥 · Submitted on 2019-12-13 05:29:08
Question: I am trying to encode a live RTMP stream coming from Flash Media Server and broadcast a low-bit-rate stream using the avconv tool from Libav. Libav is installed on Ubuntu. The encoded stream runs for only 8 minutes. The avconv tool is started from the Java runtime environment. The Java code is given below:

    public class RunnableStream implements Runnable {
        String inStream,outStream,width,height,bitRate,frameRate,fname,line,ar,audioBitRate,audioChannel;
        public RunnableStream(String fname
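A common cause of a child process "stopping" after a fixed interval is an unread stdout/stderr pipe filling up, at which point the OS blocks the child's writes and the encoder appears to hang. The usual fix for a process launched via Runtime.exec() is to drain both streams on background threads. A minimal sketch of that pattern (shown here in Python rather than Java; the run_draining helper name is illustrative):

```python
import subprocess
import threading

def _drain(stream):
    # Continuously consume child output so the OS pipe buffer never
    # fills; a full buffer blocks the child and looks like a hang.
    for _ in iter(stream.readline, b""):
        pass
    stream.close()

def run_draining(cmd):
    # Launch the command and read stdout/stderr on daemon threads,
    # then wait for the child and return its exit code.
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    for s in (proc.stdout, proc.stderr):
        threading.Thread(target=_drain, args=(s,), daemon=True).start()
    return proc.wait()
```

In Java the equivalent is two threads reading process.getInputStream() and process.getErrorStream() (or redirecting them); avconv writes progress lines to stderr continuously, so without a reader the buffer fills after a predictable amount of output.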

Video from pipe->YUV with libAV->RGB with sws_scale->Draw with Qt

孤者浪人 · Submitted on 2019-12-13 04:23:17
Question: I need to decode video from a pipe or socket, then convert it to a set of images and draw them with Qt (4.8.5!!). I'm using the default libAV example and adding what I need to it. Here is my code:

    AVCodec *codec;
    AVCodecContext *codecContext = NULL;
    int frameNumber, got_picture, len;
    FILE *f;
    AVFrame *avFrame, *avFrameYUV, *avFrameRGB;
    uint8_t inbuf[INBUF_SIZE + FF_INPUT_BUFFER_PADDING_SIZE];
    AVPacket avpkt;
    av_init_packet(&avpkt);
    f = fopen("/tmp/test.mpg", "rb");
    if (!f) {
        fprintf(stderr, "could not open

Filling CMediaType and IMediaSample from AVPacket for h264 video

谁说胖子不能爱 · Submitted on 2019-12-12 14:59:44
Question: I have searched and found almost nothing, so I would really appreciate some help with my question. I am writing a DirectShow source filter which uses libav to read H.264 packets from YouTube's FLV files and send them downstream. But I can't find the appropriate fields in libav's structures to correctly implement the filter's GetMediaType() and FillBuffer(). Some libav fields are null. As a consequence, the H.264 decoder crashes when attempting to process the received data. Where am I wrong? In working with libav or with

Read raw Genicam H.264 data to avlib

风格不统一 · Submitted on 2019-12-11 17:18:59
Question: I'm trying to get familiar with libav in order to process a raw H.264 stream from a camera that supports GenICam. I'd like to receive the raw data via the GenICam-provided interfaces (API), and then forward that data into libav to produce a container file that is then streamed to a playing device like VLC or (later) to my own display implementation. So far, I have played around with the GenICam sample code, which transfers the raw H.264 data into a "sample.h264" file. This file, I have put through
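As a first sanity check of the muxing step, the raw stream dumped to "sample.h264" can be wrapped in a container with the ffmpeg command-line tool before attempting the same thing through the libav C API. The command builder below is an illustrative sketch; the output name and frame rate are assumptions, not from the question:

```python
def mux_cmd(raw_h264, out_mp4, fps=25):
    # Tell ffmpeg the input is a raw H.264 elementary stream (-f h264),
    # supply the frame rate (raw streams carry no timing), and copy the
    # bitstream into an MP4 container without re-encoding.
    return ["ffmpeg", "-f", "h264", "-framerate", str(fps),
            "-i", raw_h264, "-c:v", "copy", out_mp4]
```

If the resulting file plays in VLC, the remaining work is reproducing the same demux/mux pipeline in C with avformat_open_input on the raw stream and av_interleaved_write_frame into the output container.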

What is the official website for libavcodec?

六月ゝ 毕业季﹏ · Submitted on 2019-12-11 09:49:07
Question: I would like to use libavcodec for a project. The problem is I don't understand where I'm supposed to get the official release; this library is so popular that I can't tell which is the official website. For example, two major projects, libav and ffmpeg, both use it, but I can't find the official source. Where is this website?

Answer 1: The official site is at ffmpeg.org. You can get the source with git or by using a release tarball. Git is recommended as it will provide the most up