ffmpeg

Cannot Play Video Output of Libavcodec (ffmpeg) Encoding Example

独自空忆成欢 submitted on 2021-01-29 08:40:34
Question: From FFmpeg's GitHub repository, I use encode_video.c to generate a 1-second video. Here is the example in question: https://github.com/FFmpeg/FFmpeg/blob/master/doc/examples/encode_video.c I compile with: gcc -Wall -o ffencode encode_video.c -lavcodec -lavutil -lz -lm. It compiles cleanly with zero warnings. I test the program by running: ./ffencode video.mp4 libx264. Lots of stats are printed out (expected based on the source code) as well as ffmpeg logs, but ultimately no errors or warnings. However, then the…
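A common cause here, for what it's worth: encode_video.c writes the raw encoded packets straight to the output file with no container, so a file named video.mp4 is really an H.264 elementary stream, and players that trust the .mp4 extension refuse to play it. A minimal sketch of remuxing it into a real MP4 follows; the file names are the ones from the command above, except "wrapped.mp4", which is made up:

    # Sketch: remux the raw H.264 elementary stream written by encode_video.c
    # into a real MP4 container, without re-encoding.
    import subprocess

    def wrap_raw_h264(raw_path, mp4_path):
        # -f h264 tells ffmpeg the input is a raw Annex-B elementary stream;
        # -c copy remuxes it into the MP4 container without transcoding.
        subprocess.run(
            ["ffmpeg", "-y", "-f", "h264", "-i", raw_path, "-c", "copy", mp4_path],
            check=True,
        )

    if __name__ == "__main__":
        wrap_raw_h264("video.mp4", "wrapped.mp4")  # "video.mp4" is really a raw stream

Alternatively, ffplay will usually play the raw stream directly.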

No such file error using pydub on OSX with pycharm

孤街醉人 submitted on 2021-01-29 08:34:46
Question: My ultimate aim is to run the code snippet below on Lambda, but as I was having difficulties, I tried running it on my Mac. I get the same error running with Python 2.7 on OSX as I do when I run it on AWS Lambda. The code is:

    from pydub import AudioSegment
    import os

    def test():
        print("Starting")
        files = [f for f in os.listdir('.') if os.path.isfile(f)]
        for f in files:
            print(f)
        sound = AudioSegment.from_mp3("test.mp3")

    test()

The output of the code from PyCharm is: Starting ffmpeg .DS_Store…
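One thing worth checking, offered as a sketch rather than a diagnosis: pydub shells out to an ffmpeg binary, and a "No such file or directory" error usually means that binary is not on the PATH the process sees (PyCharm and Lambda both tend to run with a stripped-down PATH). Pointing pydub at an explicit path sidesteps that; the path below is an assumption, so substitute whatever "which ffmpeg" reports:

    # Minimal sketch: tell pydub exactly where ffmpeg lives instead of relying
    # on PATH.
    from pydub import AudioSegment

    AudioSegment.converter = "/usr/local/bin/ffmpeg"  # assumed install location

    sound = AudioSegment.from_mp3("test.mp3")
    print(len(sound), "ms of audio loaded")

On Lambda, the ffmpeg binary also has to be bundled with the deployment package or a layer before any path will resolve.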

Is it possible to fetch some key-frames of a video by using the HTTP Range header

十年热恋 submitted on 2021-01-29 08:29:16
Question: I've read the SO question, and it doesn't seem to apply to my specific case. Is it possible to fetch some key frames of a video from a web server using the HTTP Range header? For example, for a video of 30 seconds duration, we'd like to analyze the I-frames around 00:00:02, 00:00:15, and 00:00:28. I need to analyze videos from an internal web server to detect whether specific watermarks are present in them, plus some other analysis. Since the first I-frame might sometimes be invalid (a logo, for example), we were planning…
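For the transport side alone, here is a rough sketch of a ranged request with the requests library. It assumes the server honours Range headers, and that you already know, or will parse from the container index (for MP4, the moov atom), which byte offsets hold the key frames near the target timestamps. The URL and offsets are placeholders:

    # Sketch: fetch only a byte window of the remote file via HTTP Range.
    import requests

    URL = "https://internal.example.com/video.mp4"  # placeholder URL

    def fetch_range(url, start, length):
        headers = {"Range": f"bytes={start}-{start + length - 1}"}
        resp = requests.get(url, headers=headers, timeout=30)
        resp.raise_for_status()  # a compliant server answers 206 Partial Content
        return resp.content

    chunk = fetch_range(URL, start=1_500_000, length=512 * 1024)  # made-up offsets
    print(len(chunk), "bytes fetched")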

PHP + FFMPEG + S3. Transcode video directly between S3 buckets

眉间皱痕 submitted on 2021-01-29 08:20:39
Question: I have: an S3 INPUT bucket, an S3 OUTPUT bucket, PHP, and ffmpeg. Is it possible to read a file directly from the INPUT bucket, transcode it to another format, and save it into the OUTPUT bucket? Please point me to manuals, libraries, frameworks, or anything that helps me understand how to do it. Python implementations are also welcome, or even some other language. The input file size may be more than 10 GB, so writing the whole file into RAM is undesirable; some chunk-based approach is preferable. The output format is…
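Since Python answers are welcome, one chunk-friendly pattern is to let ffmpeg read a presigned GET URL and pipe its output into a boto3 multipart upload. This is a sketch under two assumptions: the ffmpeg build has HTTPS support, and a streamable output format is acceptable (MPEG-TS here, because the plain MP4 muxer wants a seekable output). Bucket and key names are placeholders:

    # Sketch: S3 -> ffmpeg -> S3 without buffering the whole file in RAM.
    import subprocess
    import boto3

    s3 = boto3.client("s3")

    # Presigned URL so ffmpeg can read the source over HTTP.
    src_url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "input-bucket", "Key": "source.mov"},
        ExpiresIn=3600,
    )

    # ffmpeg writes MPEG-TS to stdout; nothing is written to local disk.
    proc = subprocess.Popen(
        ["ffmpeg", "-i", src_url,
         "-c:v", "libx264", "-c:a", "aac",
         "-f", "mpegts", "pipe:1"],
        stdout=subprocess.PIPE,
    )

    # upload_fileobj reads the pipe in chunks and performs a multipart upload.
    s3.upload_fileobj(proc.stdout, "output-bucket", "transcoded.ts")
    proc.wait()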

How can I send a virtual camera to Genymotion or Android Studio Emulator in Ubuntu?

瘦欲@ submitted on 2021-01-29 08:07:16
Question: I created a virtual camera using v4l2loopback and ffmpeg. The command I use for ffmpeg is: ffmpeg -re -loop 1 -i vin.png -vf format=yuv420p -f v4l2 /dev/video2. vin.png is the image I want to stream to the webcam, and /dev/video2 is the virtual webcam I created with v4l2loopback. The virtual webcam works and I can see it, e.g. with onlinemicetest.com/webcam-test. I'm using the Genymotion emulator with the newest Android API (I tried 7.0, 8.1 and 10.0) on Ubuntu 20.04. Genymotion detects the…
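For scripting the setup, the feed command from the question can be wrapped like this; nothing is changed, and /dev/video2 and vin.png are the question's own values:

    # Sketch: loop a still image into the v4l2loopback device so it shows up
    # as a webcam (same command as in the question, just run from Python).
    import subprocess

    subprocess.run(
        ["ffmpeg", "-re", "-loop", "1", "-i", "vin.png",
         "-vf", "format=yuv420p",
         "-f", "v4l2", "/dev/video2"],
        check=True,
    )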

ffmpeg: combining/ordering vidstab and crop filters

北战南征 submitted on 2021-01-29 07:48:16
Question: I have a workflow which essentially takes a raw video file, crops away the portions of the frame that aren't relevant, then performs a two-pass deshake using the vidstab filter. At the moment I'm running this as three distinct commands: one to do the crop, a second to do the vidstab "detect" pass, and a third to do the vidstab "transform" pass. My working script:

    # do the crop first and strip the audio
    nice -20 ffmpeg -hide_banner -ss $SEEK -i $INFILE -t $DURATION -preset veryfast -crf 12…
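One way to collapse this to two commands, sketched with placeholder crop geometry and default vidstab options: chain the crop with vidstabdetect in the first pass and with vidstabtransform in the second, keeping the crop identical in both passes so the motion analysis matches the frames that are finally encoded:

    # Sketch: crop plus two-pass vidstab in two ffmpeg invocations.
    import subprocess

    INFILE, OUTFILE = "raw.mp4", "stabilized.mp4"
    CROP = "crop=1280:720:0:0"  # placeholder geometry

    # Pass 1: crop, then analyse motion; the video output is discarded.
    subprocess.run(
        ["ffmpeg", "-hide_banner", "-i", INFILE,
         "-vf", f"{CROP},vidstabdetect=result=transforms.trf",
         "-an", "-f", "null", "-"],
        check=True,
    )

    # Pass 2: identical crop, then apply the recorded transforms and encode.
    subprocess.run(
        ["ffmpeg", "-hide_banner", "-i", INFILE,
         "-vf", f"{CROP},vidstabtransform=input=transforms.trf",
         "-an", "-preset", "veryfast", "-crf", "12", OUTFILE],
        check=True,
    )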

Given a constant frame rate H.264 MP4, how to get PTS rounding method used during encoding using ffmpeg?

ぃ、小莉子 submitted on 2021-01-29 07:39:13
Question: ffmpeg can convert a video to a specified constant frame rate. It has a round parameter to specify the timestamp (PTS) rounding method, which can be zero, inf, down, up, or near. Given a constant frame rate H.264 MP4 video, how can we determine which PTS rounding method was used to encode it? Source: https://stackoverflow.com/questions/63956476/given-a-constant-frame-rate-h-264-mp4-how-to-get-pts-rounding-method-used-durin
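As far as I know, nothing reports this directly, so here is a heuristic sketch only: dump the video packets' PTS with ffprobe and compare them with the exact values n / (fps * time_base). If every exact value is already an integer, the rounding methods are indistinguishable; otherwise, consistently negative residuals point at down, consistently positive ones at up, and small mixed-sign ones at near. The frame rate and time base below are example values and would normally be probed as well:

    # Heuristic sketch, not a definitive detector of the rounding method.
    import subprocess
    from fractions import Fraction

    def probe_pts(path):
        out = subprocess.run(
            ["ffprobe", "-v", "error", "-select_streams", "v:0",
             "-show_entries", "packet=pts", "-of", "csv=p=0", path],
            capture_output=True, text=True, check=True,
        ).stdout
        vals = [int(s) for s in (line.strip() for line in out.splitlines())
                if s.lstrip("-").isdigit()]
        return sorted(vals)

    def residuals(pts, fps, time_base):
        step = 1 / (fps * time_base)  # ideal PTS ticks per frame
        return [p - float(n * step) for n, p in enumerate(pts)]

    pts = probe_pts("input.mp4")
    print(residuals(pts, fps=Fraction(30000, 1001), time_base=Fraction(1, 90000))[:10])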

Clipping audio/video with FFmpeg produces audio artifacts

末鹿安然 submitted on 2021-01-29 07:34:13
Question: I am using FFmpeg, via the fluent-ffmpeg node.js library, to take in an MPEG-TS stream with H.264 video and AAC audio and extract specific segments with a filtergraph to then stream out. Unfortunately, FFmpeg appears to be adding priming packets to the AAC stream at the beginning of every retained clip, which results in noticeable audio artifacts on playback. Is there any way to turn that behavior off? Source: https://stackoverflow.com/questions/64917498/clipping-audio-video-with-ffmpeg-produces
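Not a switch for the existing filtergraph setup, but one way to avoid the problem entirely, sketched with placeholder file names and clip bounds: cut the retained segments with stream copy, so the AAC is never re-encoded and no new priming samples are inserted. The trade-off is that cuts snap to key frames and no filtergraph can be applied:

    # Sketch: extract a clip without re-encoding audio or video.
    import subprocess

    def copy_clip(src, start, duration, dst):
        subprocess.run(
            ["ffmpeg", "-hide_banner", "-ss", start, "-i", src,
             "-t", duration, "-c", "copy", dst],
            check=True,
        )

    copy_clip("input.ts", "00:01:00", "30", "clip1.ts")  # made-up clip bounds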
