ffmpeg

Bash while loop wait until task has completed

懵懂的女人 submitted on 2021-01-02 05:31:29

Question: I have a bash script that I created to process videos from within a folder and its subfolders: find . -type f -name '*.mkv' | while read file; do ffmpeg -i $file ... done The problem: instead of the while loop waiting for ffmpeg to complete, it continues to iterate through the loop. The end result is that files are not getting processed. I need a way to make the current while-loop iteration wait until ffmpeg is complete before continuing to the next, or alternatively a way to queue these items. Edit: So
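A minimal sketch of the usual fix, assuming the loop body is roughly what is shown above: ffmpeg reads from stdin by default and consumes the rest of the filenames coming through the pipe, which makes the loop appear to skip files. Passing -nostdin (or redirecting ffmpeg's stdin from /dev/null) keeps it out of the pipe; the output naming and codec choices below are illustrative assumptions.

    #!/usr/bin/env bash
    # ffmpeg would otherwise swallow the remaining filenames from the pipe;
    # -nostdin keeps it away from stdin. -print0 / read -d '' copes with
    # spaces in paths. The "${file%.mkv}.mp4" output name is an assumption.
    find . -type f -name '*.mkv' -print0 | while IFS= read -r -d '' file; do
        ffmpeg -nostdin -i "$file" -c:v libx264 -c:a aac "${file%.mkv}.mp4"
    done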

Why opencv videowriter is so slow?

扶醉桌前 submitted on 2021-01-01 09:19:07

Question: Hi Stack Overflow community, I have a tricky problem and I need your help to understand what is going on here. My program captures frames from a video grabber card (Blackmagic), which works fine so far; at the same time I display the captured images with OpenCV (cv::imshow), which also works well (but is quite CPU-hungry). The captured images are supposed to be stored on disk as well, so I push the captured frames (cv::Mat) onto a stack to finally write them asynchronously with OpenCV: cv:

Merging multiple video files with ffmpeg and xfade filter

五迷三道 submitted on 2021-01-01 07:00:30

Question: I need to merge multiple video files (with audio included) into a single video. I've noticed that xfade was released recently and have used it, but I am running into an audio sync issue. All videos are in the same format / resolution / frame rate / bitrate / etc., for both video and audio. Here is what I am using to merge 5 videos of various durations with 0.5-second crossfade transitions: ffmpeg \ -i v0.mp4 \ -i v1.mp4 \ -i v2.mp4 \ -i v3.mp4 \ -i v4.mp4 \ -filter_complex \ "[0][1]xfade=transition=fade
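The command above is cut off and shows only the video side of the filtergraph. A common cause of drifting audio is crossfading only the video streams; a minimal two-input sketch that also crossfades the audio with acrossfade is shown below. The 4.5-second offset assumes v0.mp4 is 5 seconds long, and the codec choices are likewise illustrative assumptions, not the asker's exact five-input command.

    # xfade crossfades the video while acrossfade crossfades the audio, so both
    # streams transition together. offset = length of v0 minus the 0.5 s fade;
    # 4.5 assumes a 5-second first clip and is only an example value.
    ffmpeg -i v0.mp4 -i v1.mp4 -filter_complex \
      "[0:v][1:v]xfade=transition=fade:duration=0.5:offset=4.5[v];[0:a][1:a]acrossfade=d=0.5[a]" \
      -map "[v]" -map "[a]" -c:v libx264 -c:a aac out.mp4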

Crop video into a 4x4 grid/tiles/matrix efficiently via command-line ffmpeg?

余生长醉 submitted on 2021-01-01 06:39:55

Question: Hello Stack Overflow community! I dread having to ask questions, but there seems to be no efficient way to take a single input video and apply a matrix transformation/split the video into equal-sized pieces, preferably 4x4 = 16 segments per input. I have tried libraries such as ffmpeg and mencoder, but producing 16 outputs can be as slow as 0.15x. The goal of my project is to split the video into 16 segments, rearrange those segments, and combine them back into a final video; later reversing
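One way to avoid running 16 separate crop jobs is a single ffmpeg invocation that decodes the input once and writes all 16 tiles from one filtergraph. The sketch below builds that graph in bash; the tile_R_C.mp4 naming and the default x264 encoding for .mp4 outputs are assumptions for illustration.

    #!/usr/bin/env bash
    # Split the video into one stream per tile, crop each, and map each crop to
    # its own output file, all in one pass over the input.
    IN=input.mp4
    SPLIT="[0:v]split=16"
    CHAINS=""
    MAPS=()
    i=0
    for r in 0 1 2 3; do
      for c in 0 1 2 3; do
        SPLIT+="[s${i}]"                       # one named split output per tile
        # crop=w:h:x:y, expressed relative to the input size (iw/ih)
        CHAINS+=";[s${i}]crop=iw/4:ih/4:${c}*iw/4:${r}*ih/4[t${i}]"
        MAPS+=(-map "[t${i}]" "tile_${r}_${c}.mp4")
        i=$((i + 1))
      done
    done
    ffmpeg -i "$IN" -filter_complex "${SPLIT}${CHAINS}" "${MAPS[@]}"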

using ffmpeg to replace a single frame based on timestamp

廉价感情. submitted on 2021-01-01 02:15:47

Question: Is it possible to use the ffmpeg CLI to replace a specific frame at a specified timestamp with another image? I know how to extract all frames from a video and re-stitch them into another video, but I would like to avoid that process if possible. My goal: given a video file input.mp4 and a PNG file image.png, with the frame known to occur at exactly a specific timestamp within input.mp4, create out.mp4 with image.png replacing that position of input.mp4. Answer 1: The basic command is ffmpeg -i video -i image \
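The answer is truncated after its first line. A hedged completion of the same idea is to overlay the PNG only during that one frame's time window using the overlay filter's enable option; the 12.5-second timestamp and the 0.04-second window (one frame at 25 fps) are illustrative assumptions.

    # The PNG covers the original video only while `enable` is true, i.e. for
    # the one frame starting at t = 12.5 s (example values). The video stream
    # is re-encoded; audio is copied unchanged.
    ffmpeg -i input.mp4 -i image.png -filter_complex \
      "[0:v][1:v]overlay=enable='between(t,12.5,12.54)'" \
      -c:a copy out.mp4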

FFMPEG 'Fontconfig error: Cannot load default config file' error Windows

▼魔方 西西 submitted on 2020-12-31 16:21:26

Question: I am trying to make a short video in FFmpeg where a word stays on screen for the duration of the video (0.5 s). My FFmpeg command looks like this: ffmpeg -f lavfi -i color=c=white:s=320x240:d=0.5 -vf "drawtext=fontfile= ‘c\:\Windows\fonts\calibri.ttf':fontsize=18: fontcolor=black:x=(w-text_w)/2:y=(h-text_h)/2:text='word'" output.mp4 However, I keep getting the following error (see below for the full output): Fontconfig error: Cannot load default config file [Parsed_drawtext_0 @ 000001c2918cef00]
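A hedged sketch of the commonly suggested fix: the fontfile path above contains a curly opening quote and unescaped backslashes, so drawtext cannot parse it and falls back to fontconfig, which has no default config file on Windows. Using a plain ASCII quote, forward slashes, and escaping the drive colon usually lets drawtext load the font directly; shell quoting may still need adjusting for cmd.exe vs. PowerShell.

    # Plain quotes, forward slashes, and the drive colon escaped as \: so the
    # filter parser does not treat it as an option separator.
    ffmpeg -f lavfi -i color=c=white:s=320x240:d=0.5 -vf "drawtext=fontfile='C\:/Windows/Fonts/calibri.ttf':fontsize=18:fontcolor=black:x=(w-text_w)/2:y=(h-text_h)/2:text='word'" output.mp4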

ffmpeg exit status -1094995529

故事扮演 submitted on 2020-12-30 07:38:27

Question: I'm developing an application that makes calls to ffprobe which return the unorthodox exit status -1094995529 for certain files on Windows. This exit status is given consistently, and there is some minor discussion of it. Why is this value given, and where is it documented? Can I expect this status to be different on a Unix machine, where the allowed exit statuses are more constrained? Answer 1: Error codes come from FFmpeg (error.h in libavutil): http://ffmpeg.org/doxygen/trunk/error_8h_source
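These values are FFERRTAG-encoded AVERROR codes from error.h: the negated number packs four ASCII characters. The small sketch below decodes it; for -1094995529 the tag comes out as "INDA", i.e. AVERROR_INVALIDDATA ("Invalid data found when processing input"). On Unix, wait() exposes only the low 8 bits of an exit status, so the raw value cannot appear there in the same form.

    #!/usr/bin/env bash
    # Decode an FFERRTAG-style negative AVERROR into its four-character tag.
    # The default value below is just the number from the question.
    code=${1:--1094995529}
    tag=$(( -code ))
    for s in 0 8 16 24; do
      byte=$(( (tag >> s) & 0xFF ))
      printf "\\$(printf '%03o' "$byte")"
    done
    echo    # prints "INDA" for -1094995529, i.e. AVERROR_INVALIDDATA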

Stream MP4 video successfully to RTMP with FFMPEG

一笑奈何 submitted on 2020-12-30 06:43:35

Question: I'm attempting to stream an already-recorded video file to Twitch servers using FFmpeg, but so far I only get audio, no video. I've tried several settings and different files (avi, etc.), but I still get audio only. Here are my FFmpeg settings: ffmpeg -re -i test.mp4 -vcodec libx264 -preset fast -crf 30 -acodec aac -ab 128k -ar 44100 -strict experimental -f flv rtmp://live-dfw.twitch.tv/app/"TWITCHKEY" Has anyone nailed this? I'm using ffmpeg 0.8.17-6:0.8.17-1 under Ubuntu. Answer 1: ffmpeg -re -i ~
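The answer is cut off after "ffmpeg -re -i ~". Not the accepted command verbatim, but a commonly cited baseline for pushing a recorded file to an RTMP ingest is sketched below; the bitrate, preset, and keyframe interval are illustrative assumptions, and the yuv420p pixel format plus a regular keyframe interval are the settings most often blamed when an ingest accepts audio but shows no video.

    # -pix_fmt yuv420p and a fixed keyframe interval (-g) are what RTMP ingests
    # usually expect; bitrate/preset values here are example assumptions.
    ffmpeg -re -i test.mp4 \
      -c:v libx264 -preset veryfast -pix_fmt yuv420p \
      -b:v 2500k -maxrate 2500k -bufsize 5000k -g 60 \
      -c:a aac -b:a 128k -ar 44100 \
      -f flv "rtmp://live-dfw.twitch.tv/app/TWITCHKEY"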