ffmpeg

FFmpeg Concat Filter High Memory Usage

ぃ、小莉子 submitted on 2021-01-28 05:27:53
Question: I'm using FFmpeg to join many short clips into a single long video with the concat filter. FFmpeg seems to load all clips into memory at once and quickly runs out of RAM (for 100 clips it eats over 32 GB). Is there a way to limit the memory used by the concat filter? The command I would use for 3 inputs is as follows: ffmpeg -i 0.mp4 -i 1.mp4 -i 2.mp4 -filter_complex "[0:v][0:a][1:v][1:a][2:v][2:a]concat=n=3:v=1:a=1" out.mp4 It seems to use around 200 MB per additional input, which quickly …
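A lower-memory route worth noting here is the concat demuxer, which opens inputs one at a time instead of building a filter graph over all of them at once. A minimal sketch in Python, assuming the clips share codec, resolution, and parameters so the streams can be copied without re-encoding (file names 0.mp4, 1.mp4, ... as in the question):

    import subprocess

    # Write the list file the concat demuxer reads; one clip per line.
    clips = [f"{i}.mp4" for i in range(100)]
    with open("list.txt", "w") as f:
        f.writelines(f"file '{c}'\n" for c in clips)

    # -f concat reads clips sequentially, so memory stays flat regardless of count;
    # -c copy avoids re-encoding entirely.
    subprocess.run(
        ["ffmpeg", "-f", "concat", "-safe", "0", "-i", "list.txt", "-c", "copy", "out.mp4"],
        check=True,
    )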

FFMPEG “Segmentation fault” with network stream source

♀尐吖头ヾ submitted on 2021-01-28 05:11:24
Question: I use release 4.2.2 (static) from https://johnvansickle.com/ffmpeg/. The final code will run on AWS Lambda. Goal: read a video from a URL stream and add a watermark. Link to video: https://feoval.fr/519.mp4 Link to watermark: https://feoval.fr/watermark.png ./ffmpeg -i "https://feoval.fr/519.mp4" -i "./watermark.png" -filter_complex "overlay=W-w-10:H-h-10:format=rgb" -f "mp4" -movflags "frag_keyframe+empty_moov" -pix_fmt "yuv420p" test.mp4 returns "Segmentation fault". I get the same error on my computer …
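No answer is quoted in this excerpt, but a cheap way to narrow down such a crash is to split the pipeline: fetch the remote file first, then run the same filter on a local copy. If the local run succeeds, the fault lies in the HTTPS input path of that static build. A diagnostic sketch, using the URLs from the question:

    import subprocess
    import urllib.request

    # Download the inputs so ffmpeg only deals with local files.
    urllib.request.urlretrieve("https://feoval.fr/519.mp4", "519.mp4")
    urllib.request.urlretrieve("https://feoval.fr/watermark.png", "watermark.png")

    subprocess.run(
        ["ffmpeg", "-i", "519.mp4", "-i", "watermark.png",
         "-filter_complex", "overlay=W-w-10:H-h-10",
         "-movflags", "frag_keyframe+empty_moov",
         "-pix_fmt", "yuv420p", "test.mp4"],
        check=True,
    )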

How to make OpenCV work with the FFMPEG driver

点点圈 submitted on 2021-01-28 04:06:09
Question: I have a camera on my Linux box and it is working well:

    $ ls -al /dev/video*
    crw-rw----+ 1 root video 81, 0 janv. 8 16:13 /dev/video0
    crw-rw----+ 1 root video 81, 1 janv. 8 16:13 /dev/video1
    $ groups
    adm cdrom sudo dip video plugdev lpadmin lxd sambashare docker libvirt

From Python with cv2 it works well with the default CAP_V4L2 driver:

    >>> from pathlib import Path
    >>> import cv2
    >>> print(cv2.VideoCapture(0, apiPreference=cv2.CAP_V4L2).isOpened())
    True

I would like to access it …
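The question is cut off, but two points are worth checking for the FFmpeg backend: OpenCV's FFmpeg capture expects a filename or URL rather than a device index, and it only exists if the build was compiled with FFmpeg. A short sanity-check sketch (the device path is the one from the question; the rest is an assumption about this particular build):

    import cv2

    # Confirm the installed OpenCV was built against FFmpeg at all.
    info = cv2.getBuildInformation()
    print([line for line in info.splitlines() if "FFMPEG" in line])

    # The FFmpeg backend takes a path/URL, not an integer index.
    cap = cv2.VideoCapture("/dev/video0", cv2.CAP_FFMPEG)
    print(cap.isOpened())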

Avoiding running FFmpeg in a terminal/cmd window

老子叫甜甜 submitted on 2021-01-28 03:50:00
Question: I'm using FFmpeg for a small project, so I built a basic GUI application for video editing (a screenshot was attached in the original post). Everything is working fine, but I want to avoid opening the terminal for the FFmpeg process. The terminal opens because I used os.system("FFmpeg command here"). Is there a way to drive FFmpeg entirely from code without a terminal? If you have any ideas, please let me know. For the GUI I used PyQt5 and Python. Thank you. Tried using …
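A common fix, sketched below: replace os.system() with subprocess.run(), redirect the output streams, and, on Windows (where this symptom usually appears), set CREATE_NO_WINDOW so no console is spawned. The ffmpeg command itself is a placeholder:

    import subprocess

    cmd = ["ffmpeg", "-i", "in.mp4", "-vf", "scale=1280:720", "out.mp4"]  # placeholder

    # CREATE_NO_WINDOW suppresses the console window on Windows; getattr() keeps
    # this portable, since the constant does not exist on other platforms.
    flags = getattr(subprocess, "CREATE_NO_WINDOW", 0)
    subprocess.run(cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
                   creationflags=flags, check=True)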

How do I split a video to 2 separate files for odd and even fields using ffmpeg?

旧街凉风 submitted on 2021-01-28 02:41:46
Question: I have an H.264 progressive 640x480 video. I'm trying to create two separate video files, each 640x240, one containing only the odd fields and the other only the even fields. First I converted the file to interlaced, but now I need to split it into two files. How do I do that based on the YUV format? Once I'm done, I will encode the files separately and then rejoin them into a single interlaced file. How do I do that? Answer 1: Starting with the full progressive video, you can do it in one step. ffmpeg -i in.mp4 …
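The quoted answer is truncated; one way to get the two half-height files from the source, sketched here, is ffmpeg's field filter, which keeps only the top or bottom lines and so halves 640x480 to 640x240 (which half counts as "odd" depends on your line-numbering convention):

    import subprocess

    # 'field=top' keeps lines 0, 2, 4, ... and 'field=bottom' keeps 1, 3, 5, ...
    for out_name, which in (("odd.mp4", "top"), ("even.mp4", "bottom")):
        subprocess.run(["ffmpeg", "-i", "in.mp4", "-vf", f"field={which}", out_name],
                       check=True)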

Convert WebM/H.264 to MP4/H.264 efficiently with ffmpeg.js

五迷三道 submitted on 2021-01-28 00:12:59
Question: As a follow-up to the answer here: Recording cross-platform (H.264?) videos using WebRTC MediaRecorder. How can one use ffmpeg.js to efficiently unwrap a WebM H.264 video and re-wrap it into an MP4 container? I'm looking through the docs: https://github.com/Kagami/ffmpeg.js?files=1 but I don't see (or perhaps I'm searching for the wrong terminology) any examples of the above. This operation will be performed in the browser (Chrome) prior to uploading as a Blob. I could use a web …
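The re-wrap itself is a pure stream copy, and ffmpeg.js accepts an ffmpeg-style argument list, so the flags can be validated with native ffmpeg first. A sketch of that native equivalent (file names are placeholders):

    import subprocess

    # No re-encoding: copy the H.264 stream out of the WebM container into MP4.
    subprocess.run(["ffmpeg", "-i", "in.webm", "-c", "copy", "out.mp4"], check=True)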

How to get width and height from the H264 SPS using ffmpeg

别说谁变了你拦得住时间么 submitted on 2021-01-27 17:23:09
Question: I am trying to initialize an FFmpeg H.264 codec context, filling the extradata field with the SPS NAL unit, like this:

    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>

    int main() {
        const char sps[] = {0x00, 0x00, 0x00, 0x01, 0x67, 0x42, 0x00, 0x0a,
                            0xf8, 0x41, 0xa2};
        av_register_all();
        av_log_set_level(AV_LOG_DEBUG);
        /* AV_CODEC_ID_H264 is the current name; the bare CODEC_ID_H264 alias
           was removed from newer FFmpeg releases. */
        AVCodec *const codec = avcodec_find_decoder(AV_CODEC_ID_H264);
        if (codec != NULL) {
            AVCodecContext *ctx = avcodec_alloc_context3(codec);
            ctx->debug = ~0;
            ctx->extradata …
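For reference, the dimensions can also be read straight from the SPS bits. Below is a minimal pure-Python sketch, not FFmpeg's own parser: it assumes profile_idc < 100 (so no chroma/scaling syntax), pic_order_cnt_type 0 or 2, no emulation-prevention bytes, and no frame cropping, all of which hold for the SPS in the question:

    class BitReader:
        def __init__(self, data):
            self.data, self.pos = data, 0
        def bit(self):
            b = (self.data[self.pos // 8] >> (7 - self.pos % 8)) & 1
            self.pos += 1
            return b
        def bits(self, n):
            v = 0
            for _ in range(n):
                v = (v << 1) | self.bit()
            return v
        def ue(self):  # unsigned Exp-Golomb code
            zeros = 0
            while self.bit() == 0:
                zeros += 1
            return (1 << zeros) - 1 + self.bits(zeros)

    def sps_dimensions(rbsp):  # rbsp = SPS payload after the 0x67 NAL header byte
        r = BitReader(rbsp)
        r.bits(24)               # profile_idc, constraint flags, level_idc
        r.ue()                   # seq_parameter_set_id
        r.ue()                   # log2_max_frame_num_minus4
        if r.ue() == 0:          # pic_order_cnt_type
            r.ue()               # log2_max_pic_order_cnt_lsb_minus4
        r.ue()                   # max_num_ref_frames
        r.bit()                  # gaps_in_frame_num_value_allowed_flag
        width_mbs = r.ue() + 1   # pic_width_in_mbs_minus1
        height_units = r.ue() + 1
        frame_mbs_only = r.bit()
        return width_mbs * 16, height_units * 16 * (2 - frame_mbs_only)

    # SPS from the question, minus the start code and NAL header byte:
    print(sps_dimensions(bytes([0x42, 0x00, 0x0a, 0xf8, 0x41, 0xa2])))  # (128, 96)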

ffmpeg: multiple RTSP cameras into a single stream to YouTube

北城余情 submitted on 2021-01-27 15:01:27
Question: I have two RTSP IP cameras (D-Link) and I want to combine (merge) the two streams into one video output and push it to YouTube (live streaming). My first step works, and my command is: ffmpeg -i "rtsp://xxxxxx:xxxxxx@192.168.1.164/live2.sdp" -i "rtsp://xxxxxx:xxxxxx@192.168.1.164/live2.sdp" -filter_complex " nullsrc=size=1600x448 [base]; [0:v] setpts=PTS-STARTPTS, scale=800x448 [upperleft]; [1:v] setpts=PTS-STARTPTS, scale=800x448 [upperright]; [base][upperleft] overlay=shortest=1 [base]; [base][upperright] …
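A hedged sketch of the missing second step: compose the two equal-height streams side by side (hstack is a simpler alternative to the nullsrc/overlay graph above), encode, and push over RTMP. The camera URLs and STREAM_KEY are placeholders; YouTube also expects an audio track, supplied here as anullsrc silence since the cameras' audio is not used:

    import subprocess

    cmd = [
        "ffmpeg",
        "-i", "rtsp://user:pass@camera1/live2.sdp",   # placeholder camera URLs
        "-i", "rtsp://user:pass@camera2/live2.sdp",
        "-f", "lavfi", "-i", "anullsrc",              # silent audio track
        "-filter_complex",
        "[0:v]setpts=PTS-STARTPTS,scale=800x448[l];"
        "[1:v]setpts=PTS-STARTPTS,scale=800x448[r];"
        "[l][r]hstack[v]",                            # 1600x448 combined frame
        "-map", "[v]", "-map", "2:a",
        "-c:v", "libx264", "-preset", "veryfast", "-g", "50",
        "-c:a", "aac",
        "-f", "flv", "rtmp://a.rtmp.youtube.com/live2/STREAM_KEY",
    ]
    subprocess.run(cmd, check=True)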

An RTSP Player Based on live555, Part 20: Conclusion

依然范特西╮ submitted on 2021-01-26 08:14:18
This is the first piece of software I have completed outside of work, sustained purely by interest. It took a great deal of time, and I gained a lot from it. Much of the blog content comes from first-hand practice, and some of it could even be called a "first on the web": for example, most material online covers pulling a stream with ffmpeg and then recording with ffmpeg, and I found no existing implementation of pulling with live555 and recording with ffmpeg. During development I consulted many open-source projects, such as the live555 stream-pulling portion of VLC, and I also received strong support from the streaming-media expert at my company. My thanks to all of them.

The series did not cover UDP packet loss. In my tests, with a Hikvision camera on Wi-Fi over my home 100 Mbps broadband, packets are lost when the client pulls the stream in RTP-over-UDP mode. VLC loses packets in the same setup and shows mosaic artifacts, which suggests the live555 source does little to mitigate loss. Leaving loss handling to the upper layer is quite inappropriate, though, because the upper layer cannot easily tell which packet was lost. A more reliable approach is to go into the live555 source and detect loss from the RTP sequence numbers.

That said, UDP is an unreliable transport, and media packets crossing the Internet will inevitably be dropped. The whole point of using UDP is low latency, and adding loss handling necessarily hurts that. Handling UDP loss trades mosaic artifacts for frozen frames, yet RTP over TCP on the same network already gives you freezes without mosaics, so why not use that mode directly? My view is that, absent special requirements, there is no need to handle UDP loss here. For a comparison of RTP over UDP and RTP over TCP, see: http://thompsonng …
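The sequence-number check described above is easy to sketch. A minimal example, assuming standard 16-bit RTP sequence numbers; it counts missing packets but does not distinguish loss from reordering, which a production implementation must:

    # Count packets missing between two successive RTP sequence numbers.
    def rtp_gap(prev_seq: int, seq: int) -> int:
        expected = (prev_seq + 1) & 0xFFFF   # sequence numbers wrap at 2^16
        return (seq - expected) & 0xFFFF

    assert rtp_gap(10, 11) == 0       # in order: nothing lost
    assert rtp_gap(10, 13) == 2       # packets 11 and 12 were lost
    assert rtp_gap(0xFFFF, 0) == 0    # wraparound is handled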

FFMPEG ignores bitrate

时光毁灭记忆、已成空白 submitted on 2021-01-26 04:29:27
Question: I am new to video encoding, so bear with me. I am using FFmpeg. I have an MP4 file that is 640x350 with an average bitrate of around 2000 kb/s (I think) and a file size of 80 MB. I want to convert this to an OGV file with a much lower bitrate (128 kb/s) but the same width and height. I am using the following command: ffmpeg -i input.mp4 -b:v 128k output.ogv but FFmpeg seems to ignore my bitrate option and outputs a file with a bitrate of around 600 kb/s and a file size of around 3 MB. I can do …
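The excerpt ends before any answer. If the encoder will not honour a low -b:v, one hedged alternative with the libtheora wrapper is to drive its quality scale instead of a target bitrate; whether the original behaviour is a bug or an encoder floor is not settled here:

    import subprocess

    # -q:v maps to libtheora's 0-10 quality scale; lower values give smaller files,
    # at the cost of visible compression artifacts.
    subprocess.run(
        ["ffmpeg", "-i", "input.mp4", "-c:v", "libtheora", "-q:v", "3", "output.ogv"],
        check=True,
    )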