ffmpeg

FFMPEG using Google Drive API instead of Shared URL

Submitted by 折月煮酒 on 2021-02-08 08:57:33
Question: We are using FFmpeg to stream a Google Drive URL into a Node application. Is there an FFmpeg method or library we can use to stream into FFmpeg via the Google Drive API instead of the standard public shared URL? At the moment, using the URL works fine when the file is smaller than 100 MB, but with bigger files we get an error: https://drive.google.com/uc?export=download&id=fileId: Invalid data found when processing input. This is because we reach the pesky Google Drive virus-scan roadblock page. Answer 1: From your
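One way the Drive API route can be sketched: fetch the raw bytes through the Drive v3 `files` endpoint with `alt=media` (which bypasses the public-link virus-scan interstitial) and pipe them into ffmpeg's stdin. This is a hedged sketch, not the answer's code; `FILE_ID`, `ACCESS_TOKEN`, and the output path are placeholders you would supply, and obtaining the OAuth token is out of scope here.

```python
import subprocess
import urllib.request

DRIVE_API = "https://www.googleapis.com/drive/v3/files"

def drive_media_request(file_id: str, access_token: str) -> urllib.request.Request:
    """Build an authenticated request for the raw file bytes (alt=media)."""
    url = f"{DRIVE_API}/{file_id}?alt=media"
    return urllib.request.Request(
        url, headers={"Authorization": f"Bearer {access_token}"}
    )

def stream_to_ffmpeg(file_id: str, access_token: str, out_path: str) -> None:
    # ffmpeg reads the download from stdin ("-i pipe:0"), so the shared-URL
    # interstitial page for large files never enters the picture.
    req = drive_media_request(file_id, access_token)
    proc = subprocess.Popen(
        ["ffmpeg", "-i", "pipe:0", "-c", "copy", out_path],
        stdin=subprocess.PIPE,
    )
    with urllib.request.urlopen(req) as resp:
        while chunk := resp.read(64 * 1024):
            proc.stdin.write(chunk)
    proc.stdin.close()
    proc.wait()
```

`-c copy` assumes the container survives remuxing from a pipe; for formats that need seekable input you would re-encode instead.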

Combine mp4 files by order based on number from filenames in Python

Submitted by 戏子无情 on 2021-02-08 08:24:45
Question: I am trying to merge many mp4 files from a directory test into one output.mp4 using ffmpeg in Python. path = '/Users/x/Documents/test' import os for filename in os.listdir(path): if filename.endswith(".mp4"): print(filename) Output: 4. 04-unix,minix,Linux.mp4 6. 05-Linux.mp4 7. 06-ls.mp4 5. 04-unix.mp4 9. 08-command.mp4 1. 01-intro.mp4 3. 03-os.mp4 8. 07-minux.mp4 2. 02-os.mp4 10. 09-help.mp4 I have tried the solution below from the reference here: ffmpy concatenate multiple files with a
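The listing shows why plain `os.listdir` order fails: "10." sorts before "2." lexically. A sketch that sorts on the integer before the first dot and writes a list for ffmpeg's concat demuxer; the parsing rule is an assumption based on the filenames shown in the question.

```python
import os
import re

def leading_number(filename: str) -> int:
    """Extract the leading integer, e.g. '10. 09-help.mp4' -> 10."""
    m = re.match(r"(\d+)\.", filename)
    if m is None:
        raise ValueError(f"no leading number in {filename!r}")
    return int(m.group(1))

def write_concat_list(path: str, list_path: str) -> list:
    """Sort the .mp4 files numerically and write a concat-demuxer list file."""
    files = [f for f in os.listdir(path) if f.endswith(".mp4")]
    files.sort(key=leading_number)
    with open(list_path, "w") as fh:
        for f in files:
            # Single quotes protect the spaces and commas in these names.
            fh.write(f"file '{os.path.join(path, f)}'\n")
    return files

# Then: ffmpeg -f concat -safe 0 -i list.txt -c copy output.mp4
```

`-c copy` assumes all inputs share codecs and parameters; otherwise a re-encode is needed.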

FFMPEG Concat Audio at timestamp

Submitted by 喜夏-厌秋 on 2021-02-08 08:18:24
Question: I'm attempting to concatenate two mp3 audio files (fileA and fileB) at a specific timestamp, for a specific duration only, such that the audio in fileA is replaced with that in fileB for the aforementioned duration. The end result should be: fileA - fileB (for the duration) - fileA. Can this be done with FFmpeg? Answer 1: Yes. Assuming both files have the same sampling rate and channel count, you would create a text file like this: file a.mp3 outpoint 45 file b.mp3 inpoint 0 outpoint 23 file a.mp3
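The answer's list can be generated programmatically. A hedged sketch: the excerpt cuts off after the final `file a.mp3`, so resuming fileA at `start + duration` (so its replaced stretch is skipped) is an assumption about the intent; filenames and times are placeholders.

```python
def splice_list(file_a: str, file_b: str, start: float, duration: float) -> str:
    """Build a concat-demuxer list: A up to `start`, B for `duration`,
    then A resuming after the replaced stretch (assumed resume point)."""
    return (
        f"file '{file_a}'\n"
        f"outpoint {start}\n"        # stop A at the splice point
        f"file '{file_b}'\n"
        f"inpoint 0\n"
        f"outpoint {duration}\n"     # take only `duration` seconds of B
        f"file '{file_a}'\n"
        f"inpoint {start + duration}\n"  # assumption: skip the replaced part of A
    )

# Save the string to list.txt, then:
# ffmpeg -f concat -safe 0 -i list.txt -c copy spliced.mp3
```

Stream-copying mp3 across inpoints is frame-granular at best; re-encoding gives sample-accurate cuts.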

While loop in bash to read a file skips first 2 characters of THIRD Line

Submitted by 早过忘川 on 2021-02-08 08:16:24
Question: #bin/bash INPUT_DIR="$1" INPUT_VIDEO="$2" OUTPUT_PATH="$3" SOURCE="$4" DATE="$5" INPUT="$INPUT_DIR/sorted_result.txt" COUNT=1 initial=00:00:00 while IFS= read -r line; do OUT_DIR=$OUTPUT_PATH/$COUNT mkdir "$OUT_DIR" ffmpeg -nostdin -i $INPUT_VIDEO -vcodec h264 -vf fps=25 -ss $initial -to $line $OUT_DIR/$COUNT.avi ffmpeg -i $OUT_DIR/$COUNT.avi -acodec pcm_s16le -ar 16000 -ac 1 $OUT_DIR/$COUNT.wav python3.6 /home/Video_Audio_Chunks_1.py $OUT_DIR/$COUNT.wav python /home/transcribe.py --decoder
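The classic cause of characters vanishing in such loops is a child process reading from the same stdin the `while read` loop is iterating over: note the second ffmpeg call has no `-nostdin`. A sketch of the same loop in Python, where every subprocess is denied stdin outright; paths mirror the question, and the assumption that `initial` should advance to the previous cut point is mine (the excerpt truncates before any update to it).

```python
import subprocess

def cut_segments(input_video: str, timestamps: list, out_root: str) -> list:
    """Build one ffmpeg command per timestamp line, cutting [initial, end)."""
    initial = "00:00:00"
    commands = []
    for count, end in enumerate(timestamps, start=1):
        out_dir = f"{out_root}/{count}"
        commands.append([
            "ffmpeg", "-nostdin", "-i", input_video,
            "-vcodec", "h264", "-vf", "fps=25",
            "-ss", initial, "-to", end,
            f"{out_dir}/{count}.avi",
        ])
        initial = end  # assumption: next segment starts at this cut point
    return commands

def run_all(commands: list) -> None:
    for cmd in commands:
        # DEVNULL guarantees no child can consume the loop's input,
        # even if -nostdin were ever dropped from the command.
        subprocess.run(cmd, stdin=subprocess.DEVNULL, check=True)
```

In the bash original, appending `< /dev/null` to each ffmpeg line achieves the same isolation.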

Lossless RGB24 to YUV444 transformation

Submitted by 混江龙づ霸主 on 2021-02-08 07:55:42
Question: I am currently attempting lossless compression of RGB24 files using H.264 in FFmpeg. However, the colour-space transformation used in the H.264 compression (RGB24 -> YUV444) has proven to be lossy (I'm guessing due to quantisation error). Is there anything else I can use (e.g. a program) to transform my RGB24 files to YUV losslessly before compressing them with lossless H.264? The ultimate goal is to compress an RGB24 file and then decompress it, with the decompressed file exactly matching
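One route that sidesteps the question entirely: `libx264rgb` is the RGB variant of x264 and performs no colour-space conversion at all, so with `-qp 0` the round trip can be bit-exact. A hedged sketch of the command (not the asker's pipeline); whether downstream players accept High 4:4:4 RGB streams would need checking.

```python
def lossless_rgb_cmd(src: str, dst: str) -> list:
    """Build an ffmpeg command for lossless H.264 directly in RGB."""
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libx264rgb",  # encodes RGB natively, skipping the YUV step
        "-qp", "0",            # constant QP 0 requests true lossless coding
        dst,
    ]

# e.g. subprocess.run(lossless_rgb_cmd("frames.mkv", "out.mkv"), check=True)
```

If YUV output is a hard requirement, an exactly invertible RGB-to-YUV transform needs more than 8 bits per channel (or a reversible transform such as YCoCg-R), since 8-bit BT.601/709 conversion rounds.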

How to distinguish between identical cameras in Libav/ffmpeg?

Submitted by 这一生的挚爱 on 2021-02-08 07:01:54
Question: I have two identical cameras connected and am using Libav/FFmpeg. The option settings are: format = "dshow" input = "video=Videology USB-C Camera" However, I am not able to distinguish between the two identical cameras. If I print out the list of devices, I get the following: $> ffmpeg -list_devices true -f dshow -i dummy [dshow @ 02597f60] DirectShow video devices [dshow @ 02597f60] "Integrated Camera" [dshow @ 02597f60] "Videology USB-C Camera" Last message repeated 1 times [dshow @
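The dshow input device offers `-video_device_number`, a zero-based index among devices that share the same name, which is exactly the same-name case shown in the listing. A hedged sketch building such a command; the camera name comes from the question, the rest is illustrative.

```python
def dshow_cmd(name: str, index: int, out: str) -> list:
    """Open the index-th DirectShow camera with the given (duplicated) name."""
    return [
        "ffmpeg",
        "-f", "dshow",
        "-video_device_number", str(index),  # 0 = first "name" device, 1 = second
        "-i", f"video={name}",
        out,
    ]

# dshow_cmd("Videology USB-C Camera", 1, "cam2.mp4") targets the second unit.
```

The `-list_devices` output also prints an "Alternative name" (a unique device path) for each entry, which can be used in place of the friendly name when stable identification across reboots matters.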

Adding Border to the image using FFmpeg

Submitted by ∥☆過路亽.° on 2021-02-08 05:44:35
Question: I wish to add a border to a single strip of images using FFmpeg. I have been searching for this on Google and tried the command ffmpeg -i input.jpg -vf "draw box= : x=50 : y=10 : w=104 : h=80 : color=white" output.jpg, but I am unable to increase the border size or get my desired colour; it only produces a black border. How can I increase the border and change its colour for the single strip of images? Can anybody help me? Answer 1: Adding a border to an existing image is similar to adding a watermark
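Two filters fit here: `drawbox` paints a rectangle on top of the image (with `t` for thickness and any `color`), while `pad` enlarges the canvas, which is what a picture-frame border usually means. A hedged sketch generating the `pad` expression; the width and colour are placeholders, not values from the question.

```python
def border_filter(width: int, color: str) -> str:
    """Build a pad filter: canvas grows by `width` on every side,
    the image is offset by `width`, and the new area is filled with `color`."""
    w = width
    return f"pad=iw+{2 * w}:ih+{2 * w}:{w}:{w}:{color}"

# e.g. ffmpeg -i input.jpg -vf "pad=iw+20:ih+20:10:10:white" output.jpg
# or, as an overlaid frame instead of a larger canvas:
#      ffmpeg -i input.jpg -vf "drawbox=x=0:y=0:w=iw:h=ih:color=white:t=10" output.jpg
```

Note the filter name is one word, `drawbox`; the space in the question's "draw box=" is itself enough to make the command fail.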