video-processing

Conversion failed. 2 frames left in the queue on closing ffmpeg

血红的双手。 Submitted on 2020-03-05 05:04:27
Question: My simplified ffmpeg command (the full one has over 300 input files) is the following. ffmpeg -i "v1.mp4" -i "v2.mp4" -i "v3.mp4" -filter_complex "[0:v:0][1:v:0][2:v:0]concat=n=3:v=1:a=0,fps=fps=30[cv1]; [0:a:0][1:a:0][2:a:0]concat=n=3:v=0:a=1,asetpts=N/SR/TB[ca1]; [cv1]setpts=0.25*PTS[v4]; [ca1]atempo=4,asetpts=N/SR/TB[a4]" -c:v h264_nvenc -map "[v4]" -map "[a4]" x4_output_0.mp4 The video encoding works at first but then breaks, and the output file seems to be truncated. The output files are nearly
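One alternative worth sketching here (not a confirmed fix for the "frames left in the queue" error) is to join the many files with ffmpeg's concat demuxer instead of the concat filter, so the filter graph only has a single input. The list file name, the Python wrapper and the two chained atempo=2.0 stages (a conservative way to reach 4x) are assumptions, not taken from the question:

# Hypothetical sketch: concat-demuxer join plus 4x speed-up; file names are illustrative.
import subprocess

inputs = ["v1.mp4", "v2.mp4", "v3.mp4"]  # in practice, the full list of 300+ files

# The concat demuxer reads a plain text file listing the inputs.
with open("list.txt", "w") as f:
    for name in inputs:
        f.write(f"file '{name}'\n")

subprocess.run([
    "ffmpeg", "-f", "concat", "-safe", "0", "-i", "list.txt",
    "-filter_complex",
    "[0:v]fps=30,setpts=0.25*PTS[v];"
    "[0:a]atempo=2.0,atempo=2.0,asetpts=N/SR/TB[a]",  # two atempo=2.0 stages give a 4x audio speed-up
    "-map", "[v]", "-map", "[a]",
    "-c:v", "h264_nvenc",
    "x4_output_0.mp4",
], check=True)

This assumes all inputs share the same codec parameters, which the concat demuxer requires.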

Best practice for video ground truthing?

左心房为你撑大大i Submitted on 2020-02-06 06:24:08
Question: I would like to train a deep learning framework (TensorFlow) for object detection with a new object category. As the source for the ground truth I have multiple video files which contain the object (only part of the image contains the object). How should I ground truth the video? Should I extract it frame by frame and label every frame, even when those frames will be quite similar? Or what would be best practice for such a task? Open-source tools are preferred. Answer 1: It usually works as you
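The answer above is cut off, so the following is only an illustrative Python sketch of one common compromise: sample every Nth frame for labelling instead of annotating near-identical consecutive frames (the file names and the every_n value are made up):

# Sketch: save one frame per `every_n` frames of a video for manual labelling.
import os
import cv2

def extract_frames(video_path, out_dir, every_n=30):
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:  # keep only every Nth frame
            cv2.imwrite(os.path.join(out_dir, f"frame_{idx:06d}.jpg"), frame)
            saved += 1
        idx += 1
    cap.release()
    return saved

extract_frames("source.mp4", "frames_to_label", every_n=30)  # roughly one image per second at 30 fps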

VideoCapture() to read multiple videos and frame resolution problem

一曲冷凌霜 Submitted on 2020-02-05 02:53:27
Question: According to the answer from this article, which refers to a way to combine a single image into a 4-sided view. From there, I want to change from using only a single video to using 4 videos as input. This is my code, which uses a single video as input:
import cv2
import numpy as np

def make4side(image, scale=0.5):
    # image = cv2.imread(image)
    h = int((scale*image.shape[0]))  # height
    w = int((scale*image.shape[1]))  # width
    image = cv2.resize(image, (w, h), interpolation=cv2.INTER_AREA)  # shrink image to
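For the multi-video part of the question, here is a minimal sketch (the file names and the common resize target are assumptions) of reading one frame per iteration from four cv2.VideoCapture objects, which could then be fed to make4side-style compositing:

# Sketch: read four videos in lockstep, one frame from each per loop iteration.
import cv2

paths = ["front.mp4", "right.mp4", "back.mp4", "left.mp4"]  # illustrative names
caps = [cv2.VideoCapture(p) for p in paths]

while True:
    frames = []
    for cap in caps:
        ok, frame = cap.read()
        if not ok:            # stop as soon as any stream runs out of frames
            frames = None
            break
        frames.append(frame)
    if frames is None:
        break
    # the sources may differ in resolution, so bring them to a common size first
    frames = [cv2.resize(f, (640, 480), interpolation=cv2.INTER_AREA) for f in frames]
    # ... hand the four frames to the compositing step here ...

for cap in caps:
    cap.release()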

Error while opening video file from URL and SeekFrame not working in Xuggler

前提是你 Submitted on 2020-01-25 06:40:13
Question: I have a video in an Azure blob container. I open the connection using a proxy, get the InputStream from the connection, and pass it to Xuggler.
HttpURLConnection conn = null;
boolean isUseProxyConnection = proxyFileValues.isUseProxyConnection();
// Use Proxy
if (isUseProxyConnection) {
    proxy = AzureBlobStorageProxyConnection();
}
URL urlPath = new URL(inputFile);
if (isUseProxyConnection) {
    // Open Via Proxy
    conn = (HttpURLConnection) urlPath.openConnection(proxy);
} else {
    // Open without

Removal of Human Voice from a video or audio file

故事扮演 Submitted on 2020-01-24 05:30:13
Question: Is there a way I can remove the human voice from an audio/video file so that only the music is left? I want to do this using software like Adobe etc., or from the command line with ffmpeg/sox, but I prefer the command line so I can tune the settings easily. Answer 1: I've been working with karaoke for a while. There is no way to reliably remove the vocal from a song and end up with acceptable-quality music. There are certain ways to do it, the most popular relying on the fact that the voice
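To make the centre-cancellation idea the answer alludes to concrete, here is a rough Python sketch (the file names are placeholders; it assumes a stereo 16-bit WAV with the vocal panned to the centre, and it also removes anything else mixed to the centre):

# Sketch: "karaoke" trick - subtract one stereo channel from the other so that
# centre-panned content (typically the lead vocal) largely cancels out.
import numpy as np
from scipy.io import wavfile

rate, data = wavfile.read("song.wav")        # expects stereo int16 samples
left = data[:, 0].astype(np.float32)
right = data[:, 1].astype(np.float32)

instrumental = (left - right) / 2.0          # centre-panned material cancels
instrumental = np.clip(instrumental, -32768, 32767).astype(np.int16)

wavfile.write("instrumental_mono.wav", rate, instrumental)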

FFmpeg -filter_complex drawtext with styles like bold, italic and underline

こ雲淡風輕ζ Submitted on 2020-01-23 03:07:28
Question: I am trying to add text on the padded area of my video. There are 4 to 5 things that I am not able to do:
1) Draw text styling (bold, italic, underline).
2) Padded-area opacity.
3) Subtitle vertical alignment. When I give some value to VAlign it sometimes goes out of the window. How do I calculate it correctly, e.g. 50px from the top or, say, 200px from the bottom?
4) The subtitle should span the full width of the video. Right now it is like this.
5) Having a hard time providing the OutlineColour value. I have an RGBA value so
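As a partial illustration of points 1) and 2) only (the font path, text and sizes below are placeholders, not values from the question): drawtext has no bold/italic switch, so the usual approach is to point fontfile at a bold or italic face, and the padded-area opacity can come from the alpha in boxcolor:

# Sketch: bold styling via the font file, semi-transparent box, text near the bottom.
import subprocess

drawtext = (
    "drawtext="
    "fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans-Bold.ttf:"  # bold comes from the font face
    "text='Sample subtitle':"
    "fontsize=36:fontcolor=white:"
    "box=1:boxcolor=black@0.5:boxborderw=10:"   # 50% opaque background box
    "x=(w-text_w)/2:y=h-200"                    # centred horizontally, 200 px above the bottom
)

subprocess.run([
    "ffmpeg", "-i", "input.mp4",
    "-vf", drawtext,
    "-c:a", "copy",
    "output.mp4",
], check=True)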

Playing a specific interval of a video in mplayer using command-line options

北慕城南 Submitted on 2020-01-22 05:15:28
Question: I am using mplayer to play videos... I wanted to know if there are command-line options to play a specific interval of a video in mplayer. For example, if I want to play a video file from 56 seconds for a duration of 3 seconds, what would the command-line options be? I know about the -ss option that seeks to a specific position, but how do I specify the duration that I want to play? Concretely, if I want a command that plays a video file starting at the beginning of the 56th second and
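For reference, a small sketch of the options usually suggested for this, worth checking against the man page of your mplayer build: -ss seeks to the start position and -endpos, combined with -ss, is treated as the playback duration. The Python wrapper and file name are just for illustration:

# Sketch: play 3 seconds of video starting at second 56 with mplayer.
import subprocess

def play_interval(path, start_sec, duration_sec):
    subprocess.run([
        "mplayer",
        "-ss", str(start_sec),         # seek to the start position (seconds)
        "-endpos", str(duration_sec),  # stop after this many seconds of playback
        path,
    ], check=True)

play_interval("video.mp4", 56, 3)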

How can I solve the "Unable to open 'raise.c'" error? (VS Code, Linux)

蓝咒 Submitted on 2020-01-16 08:36:34
Question: (OS and version: Ubuntu 18.4; VS Code version: 1.4; C/C++ extension version: 0.26) I have read all the articles about "raise.c" and none of them solved my problem. I just wrote simple OpenCV code which captures webcam frames, and each time I run it, it frequently shows an error. The error message is: Unable to open 'raise.c': Unable to read file (Error: File not found (/build/glibc-OTsEL5/glibc-2.27/sysdeps/unix/sysv/linux/raise.c)). launch.json is: { // Use IntelliSense to

How to change Media Foundation Transform output frame(video) size?

懵懂的女人 Submitted on 2020-01-15 15:38:09
Question: I am writing a transform and want to change the output size of the frame and video. I inspected the sample and found the order in which the functions are called:
SetInputType
SetOutputType
GetInputCurrentType
SetInputType
UpdateFormatInfo
GetOutputCurrentType
SetOutputType
GetOutputStreamInfo
SetProperties
ProcessOutput (THROW NEED INPUT)
ProcessInput
ProcessOutput
ProcessOutput (THROW ....
.... repeat until done
In which step do I need to modify the output size, and how? Example: Input a 640x480 video,