video-processing

ffmpeg: how to determine frame rate automatically?

三世轮回 submitted on 2020-08-25 18:59:56
Question: I use this simple script to convert a video to images with ffmpeg, but the frame rate is fixed. How can I determine it automatically?

    FRAME_RATE="30"
    SEPARATOR='/'
    VIDEO_PATH=$1
    VIDEO_BASE_DIR=`dirname $1`
    FRAMES_DIR=$VIDEO_BASE_DIR$SEPARATOR"Frames"
    rm -rf $FRAMES_DIR
    mkdir $FRAMES_DIR
    # Convert video to images
    ./ffmpeg -r $FRAME_RATE -i $VIDEO_PATH $FRAMES_DIR$SEPARATOR"image%d.png"

UPDATE: With ffprobe I checked that my first video's frame rate is 30. The results are also the same (339 frames are
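Note: a minimal sketch of the usual approach, not necessarily the eventual answer here: query the stream's r_frame_rate with ffprobe and pass it to ffmpeg instead of hard-coding FRAME_RATE. Paths and file names below are placeholders; for plain frame extraction, simply omitting -r also makes ffmpeg use the input's native rate.

    import subprocess
    from fractions import Fraction

    def detect_frame_rate(video_path):
        # Ask ffprobe for the video stream's frame rate, e.g. "30/1" or "30000/1001".
        out = subprocess.run(
            ["ffprobe", "-v", "error", "-select_streams", "v:0",
             "-show_entries", "stream=r_frame_rate",
             "-of", "default=noprint_wrappers=1:nokey=1", video_path],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        return float(Fraction(out))   # 30.0, 29.97, ...

    def extract_frames(video_path, frames_dir):
        # Same job as the shell script above, but with the detected rate.
        fps = detect_frame_rate(video_path)
        subprocess.run(
            ["ffmpeg", "-r", str(fps), "-i", video_path,
             f"{frames_dir}/image%d.png"],
            check=True,
        )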

FFMPEG: webm to mp4 quality loss

混江龙づ霸主 submitted on 2020-08-25 03:19:08
Question: When trying to convert a .webm video (a two-color animation) to a .mp4 video using ffmpeg (3.4.2 on Mac), the result is somewhat blurry. I researched this topic and tried different approaches to solve it. Here is the most promising command:

    ffmpeg -i vidoe.webm -qscale 1 video.mp4

However, the quality change is still tremendous; see the difference below. [webm vs. mp4 comparison screenshots] The resolution of the two videos is the same, but the size dropped from 24.3 MB (.webm) to 1.5 MB (.mp4) after conversion.
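Note: a hedged aside: with the default H.264 encoder (libx264), -qscale is not the intended rate-control option; constant-quality encodes are normally driven with -crf. A minimal sketch from Python, file names assumed:

    import subprocess

    # Constant-quality H.264 encode; lower CRF = higher quality and larger file
    # (the libx264 default is 23, ~18 is close to visually lossless).
    subprocess.run(
        ["ffmpeg", "-i", "video.webm",
         "-c:v", "libx264", "-crf", "18", "-preset", "slow",
         "-pix_fmt", "yuv420p",      # widest player compatibility
         "video.mp4"],
        check=True,
    )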

cv2.VideoWriter: Asks for a tuple as Size argument, then rejects it

╄→гoц情女王★ submitted on 2020-06-25 17:31:07
Question: I'm using OpenCV 4.0 and Python 3.7 to create a timelapse video. When constructing a VideoWriter object, the documentation says the Size argument should be a tuple. When I give it a tuple it rejects it, and when I try to replace it with something else, it won't accept that either because it says the argument isn't a tuple. With Size not a tuple:

    out = cv2.VideoWriter('project.avi', 1482049860, 30, height, width)
    SystemError: new style getargs format but argument is not a tuple

When I changed Size to a
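Note: for reference, the constructor as exposed in the Python bindings is VideoWriter(filename, fourcc, fps, frameSize[, isColor]), where frameSize is a single (width, height) tuple. A minimal sketch with placeholder file name, codec, and dimensions:

    import cv2

    width, height = 1920, 1080                       # placeholder dimensions
    fourcc = cv2.VideoWriter_fourcc(*"MJPG")         # or "mp4v", "XVID", ...
    out = cv2.VideoWriter("project.avi", fourcc, 30.0, (width, height))

    # Each frame written must be a BGR image matching (width, height):
    # out.write(frame) for every frame, then:
    out.release()

Passing height and width as two separate arguments, as in the snippet above, leaves frameSize holding a bare integer, which is what triggers the "argument is not a tuple" error.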

Rendering Canvas context in Worker thread for smooth playback

限于喜欢 submitted on 2020-06-17 05:53:36
Question: I have two videos here showing the rendering of a decoded MJPEG video sequence; this one renders only one video. Video description: Left (Source), Middle (Canvas copy of stream), Right (Decoded from network). At this point the video is smooth both from source to network and back (websocket), and with up to at least 5 videos decoded and rendered it is reasonably smooth. However, if I render around 20 videos, things start to lag. My question is what is the best algorithm that will allow to render (or in

AVAssetImageGenerator sometimes returns the same image from 2 successive frames

时间秒杀一切 submitted on 2020-06-09 12:52:27
Question: I'm currently extracting every frame from a video with AVAssetImageGenerator, but sometimes it returns almost the same image twice in a row (the two do not have the same "frame time"). The funny thing is that it always happens (in my test video) every 5 frames. Here and here are the two images (open each in a new tab, then switch between the tabs to see the differences). Here's my code:

    // setting up generator & compositor
    self.generator = [AVAssetImageGenerator assetImageGeneratorWithAsset:asset];

Wrong framerate during FFMPEG concatenation

白昼怎懂夜的黑 submitted on 2020-06-01 04:47:07
Question: I'm concatenating 2 video files, 00000 and 00001:

    ffmpeg -y -f concat -safe 0 -i file_list.txt -loglevel error -c copy test.mp4

file_list.txt:

    file '00000.mp4'
    file '00001.mp4'

00000.mp4:

    Duration: 00:00:00.42, start: 0.000000, bitrate: 204 kb/s
    Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1920x1080, 182 kb/s, 60 fps, 60 tbr, 15360 tbn, 120 tbc (default)

00001.mp4:

    Duration: 00:00:01.63, start: 0.000000, bitrate: 58 kb/s
    Stream #0:0(und): Video: h264 (High) (avc1 /
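Note: a hedged aside: the concat demuxer with -c copy assumes all inputs share identical codec parameters and timing, so mismatched frame rates usually have to be reconciled by re-encoding before concatenation. A minimal Python sketch for checking the inputs first (the field list is an assumption about what matters here):

    import subprocess, json

    def video_params(path):
        # The fields that generally must match for a lossless concat (-c copy).
        out = subprocess.run(
            ["ffprobe", "-v", "error", "-select_streams", "v:0",
             "-show_entries",
             "stream=codec_name,width,height,r_frame_rate,time_base",
             "-of", "json", path],
            capture_output=True, text=True, check=True,
        ).stdout
        return json.loads(out)["streams"][0]

    params = [video_params(p) for p in ("00000.mp4", "00001.mp4")]
    if params[0] != params[1]:
        print("Inputs differ; re-encode to a common frame rate before concatenating:")
        for p in params:
            print(p)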

FFmpeg — segmentation fault when trying to replace audio

浪尽此生 submitted on 2020-04-11 18:33:51
Question: I have exactly this scenario: "FFMPEG mux video and audio (from another video) - mapping issue". I want to mux the video stream from one file and the audio stream from another. I have renamed the files accordingly and, when trying to follow the answer with

    ffmpeg -i input_0.mp4 -i input_1.mp4 -c copy -map 0:0 -map 1:1 -shortest out.mp4

or

    ffmpeg -i input_0.mp4 -i input_1.mp4 -c copy -map 0:v:0 -map 1:a:0 -shortest out.mp4

I get Segmentation fault (core dumped). The same happens when I follow this
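Note: the two commands shown are standard stream-mapping syntax, so a crash of this kind often points at the ffmpeg build itself rather than the options; as a hedged first step, it can help to confirm that the mapped streams actually exist. A minimal Python sketch using ffprobe, with the question's file names:

    import subprocess

    def list_streams(path):
        # Print "index,codec_type" for every stream, e.g. "0,video" / "1,audio".
        out = subprocess.run(
            ["ffprobe", "-v", "error",
             "-show_entries", "stream=index,codec_type",
             "-of", "csv=p=0", path],
            capture_output=True, text=True, check=True,
        ).stdout
        print(path, "->", out.strip().replace("\n", "; "))

    list_streams("input_0.mp4")   # expect a video stream for -map 0:v:0
    list_streams("input_1.mp4")   # expect an audio stream for -map 1:a:0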

Adding a structure to an NSDictionary

情到浓时终转凉″ submitted on 2020-03-27 18:24:13
Question: I am creating a CMVideoFormatDescriptionRef out of thin air using this code:

    CMVideoDimensions dimensions = { .width = width, .height = height };
    CMVideoFormatDescriptionRef videoInfo = NULL;
    NSDictionary *options2 = [NSDictionary dictionaryWithObjectsAndKeys:
                              @(YES), kCVPixelBufferCGImageCompatibilityKey,
                              @(YES), kCVPixelBufferCGBitmapContextCompatibilityKey,
                              dimensions, kCVImageBufferDisplayDimensionsKey,
                              nil];
    CFDictionaryRef dictOptionsRef = (__bridge CFDictionaryRef)options2;

Contour comparison in OpenCV (Conversion from C to C++)

走远了吗. submitted on 2020-03-05 05:56:29
Question: I am still new to C++, and now I need to convert some parts of this old program of mine from C to C++ because I want to apply BackgroundSubtractorMOG2, which is only available in the C++ API. Basically, this program detects contours from a video camera based on background subtraction and chooses the largest contour available. I have a problem particularly with this part (taken from the old program):

    double largestArea = 0; //Const. for the largest area
    CvSeq* largest_contour = NULL; /
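Note: purely as an illustration of the same logic in the modern API (shown in Python, whose cv2 calls mirror the C++ ones almost name-for-name, not as the actual C++ conversion): background subtraction with MOG2, then picking the contour with the largest area, with no CvSeq bookkeeping. The camera index and display loop are assumptions.

    import cv2

    cap = cv2.VideoCapture(0)                         # camera index assumed
    subtractor = cv2.createBackgroundSubtractorMOG2()

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        # OpenCV 4.x returns (contours, hierarchy); 3.x returns an extra image first.
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            largest = max(contours, key=cv2.contourArea)   # replaces the CvSeq loop
            cv2.drawContours(frame, [largest], -1, (0, 255, 0), 2)
        cv2.imshow("largest contour", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()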