video-processing

Merge 2 image outputs (HDMI, DVI, VGA, or other) on one screen

[亡魂溺海] submitted on 2020-01-15 14:11:11

Question: I am looking for something uncommon: a device that would allow merging the images coming from two outputs (HDMI, DVI, VGA, or any other type of image output) into one final image displayed on screen. The outputs can be of the same type (e.g. two HDMIs) or different; anything that works would do. In case that isn't clear, here is a simple schema: It could work in different ways, for example with a system of priority (e.g. output 1 writes its image and output 2 overwrites non-transparent pixels) …

Overlay a video on another video at specific time with FFmpeg

孤人 submitted on 2020-01-14 06:07:46

Question: I am trying to overlay a video with another video. I followed the original command the OP posted here, and it works, but it overlays the video from time 0: ffmpeg -i 720480.mp4 -i sdbug.mov -filter_complex "[0:0][1:0]overlay[out]" -shortest -map [out] -map 0:1 -pix_fmt yuv420p -c:a copy -c:v libx264 -crf 18 new.mp4 I tried the accepted answer to specify a time, but it is not working for me: 1) the overlay starts at around second 12, and 2) the video is not played after the overlay is finished. ffmpeg …
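The usual fix for both symptoms is to gate the overlay with a timeline `enable` expression (so the main video keeps playing once the overlay window ends) and to drop `-shortest`, which truncates the output to the shorter input. A minimal sketch that just builds the command; file names, stream mappings, and the 12–20 s window are assumptions taken from the question:

```python
# Build an ffmpeg command that shows the overlay only between t=12s and t=20s.
# enable='between(t,12,20)' hides the overlay outside that window; dropping
# -shortest stops the output from being cut when the overlay input ends.

def overlay_cmd(main="720480.mp4", bug="sdbug.mov", start=12, end=20, out="new.mp4"):
    filt = f"[0:v][1:v]overlay=enable='between(t,{start},{end})'[out]"
    return [
        "ffmpeg", "-i", main, "-i", bug,
        "-filter_complex", filt,
        "-map", "[out]", "-map", "0:a?",
        "-c:v", "libx264", "-crf", "18", "-pix_fmt", "yuv420p",
        "-c:a", "copy", out,
    ]

print(" ".join(overlay_cmd()))
```

If the overlay clip itself should only start decoding at 12 s, an additional `setpts=PTS+12/TB` on input 1 would shift its timestamps; the sketch above only toggles visibility.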

OpenCV doesn't save the video

北战南征 submitted on 2020-01-13 02:30:09

Question: I'm using the following code to read a video from file, apply the Canny edge algorithm, and write the modified video to a file. The code compiles and runs perfectly, but the video is not written! I'm utterly confused; please tell me what the error is. The file is not created at all! OS: Ubuntu 12.10. Code for writing to the output file; opening the output file: bool setOutput(const std::string &filename, int codec=0, double framerate=0.0, bool isColor=true) { outputFile = filename; extension …

How to composite videos using multiple AVVideoCompositions

别说谁变了你拦得住时间么 submitted on 2020-01-12 06:19:26

Question: I'm trying to figure out how to composite multiple videos (AVAssets) into a single video such that each of the videos goes through its own video composition. However, I can't see a way to accomplish this and was wondering if anyone had any ideas. Consider the following: The picture above illustrates what I'm trying to do. I want to take the video track from four different videos and merge them into a single video such that they play in a grid-like layout. Right now, I'm able to achieve this …
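The AVFoundation side of this is Swift/Objective-C (layer instructions on an `AVMutableVideoComposition`), but the placement math that each layer instruction's affine transform encodes is language-neutral and can be sketched separately. A hypothetical helper, assuming all four sources share the output's natural size:

```python
# For a rows x cols grid, each source video is scaled down and translated into
# its cell; in AVFoundation these (scale, tx, ty) values would become a
# CGAffineTransform on a per-track layer instruction. Sizes are assumptions.

def grid_transforms(out_w, out_h, cols=2, rows=2):
    sx, sy = 1.0 / cols, 1.0 / rows
    cell_w, cell_h = out_w / cols, out_h / rows
    transforms = []
    for r in range(rows):
        for c in range(cols):
            # (scale_x, scale_y, translate_x, translate_y)
            transforms.append((sx, sy, c * cell_w, r * cell_h))
    return transforms

print(grid_transforms(1280, 720))
```

To give each video its *own* composition first (the asker's actual blocker), one common workaround is rendering each composed video to an intermediate asset, then running this grid pass over the four results.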

ffmpeg splitting RGB and Alpha channels using filter

[亡魂溺海] submitted on 2020-01-10 02:17:32

Question: I'm trying to use ffmpeg to split an input file into two separate files: an MP4 with only the R, G, and B channels, and an MP4 with the "extracted" A channel (a so-called key clip, see http://ffmpeg-users.933282.n4.nabble.com/quot-Extracting-quot-Alpha-Channel-td3700227.html). I've managed to do both, but now I want to combine them into one single command. Here's what I do: ffmpeg -r $FPS -y -i input.flv -vcodec libx264 -vpre ipod640 -acodec libfaac -s 256x256 -r $FPS -filter_complex INSERT_FILTER_HERE …
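A single-command version can use `split` to duplicate the decoded stream and `alphaextract` to turn the alpha plane into a luma-only stream, mapping each labelled output to its own file. A sketch that builds the command; the output names are assumptions, and the question's preset/scale flags are omitted for brevity:

```python
# One filter graph, two outputs: split duplicates the RGBA stream, and
# alphaextract converts the alpha plane of one copy into a grayscale stream.
# File names here are placeholders, not the asker's.

def split_rgb_alpha_cmd(src="input.flv", rgb_out="rgb.mp4", key_out="key.mp4"):
    filt = "[0:v]split[colour][a];[a]alphaextract[alpha]"
    return [
        "ffmpeg", "-y", "-i", src,
        "-filter_complex", filt,
        "-map", "[colour]", "-c:v", "libx264", rgb_out,
        "-map", "[alpha]", "-c:v", "libx264", key_out,
    ]

print(" ".join(split_rgb_alpha_cmd()))
```

Each `-map "[label]"` must appear before the output file it feeds; that ordering is what lets one invocation write both files.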

Get the width / height of the video from H.264 NALU

谁说胖子不能爱 submitted on 2020-01-09 18:22:29

Question: I have gotten the SPS in a NALU (AVC Decoder Configuration Record) and am trying to parse the video width/height from it. 67 64 00 15 ac c8 60 20 09 6c 04 40 00 00 03 00 40 00 00 07 a3 c5 8b 67 80 This is my code to parse the SPS, but it gets the wrong values: pic_width_in_mbs_minus1 is 5 and pic_height_in_map_units_minus1 is 1, while the video is actually 512 × 288 px. typedef struct _SequenceParameterSet { private: const unsigned char * m_pStart; unsigned short m_nLength; int m_nCurrentBit; unsigned int …
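Getting 5 and 1 here typically means the parser skipped the high-profile fields (profile_idc 0x64 = 100 adds chroma_format_idc, bit depths, and the scaling-matrix flag before the frame-size fields) or forgot to strip emulation-prevention bytes. A minimal sketch of the relevant part of the SPS syntax, assuming no scaling matrices and pic_order_cnt_type ≠ 1, which holds for the SPS quoted above:

```python
# Minimal H.264 SPS parser: just enough of ITU-T H.264 §7.3.2.1 to recover
# the coded width/height. Not a general-purpose parser.

class BitReader:
    def __init__(self, data):
        self.data, self.pos = data, 0

    def u(self, n):                      # read n bits, MSB first
        v = 0
        for _ in range(n):
            byte = self.data[self.pos // 8]
            v = (v << 1) | ((byte >> (7 - self.pos % 8)) & 1)
            self.pos += 1
        return v

    def ue(self):                        # unsigned Exp-Golomb
        zeros = 0
        while self.u(1) == 0:
            zeros += 1
        return (1 << zeros) - 1 + self.u(zeros)

def parse_sps(nalu):
    # Strip the NAL header byte and emulation-prevention bytes (00 00 03).
    rbsp = bytes(nalu[1:]).replace(b"\x00\x00\x03", b"\x00\x00")
    r = BitReader(rbsp)
    profile_idc = r.u(8)
    r.u(8)                               # constraint flags + reserved bits
    r.u(8)                               # level_idc
    r.ue()                               # seq_parameter_set_id
    if profile_idc in (100, 110, 122, 244, 44, 83, 86, 118, 128):
        if r.ue() == 3:                  # chroma_format_idc
            r.u(1)                       # separate_colour_plane_flag
        r.ue(); r.ue()                   # bit_depth_luma/chroma_minus8
        r.u(1)                           # qpprime_y_zero_transform_bypass
        assert r.u(1) == 0, "scaling matrices not handled in this sketch"
    r.ue()                               # log2_max_frame_num_minus4
    poc_type = r.ue()
    if poc_type == 0:
        r.ue()                           # log2_max_pic_order_cnt_lsb_minus4
    elif poc_type == 1:
        raise NotImplementedError("pic_order_cnt_type 1 not handled")
    r.ue()                               # max_num_ref_frames
    r.u(1)                               # gaps_in_frame_num_value_allowed
    width_mbs = r.ue() + 1               # pic_width_in_mbs_minus1 + 1
    height_units = r.ue() + 1            # pic_height_in_map_units_minus1 + 1
    frame_mbs_only = r.u(1)
    if not frame_mbs_only:
        r.u(1)                           # mb_adaptive_frame_field_flag
    r.u(1)                               # direct_8x8_inference_flag
    width = width_mbs * 16
    height = (2 - frame_mbs_only) * height_units * 16
    if r.u(1):                           # frame_cropping_flag
        left, right, top, bottom = r.ue(), r.ue(), r.ue(), r.ue()
        width -= 2 * (left + right)      # 4:2:0 crop units
        height -= 2 * (2 - frame_mbs_only) * (top + bottom)
    return width, height

sps = bytes.fromhex("676400 15acc8 602009 6c0440 000003 004000 0007a3 c58b67 80")
print(parse_sps(sps))                    # → (512, 288)
```

On this SPS, pic_width_in_mbs_minus1 is 31 and pic_height_in_map_units_minus1 is 17, giving exactly 512 × 288.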

Can't rotate video with Android FFmpeg

六眼飞鱼酱① submitted on 2020-01-06 19:49:09

Question: I am using https://github.com/WritingMinds/ffmpeg-android-java. I can trim and join mp4 files, but I am not able to rotate or speed video up/down. I have tried almost all the SO answers but had no luck. I am using the latest Android Studio. D/ffmpeg_work: [-noautorotate, -i, /storage/emulated/0/Pictures/VideoApp/v_1479895157.mp4, -vf, transpose=1, /storage/emulated/0/Pictures/VideoApp/v_1480001945.mp4] D/FFmpeg: Running publishing updates method D/ffmpeg_work: onProgress: ffmpeg version n3.0.1 Copyright (c) 2000-2016 the …
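Rotation via `transpose` requires a re-encode (it rewrites pixels, so a stream copy or a missing video encoder makes it fail), and speed changes need `setpts` on the video plus a matching `atempo` on the audio. A sketch of the argument array such a wrapper's `execute()` would receive; the paths and the 2x factor are placeholders, not the asker's:

```python
# transpose=1 rotates 90° clockwise; setpts=PTS/factor speeds the video up by
# `factor`, and atempo keeps the audio pitch-corrected and in sync. A single
# atempo instance only accepts 0.5-2.0, so factor is assumed to be in range.

def rotate_and_speed_args(src, dst, factor=2.0):
    assert 0.5 <= factor <= 2.0, "chain multiple atempo filters outside this range"
    return [
        "-i", src,
        "-vf", f"transpose=1,setpts=PTS/{factor}",
        "-af", f"atempo={factor}",
        "-c:v", "libx264",               # rotation requires re-encoding
        dst,
    ]

print(rotate_and_speed_args("in.mp4", "out.mp4"))
```

For slow-motion, factor < 1 works the same way (e.g. 0.5 halves the speed); outside 0.5-2.0, atempo filters are chained, e.g. `atempo=2.0,atempo=2.0` for 4x.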

Video Transitions with GStreamer & GNonLin not working

可紊 submitted on 2020-01-05 12:16:12

Question: I've been trying to combine two videos with GStreamer, with a short transition (like smpte) between them, using GStreamer and GNonLin in Python. However, I can't get the gnloperation/smpte transition to work. Goal: Below is a program. I want it to play the first 4 seconds of one file, and at 2 seconds start an smpte transition (lasting 2 seconds) to another file (so the second file will start playing 2 seconds into the whole thing but be 'revealed' over the course of the 2-second …

How to read YUV videos in MATLAB?

廉价感情. submitted on 2020-01-05 08:46:49

Question: I have a YUV video and I have to read it in MATLAB for video processing. I have used mmreader, but it appears that it only accepts avi and mpg files. VideoReader is not available in my version of MATLAB, and I don't think it supports the yuv file extension. Answer 1: Do these solve your problem? http://www.mathworks.com/matlabcentral/fileexchange/6318-convert-yuv-cif-420-video-file-to-image-files http://www.mathworks.com/matlabcentral/fileexchange/11252-yuv-file-to-matlab-movie Source: https:/
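A raw `.yuv` file needs no video reader at all: it is headerless, just concatenated planes, so plain file I/O plus the plane-size arithmetic is enough, and the identical `fread` arithmetic works in MATLAB. A sketch in Python, assuming planar YUV 4:2:0 and a known frame size (CIF, 352×288, is assumed here since the linked scripts target CIF):

```python
import io

# A raw YUV 4:2:0 planar file has no header: each frame is a full-size Y
# plane followed by quarter-size U and V planes, i.e. w*h*3/2 bytes per
# frame. The frame size must be known in advance.

def read_yuv420_frames(f, w, h):
    ysize, csize = w * h, (w // 2) * (h // 2)
    while True:
        y = f.read(ysize)
        if len(y) < ysize:
            break                        # end of file
        u, v = f.read(csize), f.read(csize)
        yield y, u, v

# Usage with an in-memory stand-in for a two-frame CIF clip:
fake = io.BytesIO(bytes(2 * (352 * 288 * 3 // 2)))
frames = list(read_yuv420_frames(fake, 352, 288))
print(len(frames))                       # → 2
```

In MATLAB the loop body becomes `y = fread(fid, w*h, 'uint8'); u = fread(fid, w*h/4, 'uint8'); v = fread(fid, w*h/4, 'uint8');`, which is essentially what the linked File Exchange scripts do.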