video-processing

How do I get a Video Thumbnail in .Net?

∥☆過路亽.° submitted on 2019-11-27 19:21:47
I'm looking to implement a function that retrieves a single frame from an input video, so I can use it as a thumbnail. Something along these lines should work:

// filename examples: "test.avi", "test.dvr-ms"
// position is from 0 to 100 percent (0.0 to 1.0)
// returns a bitmap
byte[] GetVideoThumbnail(string filename, float position) { }

Does anyone know how to do this in .Net 3.0? The correct solution will be the "best" implementation of this function. Bonus points for avoiding selection of blank frames. I ended up rolling my own stand-alone class (with the single method I described), the
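A minimal sketch of the most common approach, shelling out to ffmpeg/ffprobe and grabbing one frame at the requested percentage of the duration. It is written in Python for brevity (the function name and output path are illustrative); the same two process invocations translate directly to System.Diagnostics.Process in .NET.

```python
import subprocess

def get_video_thumbnail(filename: str, position: float, out_path: str = "thumb.jpg") -> bytes:
    """Return one frame, taken at `position` (0.0-1.0) of the video's duration, as JPEG bytes."""
    # Ask ffprobe (ships with ffmpeg) for the duration in seconds.
    duration = float(subprocess.check_output([
        "ffprobe", "-v", "error",
        "-show_entries", "format=duration",
        "-of", "default=noprint_wrappers=1:nokey=1",
        filename,
    ]).strip())
    seek = duration * position
    # Putting -ss before -i makes ffmpeg seek on the input, which is fast.
    subprocess.check_call([
        "ffmpeg", "-y", "-ss", f"{seek:.3f}", "-i", filename,
        "-frames:v", "1", out_path,
    ])
    with open(out_path, "rb") as f:
        return f.read()
```

Avoiding blank frames could be handled by re-running with a small time offset when the decoded image is nearly uniform, but that heuristic is not part of the original question.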

FFMPEG: Extracting 20 images from a video of variable length

社会主义新天地 submitted on 2019-11-27 18:25:10
I've searched the internet for this intensively, but I didn't find what I needed, only variations of it that are not quite what I want. I've got several videos of different lengths and I want to extract 20 images out of every video, from start to end, to give the broadest impression of the video. So one video is 16m 47s long => 1007s in total => I have to make one snapshot of the video every ~50 seconds. So I figured on using the -r switch of ffmpeg with a value of 0.019860973 (i.e. 20/1007), but ffmpeg tells me that the framerate is too small... The only way I figured
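A workaround that sidesteps the -r limit, as a sketch (assuming ffmpeg and ffprobe are on the PATH): probe the duration first, then use the fps video filter with a fractional rate and cap the output at 20 frames. File names and the helper name are illustrative.

```python
import subprocess

def extract_n_frames(video: str, n: int = 20, pattern: str = "thumb_%02d.jpg") -> None:
    """Grab n frames spread evenly over the whole video."""
    duration = float(subprocess.check_output([
        "ffprobe", "-v", "error",
        "-show_entries", "format=duration",
        "-of", "default=noprint_wrappers=1:nokey=1",
        video,
    ]).strip())
    # For the 1007 s example this becomes fps=0.019861, i.e. one frame every ~50 s.
    subprocess.check_call([
        "ffmpeg", "-y", "-i", video,
        "-vf", f"fps={n / duration:.6f}",
        "-frames:v", str(n),
        pattern,
    ])
```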

Fastest way to extract a specific frame from a video (PHP/ffmpeg/anything)

穿精又带淫゛_ submitted on 2019-11-27 18:06:47
I have a web page which (among other things) needs to extract a specific frame from a user-uploaded video. The user seeks to a particular part of an .mp4 in the player, then clicks a button, and an AJAX call fires off to a PHP script which takes the .mp4 and the exact time in the video, and uses that to extract a "thumbnail" frame. My current solution uses the PHP exec command:

exec("ffmpeg -i $videoPath -ss $timeOffset -vframes 1 $jpgOutputPath");

...which works just great, except it's as slow as molasses. My guess is that ffmpeg is a little too much for the job, and I might be
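One likely cause of the slowness: with -ss placed after -i, ffmpeg decodes the whole file up to the requested time. Placing -ss before -i seeks on the input by keyframe and only decodes from there. A minimal sketch of the same call, shown in Python with variable names mirroring the question's PHP:

```python
import subprocess

def extract_frame(video_path: str, time_offset: float, jpg_output_path: str) -> None:
    """Grab a single frame near time_offset (in seconds) using input-side seeking."""
    subprocess.check_call([
        "ffmpeg", "-y",
        "-ss", f"{time_offset:.3f}",   # before -i: jump by keyframe, then decode only a little
        "-i", video_path,
        "-frames:v", "1",
        jpg_output_path,
    ])
```

Recent ffmpeg builds make this input-side seek frame-accurate when re-encoding, so the saved frame should still match what the player showed.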

Reducing video size with same format and reducing frame size

白昼怎懂夜的黑 submitted on 2019-11-27 17:04:41
This question might be very basic. Is there a way to reduce the frame size/rate of a lossy compressed format (WMV, MPEG) to get a smaller video of lesser size, in the same format? Are there any open source or proprietary APIs for this?

Jason B: ffmpeg provides this functionality. All you need to do is run something like

ffmpeg -i <inputfilename> -s 640x480 -b 512k -vcodec mpeg1video -acodec copy <outputfilename>

For newer versions of ffmpeg you need to change -b to -b:v:

ffmpeg -i <inputfilename> -s 640x480 -b:v 512k -vcodec mpeg1video -acodec copy <outputfilename>

to convert the input video file
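The same recipe wrapped for scripting, as a sketch in Python (assumptions: ffmpeg is on the PATH, and the 640x480 / 512k values are simply the ones from the answer above). The scale filter plus -b:v matches current ffmpeg syntax; the audio stream is copied unchanged:

```python
import subprocess

def shrink_video(infile: str, outfile: str, width: int = 640, height: int = 480,
                 video_bitrate: str = "512k") -> None:
    """Re-encode the video at a smaller frame size and lower bitrate; copy the audio."""
    subprocess.check_call([
        "ffmpeg", "-y", "-i", infile,
        "-vf", f"scale={width}:{height}",
        "-b:v", video_bitrate,
        "-c:a", "copy",
        outfile,
    ])
```

Leaving out -vcodec lets ffmpeg pick the default encoder for the output container, which is usually what "same format" amounts to in practice.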

Video processing with OpenCV in iOS Swift project

笑着哭i submitted on 2019-11-27 13:48:55
I've integrated OpenCV in a Swift iOS project using a bridging header (to connect Swift to Objective-C) and an Objective-C wrapper (to connect Objective-C to C++). Using this method I can pass single images from the Swift code, analyse them in the C++ files and get them back. I've seen that OpenCV provides a CvVideoCamera object that can be integrated with an Objective-C UIViewController. But since my UIViewControllers are written in Swift, I wondered whether this is possible as well? Anatoli P: This is an update to my initial answer after I had a chance to play with this myself. Yes, it is possible to

Create Video File using PHP

余生颓废 submitted on 2019-11-27 11:48:49
Question: I have a scenario for creating video files using different assets, such as images and audio files. What I want to do is find audio files in a particular folder and set them as background music, and fetch images from a particular folder and show those images one by one. So basically I have images and audio files, and I want to create a video file from those assets using PHP. Can anyone suggest a starting point for this? I have done image capture from video and video conversion using ffmpeg, so I
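As a starting point, the whole job can be done with a single ffmpeg invocation that reads a numbered image sequence and one audio file. A sketch under assumptions (ffmpeg on the PATH; images named img_001.jpg, img_002.jpg, ...; paths and durations are placeholders); the same command can be launched from PHP with exec():

```python
import subprocess

def make_slideshow(image_pattern: str, audio_file: str, outfile: str,
                   seconds_per_image: float = 3.0) -> None:
    """Build a video that shows each image for a few seconds over a background audio track."""
    subprocess.check_call([
        "ffmpeg", "-y",
        "-framerate", f"{1.0 / seconds_per_image:.6f}",  # e.g. 0.333333 = one image every 3 s
        "-i", image_pattern,                             # e.g. "images/img_%03d.jpg"
        "-i", audio_file,                                # e.g. "audio/background.mp3"
        "-c:v", "libx264",
        "-pix_fmt", "yuv420p",                           # broad player compatibility
        "-shortest",                                     # stop when the shorter input ends
        outfile,
    ])

# make_slideshow("images/img_%03d.jpg", "audio/background.mp3", "slideshow.mp4")
```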

Superimposing two videos onto a static image?

一世执手 submitted on 2019-11-27 10:53:39
I have two videos that I'd like to combine into a single video, in which both videos would sit on top of a static background image. (Think something like this.) My requirements are that the software I use is free, that it runs on OS X, and that I don't have to re-encode my videos an excessive number of times. I'd also like to be able to perform this operation from the command line or via a script, since I'll be doing it a lot. (But this isn't strictly necessary.) I tried fiddling with ffmpeg for a couple of hours, but it just doesn't seem very well suited for post-processing. I could potentially
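For what it's worth, ffmpeg's overlay filter can do this in a single encode. A minimal sketch (assumptions: background.png is the static image, the two videos are a.mp4 and b.mp4, and the x/y offsets are placeholders to be adjusted to the layout):

```python
import subprocess

def compose_on_background(background: str, video_a: str, video_b: str, outfile: str) -> None:
    """Place two videos at fixed positions on top of a static background image."""
    filter_graph = (
        "[0:v][1:v] overlay=x=50:y=100 [tmp]; "   # first video onto the background
        "[tmp][2:v] overlay=x=700:y=100 [out]"    # second video onto the intermediate result
    )
    subprocess.check_call([
        "ffmpeg", "-y",
        "-loop", "1", "-i", background,   # loop the still image as a video stream
        "-i", video_a,
        "-i", video_b,
        "-filter_complex", filter_graph,
        "-map", "[out]",
        "-map", "1:a?",                   # keep the first video's audio if it has any
        "-shortest",
        outfile,
    ])
```

This is one encode total, so it also satisfies the "no excessive re-encoding" requirement.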

MOV to MP4 video conversion on iPhone programmatically

旧城冷巷雨未停 submitted on 2019-11-27 06:41:57
I am developing a media server for the PlayStation 3 on iPhone. I came to know that the PS3 doesn't support .MOV files, so I have to convert them to MP4 or some other format the PS3 supports. This is what I have done, but it crashes if I set a different file type than the source file's:

AVURLAsset *avAsset = [AVURLAsset URLAssetWithURL:videoURL options:nil];
NSArray *compatiblePresets = [AVAssetExportSession exportPresetsCompatibleWithAsset:avAsset];
if ([compatiblePresets containsObject:AVAssetExportPresetLowQuality]) {
    AVAssetExportSession *exportSession = [[AVAssetExportSession alloc

Can someone please explain this Fragment Shader? It is a Chroma Key Filter (Green screen effect)

徘徊边缘 submitted on 2019-11-27 06:09:07
Question: I'm trying to understand how this chroma key filter works. Chroma key, if you don't know, is a green-screen effect. Would someone be able to explain how some of these functions work and what exactly they are doing?

float maskY = 0.2989 * colorToReplace.r + 0.5866 * colorToReplace.g + 0.1145 * colorToReplace.b;
float maskCr = 0.7132 * (colorToReplace.r - maskY);
float maskCb = 0.5647 * (colorToReplace.b - maskY);
float Y = 0.2989 * textureColor.r + 0.5866 * textureColor.g + 0.1145 *
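The lines shown convert both the key colour (colorToReplace) and the current texel (textureColor) from RGB to Y/Cr/Cb: the 0.2989/0.5866/0.1145 weights compute luma, and Cr/Cb are the red and blue colour-difference channels. Keying then compares only Cr/Cb, so brightness variations across the green screen don't break the mask. A rough restatement of that math in Python (the threshold and smoothing values are illustrative, not taken from the shader):

```python
def rgb_to_ycbcr(r: float, g: float, b: float) -> tuple:
    """Same weights as the shader: luma plus red/blue colour-difference channels."""
    y = 0.2989 * r + 0.5866 * g + 0.1145 * b
    cr = 0.7132 * (r - y)
    cb = 0.5647 * (b - y)
    return y, cr, cb

def chroma_alpha(pixel_rgb, key_rgb, threshold=0.1, smoothing=0.05) -> float:
    """0.0 = fully keyed out (matches the key colour), 1.0 = fully opaque."""
    _, cr, cb = rgb_to_ycbcr(*pixel_rgb)
    _, key_cr, key_cb = rgb_to_ycbcr(*key_rgb)
    # Distance in the Cr/Cb plane only; luma is deliberately ignored.
    distance = ((cr - key_cr) ** 2 + (cb - key_cb) ** 2) ** 0.5
    # Soft edge: transparent below threshold, opaque above threshold + smoothing.
    t = (distance - threshold) / smoothing
    return max(0.0, min(1.0, t))
```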

How can HTML5 video's byte-range requests (pseudo-streaming) work?

无人久伴 submitted on 2019-11-27 05:48:00
Question: If you play an HTML5 video that is hosted on a server that accepts range requests, then when you try to seek ahead to a non-buffered part of the video you'll notice from the network traffic that the browser makes a byte-range request. I'm assuming that the browser computes the byte offset by knowing the total video size ahead of time and assuming a constant bitrate (if you click half-way along the progress bar, it requests the byte at the half-way point). But especially if the
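To make the guess in the question concrete, here is a sketch of the naive constant-bitrate seek it describes: learn the total size, then request bytes starting at the proportional offset with an HTTP Range header (the URL and read size are placeholders). Browsers actually do better than this for MP4 by reading the index in the moov atom, which maps timestamps to exact byte offsets.

```python
import urllib.request

def fetch_from_position(url: str, position: float) -> bytes:
    """Naive pseudo-streaming seek: request bytes starting at position * total_size."""
    # Find the total size without downloading the body.
    head = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(head) as resp:
        total = int(resp.headers["Content-Length"])

    start = int(total * position)
    req = urllib.request.Request(url, headers={"Range": f"bytes={start}-"})
    with urllib.request.urlopen(req) as resp:
        # A server that supports range requests answers "206 Partial Content".
        assert resp.status == 206, "server did not honour the Range header"
        return resp.read(64 * 1024)   # grab just the first 64 KiB from that offset
```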