video-processing

How to get Bytes from CMSampleBufferRef, To Send Over Network

Submitted by 那年仲夏 on 2019-12-31 07:56:52
Question: I am capturing video using the AVFoundation framework, with the help of the Apple documentation http://developer.apple.com/library/ios/#documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/03_MediaCapture.html%23//apple_ref/doc/uid/TP40010188-CH5-SW2 So far I have done the following: 1. Created a videoCaptureDevice. 2. Created an AVCaptureDeviceInput and set videoCaptureDevice. 3. Created an AVCaptureVideoDataOutput and implemented its delegate. 4. Created an AVCaptureSession - set input as AVCaptureDeviceInput and set …

Extract all video frames from mp4 video using OpenCV and C++

Submitted by ⅰ亾dé卋堺 on 2019-12-31 00:45:08
Question: I'm following a tutorial to extract video frames. I've read this question, but it doesn't work; I also tried a question from OpenCV Answers, but that solution only captures the current frame. I have a 120 fps video and want to extract all of its frames. Here's my code:

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
#include <string>
#include <sstream>

using namespace cv;
using namespace std;

int c = 0;
string int2str(int &);

int main …
```
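The excerpt above is truncated, but the core pattern for grabbing every frame is the same in any OpenCV binding: open the file with VideoCapture and loop until read fails. A minimal Python sketch of that loop (the input file name and output folder are assumptions for illustration, not from the question):

```python
import cv2
import os

video_path = "video.mp4"   # assumed input file
out_dir = "frames"         # assumed output folder
os.makedirs(out_dir, exist_ok=True)

cap = cv2.VideoCapture(video_path)
count = 0
while True:
    ok, frame = cap.read()  # returns False once the stream is exhausted
    if not ok:
        break
    cv2.imwrite(os.path.join(out_dir, f"frame_{count:06d}.jpg"), frame)
    count += 1

cap.release()
print(f"Extracted {count} frames")
```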

Overlaying video with ffmpeg

Submitted by 我们两清 on 2019-12-31 00:35:07
Question: I'm attempting to write a script that will merge 2 separate video files into 1 wider one, in which both videos play back simultaneously. I have it mostly figured out, but when I view the final output, the video that I'm overlaying is extremely slow. Here's what I'm doing. Expand the left video to the final video dimensions:

```
ffmpeg -i left.avi -vf "pad=640:240:0:0:black" left_wide.avi
```

Overlay the right video on top of the left one:

```
ffmpeg -i left_wide.avi -vf "movie=right.avi [mv]; [in][mv] …
```
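One alternative to the two-pass pad + movie-filter approach is to combine both inputs in a single ffmpeg run with -filter_complex, which avoids the movie= source filter entirely. A hedged Python sketch that shells out to ffmpeg and joins the two clips side by side with hstack (the file names and the assumption that both clips are 320x240, giving a 640x240 result as in the question's pad command, are illustrative; ffmpeg must be on PATH):

```python
import subprocess

# Assumed inputs: two 320x240 clips that should end up side by side (640x240).
cmd = [
    "ffmpeg", "-y",
    "-i", "left.avi",
    "-i", "right.avi",
    # hstack joins the two inputs horizontally in a single pass.
    "-filter_complex", "[0:v][1:v]hstack=inputs=2[v]",
    "-map", "[v]",
    "combined.avi",
]
subprocess.run(cmd, check=True)
```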

Add text with FFMpeg drawtext at specific time

Submitted by 帅比萌擦擦* on 2019-12-30 13:31:46
Question: I'm adding text to an animated GIF. I would like the text to appear only at a specific time, though, and I'm unable to do that. This is what I have:

```
ffmpeg -i image.gif -vf 'drawtext=textfile=/path/to/text.txt:x=0:y=0:fontfile=/path/to/font.ttf:fontsize=64:fontcolor=white:borderw=3:bordercolor=black:box=0'
```

I tried different approaches, but nothing seems to work. I can manipulate timing for the video using things like -itsoffset 00:00:30, but not the text.

Answer 1: You have to use timeline editing.
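Timeline editing means adding an enable= expression to drawtext, so the filter is only applied while the expression is true. A hedged Python sketch of the command (the 5-10 second window and the output file name are assumptions for illustration; the remaining drawtext options are copied from the question):

```python
import subprocess

# Show the text only between t=5s and t=10s via drawtext's timeline option.
drawtext = (
    "drawtext=textfile=/path/to/text.txt:x=0:y=0:"
    "fontfile=/path/to/font.ttf:fontsize=64:fontcolor=white:"
    "borderw=3:bordercolor=black:box=0:"
    "enable='between(t,5,10)'"   # timeline editing: t is the frame timestamp in seconds
)
subprocess.run(
    ["ffmpeg", "-y", "-i", "image.gif", "-vf", drawtext, "out.gif"],
    check=True,
)
```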

How to find object on video using OpenCV

Submitted by 半城伤御伤魂 on 2019-12-30 06:16:11
Question: To track an object across video frames, I first extract image frames from the video and save those images to a folder. Then I am supposed to process those images to find the object. I don't actually know whether this is a practical approach, because all the algorithms I've seen do this in one step. Is this correct?

Answer 1: Well, your approach will consume a lot of space on your disk, depending on the size of the video and the size of the frames, plus you will spend a considerable amount of time reading frames from the …
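As the answer points out, writing every frame to disk is usually unnecessary: you can read frames straight from VideoCapture and run the detection on each one in memory. A minimal Python sketch using template matching as a stand-in detector (the file names, the template image, and the matching threshold are assumptions, not part of the question):

```python
import cv2

cap = cv2.VideoCapture("input.mp4")                          # assumed video file
template = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)    # assumed object template

while True:
    ok, frame = cap.read()   # frames stay in memory, nothing is written to disk
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    result = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val > 0.8:        # assumed confidence threshold
        x, y = max_loc
        print(f"object found at ({x}, {y}) with score {max_val:.2f}")

cap.release()
```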

FFmpeg - Overlay one video onto another video?

Submitted by 北城以北 on 2019-12-29 01:34:28
Question: I understand that this is a very open-ended question. I have done some initial reading into FFmpeg, but now require some guidance. Problem: I have a video, input.mov. I would like to overlay another video, overlay.mov, on top of it. The result should be a single video (output.mov). Notes: Done some initial reading into FFmpeg and read this question. Thanks - C. Edits: The backend is Go/Ruby; open to using a new language. The audio from the first video should be kept. Setting the interval at which the …
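One common way to do this picture-in-picture style composite is ffmpeg's overlay filter with -filter_complex, mapping the composited video from the filter graph and the audio from the first input. A hedged Python sketch (the top-left position, the stream copy of the first input's audio, and the assumption that overlay.mov is smaller than input.mov are illustrative choices, not from the question):

```python
import subprocess

cmd = [
    "ffmpeg", "-y",
    "-i", "input.mov",     # base video (its audio is kept)
    "-i", "overlay.mov",   # video drawn on top
    # Place the second video at x=10, y=10 over the first one.
    "-filter_complex", "[0:v][1:v]overlay=10:10[v]",
    "-map", "[v]",         # composited video from the filter graph
    "-map", "0:a?",        # audio from the first input, if present
    "-c:a", "copy",        # pass the original audio through untouched
    "output.mov",
]
subprocess.run(cmd, check=True)
```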

Libraries / tutorials for manipulating video in java [closed]

Submitted by 夙愿已清 on 2019-12-28 13:49:10
Question: As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance. Closed 7 years ago. I need to do some simple video editing within a Java application, but the frameworks I've found (JMF and FMJ) appear to be quite stale …

How can I mux a MKV and MKA file and get it to play in a browser?

Submitted by 只谈情不闲聊 on 2019-12-25 09:34:44
Question: I'm using ffmpeg to merge .mkv and .mka files into .mp4 files. My current command looks like this:

```
ffmpeg -i video.mkv -i audio.mka output_path.mp4
```

The audio and video files are pre-signed URLs from Amazon S3. Even on a server with sufficient resources, this process is going very slowly. I've researched situations where you can tell ffmpeg to skip re-encoding each frame, but I think that in my situation it actually does need to re-encode each frame. I've downloaded 2 sample files to my …
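Whether re-encoding is really needed depends on the codecs inside the MKV/MKA: if they are already MP4-compatible (for example H.264 video and AAC audio), the streams can be copied into the MP4 container without touching a single frame, which is dramatically faster. A hedged Python sketch of that stream-copy remux (the codec compatibility and file names are assumptions; if a codec is not allowed in MP4, that stream would need a real encode instead of copy):

```python
import subprocess

# Remux without re-encoding: only valid if the codecs are allowed in MP4
# (e.g. H.264/H.265 video, AAC audio).
cmd = [
    "ffmpeg", "-y",
    "-i", "video.mkv",
    "-i", "audio.mka",
    "-map", "0:v:0",   # video from the first input
    "-map", "1:a:0",   # audio from the second input
    "-c", "copy",      # copy both streams, no decode/encode
    "output.mp4",
]
subprocess.run(cmd, check=True)
```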

How can I stream mjpeg file as rtsp

Submitted by て烟熏妆下的殇ゞ on 2019-12-25 07:24:10
Question: We have an MJPEG video, obtained from a webcam and stored in an *.avi file, still encoded as MJPEG. We need to restream this file as RTSP (and still preserve the MJPEG there, i.e. no decoding). The goal is to emulate the webcam this video was obtained from, for the software that processes the video. The file can be opened with vlc/ffplay with no problems. ffmpeg behaves as if it is streaming it; however, ffplay/vlc can't open this stream. We tried to stream it with gstreamer. 1) we found no …
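ffmpeg's RTSP output acts as a client that publishes to an RTSP server, so a common setup is to run a separate RTSP server (for example MediaMTX, formerly rtsp-simple-server) and push the file to it with stream copy, so the MJPEG frames are never decoded. A hedged Python sketch of the publishing side (the file name, server address, port, and path are assumptions, and a listening RTSP server is assumed to already be running):

```python
import subprocess

# Publish the MJPEG .avi to an already-running RTSP server, copying the
# video codec so no decoding or re-encoding happens.
cmd = [
    "ffmpeg",
    "-re",                    # read at native frame rate (emulates a live camera)
    "-stream_loop", "-1",     # loop forever, like a webcam that never stops
    "-i", "recording.avi",    # assumed MJPEG file name
    "-c:v", "copy",           # keep the MJPEG bitstream untouched
    "-f", "rtsp",
    "rtsp://127.0.0.1:8554/webcam",   # assumed server address and path
]
subprocess.run(cmd, check=True)
```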

Converting sequenced frames to video

Submitted by 我的未来我决定 on 2019-12-25 06:29:28
Question: I'm trying to convert images to a video, but the frames weren't in the right sequence, so I'm using glob to organize them. After this I was getting errors, and then I reduced my code to this:

```python
import re
import glob
import cv2

numbers = re.compile(r'(\d+)')

def numericalSort(value):
    parts = numbers.split(value)
    parts[1::2] = map(int, parts[1::2])
    return parts

for infile in sorted(glob.glob('*.jpg'), key=numericalSort):
    img1 = cv2.imread(infile)
    cv2.imshow('image', img1)
    exit()
```

And for some reason it isn …
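Once the numeric sort returns the frames in the right order, the actual writing is usually done with cv2.VideoWriter. A hedged Python sketch building on the question's numerical sort (the output name, 30 fps, and the mp4v codec are assumptions for illustration):

```python
import re
import glob
import cv2

numbers = re.compile(r'(\d+)')

def numerical_sort(value):
    # split "frame12.jpg" into ['frame', 12, '.jpg'] so sorting is numeric, not lexical
    parts = numbers.split(value)
    parts[1::2] = map(int, parts[1::2])
    return parts

files = sorted(glob.glob('*.jpg'), key=numerical_sort)

# Use the first frame to fix the output dimensions.
first = cv2.imread(files[0])
height, width = first.shape[:2]

fourcc = cv2.VideoWriter_fourcc(*'mp4v')                                # assumed codec
writer = cv2.VideoWriter('output.mp4', fourcc, 30.0, (width, height))  # assumed 30 fps

for infile in files:
    writer.write(cv2.imread(infile))

writer.release()
```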