video-processing

Accessing & Manipulating video frames from .mp4 file in Windows Phone 7 app

荒凉一梦 submitted on 2019-12-10 17:22:36
问题 (Question): As you may know, when you record a video on a Windows Phone it is saved as a .mp4. I want to be able to access the video file (even if it is only stored in the app's isolated storage) and manipulate the pixel values of each frame. I can't find anything that lets me load a .mp4 into an app and then access its frames. I also want to be able to save the manipulated video as a .mp4 file, or to share it. Has anyone figured out a good set of steps to do this? My guess was to first…

Is number and text detection possible with OpenCV on Android while capturing video?

旧城冷巷雨未停 submitted on 2019-12-10 17:17:15
问题 (Question): Is it possible to detect numbers or text while capturing video on Android, using OpenCV or any other good image & video processing APIs?
回答1 (Answer 1): You will need a capable OCR engine, which will try to detect characters by any number of means. Tesseract is a good open-source engine that will try to 'parse' your image for characters by masking. However, there are several steps or approaches you need to take before you feed your OCR engine (Tesseract) your image. In order to ensure more accurate results…
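The preprocessing the answer alludes to usually starts with binarization. The sketch below is illustrative only (a fixed global threshold; real pipelines typically use Otsu or adaptive thresholding via OpenCV before handing the result to Tesseract):

```c
#include <stdint.h>

/* Simple global binarization of an 8-bit grayscale buffer: pixels darker
 * than the threshold become ink (0), the rest background (255). A typical
 * first step before feeding an image to an OCR engine such as Tesseract. */
static void binarize(const uint8_t *gray, uint8_t *out, int n, uint8_t thresh) {
    for (int i = 0; i < n; i++)
        out[i] = gray[i] < thresh ? 0 : 255;
}
```

In practice you would also deskew, denoise, and scale the text region before OCR; the threshold value here is an assumption, not a recommendation.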

Making video load faster in VideoView

三世轮回 submitted on 2019-12-10 16:19:05
问题 (Question): I play a video in a VideoView from a URL. Everything works fine and the video plays, but the only problem is that the video takes almost 10 seconds to start playing, which can be annoying for the user. I have tried different URLs and it's the same; the videos are 360p and 6 seconds long. Is it the default media player that is slow? I have searched Stack Overflow but could not find a suitable answer, and even searched for various third-party video libraries but could not find one. Even tried…

Post processing in ffmpeg to move 'moov atom' in MP4 files (qt-faststart)

两盒软妹~` submitted on 2019-12-10 16:15:01
问题 (Question): Is it possible to run ffmpeg from the command line so that it either places the 'moov atom' metadata at the beginning of the MP4 file, or runs qt-faststart as a post-processing operation, so that the generated file is streamable over the internet? I can of course run it as a separate command, but I would prefer it to be an option within ffmpeg, or part of a post-conversion command-line option when converting the video files via ffmpeg. Edit 1: http://ffmpeg.org/ffmpeg …

Drawtext, drawbox or overlay on only a single frame using FFmpeg

喜欢而已 submitted on 2019-12-10 13:57:45
问题 (Question): I'm using the drawtext and drawbox avfilters in FFmpeg, two of the most poorly documented functions known to man. I'm struggling to work out if and how I can use them on only a single frame, i.e., drawtext on frame 22. Current command: ffmpeg -i test.wmv -y -b 800k -f flv -vcodec libx264 -vpre default -s 768x432 -g 250 -vf drawtext="fontfile=/home/Cyberbit.ttf:fontsize=24:text=testical:fontcolor=green:x=100:y=200" -qscale 8 -acodec libfaac -sn -vstats out.flv Two elements mentioned in the…

PHP Video Editing and Streaming

柔情痞子 submitted on 2019-12-10 12:06:25
问题 (Question): I am developing an online video-streaming website in PHP. I need two functionalities: add a title/text at the bottom of the video dynamically, and add background music to the video dynamically. Is this possible with PHP or any available open-source library? Can anyone guide me or provide links to this type of library? Thanks.
回答1 (Answer 1): Editing video with PHP is an extremely bad idea; it very closely approximates impossible. At best you would need to decode the video, which would be brutally…

Keep alpha-transparency of a video through HDMI

我的梦境 submitted on 2019-12-10 11:27:19
问题 (Question): The scenario I'm dealing with is as follows: I need to take the screen generated by OpenGL and send it through HDMI to an FPGA component while keeping the alpha channel. But right now the data being sent through HDMI is only RGB (24-bit, without the alpha channel), so I need a way to force the alpha bits through this port somehow. See image: http://i.imgur.com/hhlcbb9.jpg One solution I could think of is to convert the screen buffer from RGBA to RGB while mixing the alpha…
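One reading of "converting RGBA to RGB while mixing the alpha" is premultiplication: scale each color channel by its alpha before dropping the fourth byte. The sketch below illustrates that per-pixel step only; it is not a known HDMI/FPGA encoding scheme, and the rounding convention is an assumption:

```c
#include <stdint.h>

/* Convert one RGBA pixel to 24-bit RGB by premultiplying each color
 * channel by alpha, i.e. "mixing" the alpha into the RGB values.
 * Note this loses the ability to recover alpha on the receiving end. */
static void rgba_to_rgb_premul(const uint8_t rgba[4], uint8_t rgb[3]) {
    for (int c = 0; c < 3; c++)
        rgb[c] = (uint8_t)((rgba[c] * rgba[3] + 127) / 255);
}
```

If the FPGA must recover the original alpha, premultiplication is insufficient; the alpha byte would have to be smuggled through a spare channel or a side protocol instead.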

Smoothing motion parameters

末鹿安然 submitted on 2019-12-10 10:51:02
问题 (Question): I have been working on video stabilization for quite a few weeks now. The algorithm I'm following basically involves three steps:
1. FAST feature detection and matching
2. Calculating an affine transformation (scale + rotation + translation x + translation y) from the matched keypoints
3. Smoothing the motion parameters using a cubic spline or B-spline
I have been able to calculate the affine transform, but I am stuck at smoothing the motion parameters. I have been unable to evaluate a spline function to smooth the…
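For step 3, a much simpler alternative to spline fitting, commonly used in stabilization pipelines, is to smooth each motion parameter independently with a centered moving average over a sliding window. The sketch below is not from the post; the window radius and function name are assumptions:

```c
/* Smooth one motion-parameter sequence (e.g. per-frame x-translations)
 * with a centered moving average of radius r. Near the edges the window
 * shrinks so every output is an average of valid samples only. */
static void smooth_params(const double *in, double *out, int n, int r) {
    for (int i = 0; i < n; i++) {
        int lo = i - r < 0 ? 0 : i - r;
        int hi = i + r >= n ? n - 1 : i + r;
        double sum = 0.0;
        for (int j = lo; j <= hi; j++)
            sum += in[j];
        out[i] = sum / (hi - lo + 1);
    }
}
```

The smoothed trajectory is then subtracted from the raw one to obtain the per-frame correction transform; a spline or Gaussian kernel can be swapped in later without changing the surrounding pipeline.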

Deinterlacing in ffmpeg

泄露秘密 submitted on 2019-12-10 10:31:28
问题 (Question): I've followed the tutorial here to load video files into a C program, but the frames aren't deinterlaced. From what I've seen, the ffmpeg executable supports a -deinterlace switch. How do I do this in code? What library/functions should I read about?
回答1 (Answer 1): You have to manually call avpicture_deinterlace to deinterlace each decoded frame. The function definition can be found here. It will basically look like this (using the variables from the first page of the tutorial): avcodec_decode_video…
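Independent of the avpicture_deinterlace call, the effect itself is easy to picture: a simple "linear blend" deinterlacer averages each line with its neighbors, suppressing the comb artifacts between the two fields. The sketch below is a toy illustration of that idea on a single luma plane, not FFmpeg's implementation:

```c
#include <stdint.h>
#include <string.h>

/* Toy linear-blend deinterlace on one 8-bit luma plane (w x h bytes):
 * each interior line becomes a weighted average of itself and the lines
 * above and below it, blending the two interlaced fields together. */
static void deinterlace_blend(const uint8_t *src, uint8_t *dst, int w, int h) {
    memcpy(dst, src, w);                              /* first line as-is */
    memcpy(dst + (h - 1) * w, src + (h - 1) * w, w);  /* last line as-is  */
    for (int y = 1; y < h - 1; y++)
        for (int x = 0; x < w; x++)
            dst[y * w + x] = (uint8_t)((src[(y - 1) * w + x] +
                                        2 * src[y * w + x] +
                                        src[(y + 1) * w + x] + 2) / 4);
}
```

Real deinterlacers (yadif, for example) are field-aware and motion-adaptive; this blend is only the simplest member of the family.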

ffmpeg: make a copy of a decoded frame (AVFrame)

家住魔仙堡 submitted on 2019-12-10 10:28:33
问题 (Question): I want to make a backup frame (AVFrame) from a particular frame (let's say pic). So I have written:
AVFrame* bkf = avcodec_alloc_frame();
memcpy(bkf, pic, sizeof(AVFrame));
bkf->extended_data = pic->extended_data;
bkf->linesize[0] = pic->linesize[0];
memcpy(bkf->data, pic->data, sizeof(pic->data));
bkf->reordered_opaque = pic->reordered_opaque;
bkf->sample_rate = pic->sample_rate;
bkf->channel_layout = pic->channel_layout;
bkf->pkt_pts = pic->pkt_pts;
bkf->pkt_pos = pic->pkt_pos;
bkf->width = pic-…
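The pitfall in the snippet above is that memcpy and the data-pointer assignments copy only the pointers, so the "backup" still aliases pic's pixel buffers; once pic is reused by the decoder, bkf is invalid. A deep copy must allocate fresh buffers and copy the pixel data (modern FFmpeg provides av_frame_clone for this). The toy Frame struct below is a hypothetical stand-in used only to demonstrate the aliasing problem, not FFmpeg's actual type:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for AVFrame: one pointer member plus a size. */
typedef struct {
    unsigned char *data;  /* pixel buffer */
    int size;
} Frame;

/* memcpy(dst, src, sizeof *src) would copy only the pointer, leaving both
 * frames sharing one buffer. A deep copy gives dst its own buffer. */
static Frame *frame_clone(const Frame *src) {
    Frame *dst = malloc(sizeof *dst);
    if (!dst) return NULL;
    *dst = *src;                    /* shallow copy of the scalar fields */
    dst->data = malloc(src->size);  /* then allocate a private buffer    */
    if (!dst->data) { free(dst); return NULL; }
    memcpy(dst->data, src->data, src->size);
    return dst;
}
```

With the real API of that era, the equivalent was allocating a second frame's picture buffers and copying the planes into them; newer FFmpeg releases wrap this up in av_frame_clone.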