video-processing

Cut videos from Azure Blob Storage

Submitted by 落花浮王杯 on 2019-12-06 12:28:36
I have a web app hosted in Azure; one of its features is making a few cuts from a video (generating 2 or 3 small videos of 5-10 seconds from a larger video). The videos are persisted in Azure Blob Storage. How do you suggest accomplishing this in the Azure environment? The actual cutting of the videos will be initiated by a web job. I'm also concerned about pricing (within the Azure environment), taking into account the possibility of high traffic. Any feedback is appreciated. Thank you. Assuming you have video-cutting code that operates on files through
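The answer above is truncated, but one minimal sketch of the approach it begins to describe: the web job downloads the blob to a temporary file, invokes ffmpeg once per cut, and uploads the clips back. The helper below only builds the ffmpeg command line; having ffmpeg available on the web job host is an assumption, and the blob download/upload (via the azure-storage SDK) is left as comments.

```python
import subprocess  # used to invoke ffmpeg on the web job host (assumed installed)

def build_cut_command(src, start_seconds, duration_seconds, dest):
    """Build an ffmpeg command that cuts a short clip via stream copy
    (no re-encoding, so it is fast and cheap on compute)."""
    return [
        "ffmpeg", "-y",
        "-ss", str(start_seconds),      # seek to the clip start
        "-i", src,                      # local temp file downloaded from Blob Storage
        "-t", str(duration_seconds),    # clip length in seconds
        "-c", "copy",                   # copy streams instead of re-encoding
        dest,                           # local temp file to upload back afterwards
    ]

cmd = build_cut_command("large.mp4", 30, 10, "clip1.mp4")
print(" ".join(cmd))
# In the web job you would then run e.g.: subprocess.run(cmd, check=True)
```

Stream copy keeps CPU cost (and therefore compute pricing) low under high traffic, at the cost of cut points snapping to the nearest keyframe.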

ffmpeg extract elementary streams from mp4

Submitted by 不打扰是莪最后的温柔 on 2019-12-06 12:05:38
Question: I have successfully ported the ffmpeg library to Android using Bambuser's ffmpeg port. I'm currently investigating ffmpeg's source code, especially the ffplay.c and api-examples.c files. I want to extract elementary streams from Android 2.2 recorded videos. For example, I can record an H.263-encoded video in an MPEG-4 container; let's say, a test.mp4 file. What I want to achieve is to extract the H.263 elementary video stream from the test.mp4 file into something like test.h263. It can be extracted
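For reference, this kind of extraction can usually be done with an ffmpeg command line alone, without modifying ffplay.c. The sketch below builds such a command; the raw `h263` muxer name is an assumption about what the particular ffmpeg build includes.

```python
def build_extract_command(src, dest):
    """Build an ffmpeg command that copies the H.263 video stream out of an
    MP4 container into a raw elementary stream, without re-encoding."""
    return [
        "ffmpeg",
        "-i", src,            # e.g. test.mp4
        "-an",                # drop audio
        "-vcodec", "copy",    # copy the elementary stream as-is
        "-f", "h263",         # raw H.263 muxer (assumed present in the build)
        dest,                 # e.g. test.h263
    ]

print(" ".join(build_extract_command("test.mp4", "test.h263")))
```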

How to reduce the size of video in Android? [closed]

Submitted by 时光总嘲笑我的痴心妄想 on 2019-12-06 11:32:21
Question: Closed. This question is opinion-based. It is not currently accepting answers. Want to improve this question? Update the question so it can be answered with facts and citations by editing this post. Closed 4 years ago. I am new to Android. I don't know how to compress video on Android. Please suggest me some options. Answer 1: You can try Intel INDE on https://software.intel.com/en-us/intel-inde and the Media Pack for Android, which is a part of INDE; tutorials are on https://software.intel.com/en
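The answer above is truncated; as a library-neutral alternative, re-encoding with ffmpeg's libx264 encoder at a constant rate factor is a common way to shrink a video wherever an ffmpeg build is available (on Android that would mean bundling one, as in the previous question). The CRF and audio-bitrate values below are illustrative assumptions.

```python
def build_compress_command(src, dest, crf=28):
    """Build an ffmpeg command that re-encodes a video with H.264 at a
    constant rate factor, trading quality for a smaller file."""
    return [
        "ffmpeg", "-y",
        "-i", src,
        "-vcodec", "libx264",
        "-crf", str(crf),        # ~18 (high quality) .. ~35 (small file)
        "-preset", "medium",     # encode-speed vs. compression trade-off
        "-acodec", "aac",
        "-b:a", "128k",          # modest audio bitrate
        dest,
    ]

print(" ".join(build_compress_command("in.mp4", "out.mp4")))
```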

Keep alpha-transparency of a video through HDMI

Submitted by 旧城冷巷雨未停 on 2019-12-06 10:32:39
The scenario I'm dealing with is as follows: I need to take the screen generated by OpenGL and send it through HDMI to an FPGA component while keeping the alpha channel. Right now the data being sent through HDMI is only RGB (24-bit, without the alpha channel), so I need a way to force the alpha bits through this port somehow. See image: http://i.imgur.com/hhlcbb9.jpg One solution I could think of is to convert the screen buffer from RGBA mode to RGB while mixing the alpha channels into the RGB buffer. For example: The original buffer: [R G B A][R G B A][R G B A] The
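The packing idea sketched at the end of the question could look like the following. This exact layout, where each group of three RGBA pixels is sent as four RGB triples and the fourth triple carries the three alpha values for the FPGA to unpack, is a hypothetical scheme for illustration, not anything HDMI defines.

```python
def pack_rgba_as_rgb(pixels):
    """Pack a list of (r, g, b, a) pixels into (r, g, b) triples:
    each group of 3 source pixels becomes 4 output triples, the last
    of which carries the 3 alpha values (zero-padded for short groups)."""
    out = []
    for i in range(0, len(pixels), 3):
        alphas = []
        for r, g, b, a in pixels[i:i + 3]:
            out.append((r, g, b))
            alphas.append(a)
        while len(alphas) < 3:      # pad the final partial group
            alphas.append(0)
        out.append(tuple(alphas))
    return out

packed = pack_rgba_as_rgb([(10, 20, 30, 255), (40, 50, 60, 128), (70, 80, 90, 0)])
print(packed)  # [(10, 20, 30), (40, 50, 60), (70, 80, 90), (255, 128, 0)]
```

The FPGA side would invert this mapping; the cost is a 4/3 increase in pixels per scanline (or link bandwidth).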

Motion Vector extraction from encoded video file

Submitted by 限于喜欢 on 2019-12-06 09:49:03
Question: I am trying to extract motion vector data from an encoded mp4 file. In a previous post I found an answer: http://www.princeton.edu/~jiasic/cos435/motion_vector.c . But I am not able to run the code without errors. What other files have to be included? I am a newbie here, so any help would be appreciated. Answer 1: I had modified the source code of mplayer (ffmpeg) to extract motion vectors for any compressed video; I have uploaded the modified mplayer code, which can be
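Besides patching the mplayer/ffmpeg sources, reasonably recent ffmpeg builds can export motion vectors during decoding and visualize them; the sketch below builds such a command, assuming your ffmpeg version supports the `+export_mvs` decoder flag and the `codecview` filter.

```python
def build_motion_vector_command(src, dest):
    """Build an ffmpeg command that decodes src, exports per-frame motion
    vectors, and draws them onto the output with the codecview filter."""
    return [
        "ffmpeg",
        "-flags2", "+export_mvs",            # ask the decoder to export MVs
        "-i", src,
        "-vf", "codecview=mv=pf+bf+bb",      # P-forward, B-forward, B-backward
        dest,
    ]

print(" ".join(build_motion_vector_command("in.mp4", "mv_overlay.mp4")))
```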

How to input video (frames) into a GLSL shader

Submitted by 人走茶凉 on 2019-12-06 07:38:21
I'm trying to do video processing using GLSL. I'm using OpenCV to open a video file and take each frame as a single image, and then I want to use each frame in a GLSL shader. What is the best/ideal/smart solution for using video with GLSL? Reading from video: VideoCapture cap("movie.MOV"); Mat image; bool success = cap.read(image); if(!success) { printf("Could not grab a frame\n\7"); exit(0); } Image to texture: GLuint tex; glGenTextures(1, &tex); glActiveTexture(GL_TEXTURE0); glBindTexture(GL_TEXTURE_2D, tex); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, image.cols, image.rows, 0, GL_BGR, GL_UNSIGNED
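One practical wrinkle in the snippet above: OpenCV hands back BGR data, so you either upload with GL_BGR (as shown) or swap channels first and upload as GL_RGB; for video you would also call glTexSubImage2D on subsequent frames rather than reallocating the texture each time. A minimal pure-Python sketch of the channel swap (real code would use cv2.cvtColor or numpy for speed):

```python
def bgr_to_rgb(rows):
    """Swap the channel order of a BGR image, given as a list of rows of
    (b, g, r) tuples, so it can be uploaded to OpenGL as GL_RGB."""
    return [[(r, g, b) for (b, g, r) in row] for row in rows]

frame = [[(255, 0, 0), (0, 255, 0)]]   # one row: pure blue, pure green (in BGR)
print(bgr_to_rgb(frame))  # [[(0, 0, 255), (0, 255, 0)]]
```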

Color thresholding on an opencv video

Submitted by 房东的猫 on 2019-12-06 06:17:43
I am thresholding for a color range in an OpenCV video. The goal is to separate the B-mode (black and white; information on location but not velocity) from the color-flow Doppler mode (velocity information) in medical ultrasound videos for an academic project. I have tried to threshold based on an HSV hue range that I rebuilt from the color scale delivered by the ultrasound machine (light blue [OpenCV hue 90] to yellow [OpenCV hue 35]). Unfortunately, the results are not good. Have I made a mistake in the thresholding? Or would there be another way to achieve the desired results? Below is my
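The question's code is cut off above, but the core test (what cv2.inRange evaluates per pixel on an HSV frame) reduces to a predicate like the one below. The saturation and value floors are illustrative assumptions; they matter here because the gray B-mode pixels have near-zero saturation and would otherwise slip into any hue range. Note that OpenCV hue runs 0-179, so the 35-90 band does not wrap around.

```python
def is_doppler_color(h, s, v, h_lo=35, h_hi=90, s_min=60, v_min=60):
    """Return True for HSV pixels in the yellow (hue ~35) to light-blue
    (hue ~90) band of the Doppler color scale, with enough saturation and
    brightness to exclude the gray B-mode background."""
    return h_lo <= h <= h_hi and s >= s_min and v >= v_min

print(is_doppler_color(60, 200, 200))   # True: saturated green, inside the band
print(is_doppler_color(60, 10, 200))    # False: too gray, likely B-mode
print(is_doppler_color(120, 200, 200))  # False: hue outside the band
```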

Android video remove chroma key background

Submitted by 白昼怎懂夜的黑 on 2019-12-06 05:57:06
I have checked this question; it is very similar. I want to record a video with the Android camera and then remove the background, which is a chroma key, using a library. At first I thought I should use the Android NDK in order to escape the SDK memory limitation and use the whole memory. The video is short, a few seconds, so maybe it can handle it. I would prefer an SDK implementation and setting android:largeHeap="true", to avoid mismatched .so file architectures. Any library suggestion for the SDK or NDK, please? IMO you should prefer an NDK-based solution, since video
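Whichever side (SDK or NDK) does the work, the per-pixel chroma-key test itself is simple; a minimal sketch, assuming a pure-green key color and a hand-picked distance tolerance:

```python
def apply_chroma_key(pixel, key=(0, 255, 0), tol=100):
    """Return the pixel as RGBA, with alpha 0 where the RGB value is
    close (Manhattan distance) to the key color."""
    r, g, b = pixel
    kr, kg, kb = key
    distance = abs(r - kr) + abs(g - kg) + abs(b - kb)
    alpha = 0 if distance < tol else 255
    return (r, g, b, alpha)

print(apply_chroma_key((12, 245, 20)))   # near-green -> (12, 245, 20, 0)
print(apply_chroma_key((200, 30, 40)))   # foreground -> (200, 30, 40, 255)
```

A real implementation would run this over each frame's pixel buffer (numpy or RenderScript on the SDK side, a tight C loop via the NDK) and soften the hard threshold to avoid fringing at the edges.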

Smoothing motion parameters

Submitted by 安稳与你 on 2019-12-06 05:29:22
I have been working on video stabilization for quite a few weeks now. The algorithm I'm following basically involves 3 steps: 1. FAST feature detection and matching; 2. calculating an affine transformation (scale + rotation + translation x + translation y) from matched keypoints; 3. smoothing the motion parameters using a cubic spline or B-spline. I have been able to calculate the affine transform, but I am stuck at smoothing the motion parameters: I have been unable to evaluate a spline function to smooth the three parameters. Here is a graph for the smoothed data points. Any suggestion or help as to how I can code to
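If evaluating a spline proves awkward, a centered moving average is a much simpler smoother that often works as a first pass on each of the parameter trajectories; to be clear, this is a deliberate substitute for the cubic/B-spline of step 3, not the same technique.

```python
def smooth(values, window=5):
    """Centered moving average; the window shrinks near the ends so the
    output has the same length as the input trajectory."""
    half = window // 2
    out = []
    for i in range(len(values)):
        segment = values[max(0, i - half):i + half + 1]
        out.append(sum(segment) / len(segment))
    return out

# Smooth a noisy translation-x trajectory; scale and rotation are handled the same way.
print(smooth([0.0, 4.0, 0.0, 4.0, 0.0], window=3))
```

The stabilizing warp for each frame is then the difference between the raw and smoothed parameters.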

How to get previous frame of a video in opencv python

Submitted by 只愿长相守 on 2019-12-06 04:24:34
Question: I want to detect obstacles in a video based on their increasing size. To do that, I first applied SIFT on the gray image to get feature points of the current frame. Next, to compare the feature points of the current frame with those of the previous frame, I want to apply a Brute-Force matcher. For that I need the feature points of the previous frame. How can I access the previous frame in OpenCV Python? And how can I avoid accessing the previous frame when the current frame is the first frame of the video? Below is the code
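The code at the end is cut off, but the usual pattern is simply to keep the last frame (or its keypoints) in a variable as you read; a minimal sketch of that bookkeeping, with a None sentinel for the first frame. With cv2.VideoCapture you would read frames in a while-loop instead of iterating a list, but the prev/curr handling is identical.

```python
def frame_pairs(frames):
    """Yield (previous, current) for each frame; previous is None on the
    first frame, so callers can skip the comparison step there."""
    previous = None
    for current in frames:
        yield previous, current
        previous = current

pairs = list(frame_pairs(["f0", "f1", "f2"]))
print(pairs)  # [(None, 'f0'), ('f0', 'f1'), ('f1', 'f2')]
```

In the OpenCV loop, compute SIFT keypoints for the current frame, match against the cached keypoints only when the previous entry is not None, then cache the current keypoints before reading the next frame.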