video-processing

Adding Image Overlay OpenCV for Android

微笑、不失礼 submitted on 2019-12-23 00:08:33
Question: I am looking for a way to overlay an image in OpenCV (2.4.3) for Android. Basically, I am doing some image filtering, and I want the user to see the cleanly filtered video stream, but also see, in the upper corner, a preview of what the filter is doing. I have tried setting an ROI on the filtered image and then using addWeighted in JNI code like this. In the Activity: liveFrame.copyTo(mRgba); // going to be used as the unfiltered stream // image filtering happens here... Rect roi = new Rect
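The same ROI-plus-addWeighted idea is easy to prototype outside Android first. Below is a minimal Python/OpenCV sketch of it; the file names, the quarter-size preview, and the 0.3/0.7 blend weights are illustrative assumptions, not the asker's actual values.

```python
# Minimal sketch: blend a small preview image into the corner of a filtered frame.
# File names and blend weights are made up for illustration.
import cv2

frame = cv2.imread("filtered_frame.png")    # the cleanly filtered full frame
preview = cv2.imread("debug_view.png")      # what the filter "sees" internally

# Shrink the preview and blend it into the top-left corner of the frame.
small = cv2.resize(preview, (frame.shape[1] // 4, frame.shape[0] // 4))
h, w = small.shape[:2]
roi = frame[0:h, 0:w]                       # view into the corner region
blended = cv2.addWeighted(roi, 0.3, small, 0.7, 0.0)
frame[0:h, 0:w] = blended                   # write the blend back into the frame

cv2.imwrite("with_overlay.png", frame)
```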

Color thresholding on an opencv video

≯℡__Kan透↙ submitted on 2019-12-22 17:49:48
Question: I am thresholding for a color range in an OpenCV video. The goal is to separate the B-mode (black and white, information on location but not velocity) from the color-flow Doppler mode (velocity information) in medical ultrasound videos for an academic project. I have tried to threshold based on an HSV hue range that I rebuilt from the color scale delivered by the ultrasound machine (light blue [OpenCV hue 90] to yellow [OpenCV hue 35]). Unfortunately, the results are not good. Have I made a
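For reference, a small sketch of that kind of hue-band threshold with cv2.inRange is shown below; the input file name and the saturation/value bounds (added to suppress the gray B-mode pixels) are assumptions, not values from the question.

```python
# Sketch: keep only pixels whose hue falls in the 35-90 band (OpenCV hue is 0-179).
# Input name and the saturation/value floors are illustrative assumptions.
import cv2
import numpy as np

cap = cv2.VideoCapture("ultrasound.avi")
ok, frame = cap.read()
if ok:
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    lower = np.array([35, 80, 80])     # yellow end of the scale, colored pixels only
    upper = np.array([90, 255, 255])   # light-blue end of the scale
    mask = cv2.inRange(hsv, lower, upper)          # 255 where the Doppler colors are
    doppler_only = cv2.bitwise_and(frame, frame, mask=mask)
    cv2.imwrite("doppler_mask.png", mask)
cap.release()
```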

Frame from video is upside down after extracting

走远了吗. submitted on 2019-12-22 12:16:14
Question: My problem is that when I extract frames from a video using OpenCV, the frames I get are sometimes flipped upside down. This happens on both my Windows machine and my Ubuntu VM, yet for some of the videos I tested the frames are not flipped. So I wonder what factor is responsible, and what should be changed or added in my code so the extraction works without the flip. def extract_frame(video, folder): global fps os.mkdir('./green_frame/{folder}/'.format(folder=folder)) vidcap = cv2.VideoCapture(video) success, image =
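A common cause of this is a rotation flag stored in the video container (typical for phone recordings) that some OpenCV builds do not apply when decoding. A minimal sketch of extracting frames and correcting the orientation manually follows; the input path and the assumption that the offending videos need a 180-degree flip are hypothetical.

```python
# Sketch: extract frames and undo an upside-down orientation with cv2.rotate.
# The input file and the 180-degree correction are assumptions for illustration.
import cv2

vidcap = cv2.VideoCapture("input.mp4")
frame_no = 0
success, image = vidcap.read()
while success:
    image = cv2.rotate(image, cv2.ROTATE_180)   # undo an upside-down frame
    cv2.imwrite("frame_{:05d}.png".format(frame_no), image)
    frame_no += 1
    success, image = vidcap.read()
vidcap.release()
```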

Cut videos from Azure Blob Storage

佐手、 submitted on 2019-12-22 11:28:28
Question: I have a web app hosted in Azure; one of its functionalities is to make a few cuts from a video (generate 2 or 3 small videos of 5-10 seconds from a larger video). The videos are persisted in Azure Blob Storage. How do you suggest accomplishing this in the Azure environment? The actual cutting of the videos will be initiated by a web job. I'm also concerned about pricing (within the Azure environment), since I'm taking into account the possibility of high traffic. Any
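One common, Azure-agnostic approach is to download the blob to temporary storage on the web job host, cut it with ffmpeg, and upload the clips back to Blob Storage. A rough sketch of the cutting step only, assuming ffmpeg is installed on the host and the blob has already been downloaded to a local file (paths and times are placeholders):

```python
# Rough sketch: cut short clips out of a locally downloaded video with ffmpeg.
# Assumes ffmpeg is available on the web job host; file names and times are made up.
import subprocess

def cut_clip(src, dst, start_seconds, duration_seconds):
    subprocess.run([
        "ffmpeg", "-y",
        "-ss", str(start_seconds),     # seek to the start of the clip
        "-i", src,
        "-t", str(duration_seconds),   # clip length
        "-c", "copy",                  # stream copy: no re-encoding
        dst,
    ], check=True)

cut_clip("downloaded_from_blob.mp4", "clip1.mp4", 10, 5)
cut_clip("downloaded_from_blob.mp4", "clip2.mp4", 42, 8)
```

Stream copy keeps CPU usage (and therefore web job cost) low, at the price of cuts snapping to the nearest keyframe; re-encoding gives frame-accurate cuts but costs more compute under high traffic.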

iOS Determine Number of Frames in Video

倾然丶 夕夏残阳落幕 submitted on 2019-12-22 08:34:05
Question: If I have an MPMoviePlayerController in Swift: let mp = MPMoviePlayerController(contentURL: url) Is there a way I can get the number of frames in the video located at url? If not, is there some other way to determine the frame count? Answer 1: I don't think MPMoviePlayerController can help you. Use an AVAssetReader and count the number of CMSampleBuffers it returns to you. You can configure it to not even decode the frames, effectively just parsing the file, so it should be

Video editing library in iOS

非 Y 不嫁゛ submitted on 2019-12-22 06:36:04
Question: I am looking for a video editing library for iOS. Editing tasks: adding text or marks on top of the video frame. In my application the user should be able to select a video from the video library and play it in a movie player. The user should also be able to pause the video and then add some text or marks freehand; we need to merge the added text and marks into the video. Answer 1: AVFoundation will do all that and more. The user interface for the app is, of course, up to you. But all the video manipulation

Opengl es 2.0 draw bitmap overlay on video

六眼飞鱼酱① submitted on 2019-12-22 05:14:42
Question: I am trying to draw a bitmap as an overlay on every frame of a video. I found an example of how to decode and encode a video, and it works. This example has a TextureRenderer class with a drawFrame function that I need to modify in order to add the bitmap. I am a newbie to OpenGL, but I learned that I need to create a texture from the bitmap and bind it. I tried that in the following code, but it throws an exception. /* * Copyright (C) 2013 The Android Open Source Project * * Licensed

OpenCV doesn't report accurate frame rate/count

不想你离开。 submitted on 2019-12-22 02:16:11
Question: I have a 33-second video that I'm trying to process with OpenCV. My goal is to determine what instant in time (relative to the start of the video) each frame corresponds to. I'm doing this in order to compare frames from videos of the same scene that were recorded at different frame rates. What's working: the FPS is correctly reported as 59.75. This is consistent with what ffprobe reports, so I'm happy to believe that's correct. The problems I'm having are: CAP_PROP_POS_MSEC
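When the reported FPS is trustworthy (as it appears to be here), one workaround is to derive each frame's timestamp from its index rather than from CAP_PROP_POS_MSEC. A small sketch, with a made-up file name:

```python
# Sketch: timestamp each frame as frame_index / fps instead of querying CAP_PROP_POS_MSEC.
import cv2

cap = cv2.VideoCapture("scene.mp4")     # hypothetical 33-second clip
fps = cap.get(cv2.CAP_PROP_FPS)         # e.g. 59.75, matching ffprobe

index = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    t = index / fps                     # seconds since the start of the video
    print("frame {:5d} at {:.3f} s".format(index, t))
    index += 1
cap.release()
```

This assumes a constant frame rate; for variable-frame-rate recordings the per-frame timestamps would have to come from the container itself.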

Capture video from vlc command line with a stop time

爱⌒轻易说出口 submitted on 2019-12-21 23:06:03
Question: I'm trying to capture a video from an RTP stream to my PC (Ubuntu 12.04 LTS). I'm using VLC from the command line as below: cvlc -vvv rtp://address:port --start-time=00 --stop-time=300 --sout file/ts:test.ts but VLC ignores the --stop-time option and continues to download video well beyond the 300 seconds specified. Does anyone know the reason for this, and a possible solution? Thanks. Answer 1: If you know the start time and the end time you can compute the recording time. You can afterwards use the
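A sketch of that idea follows: compute the duration from the start and stop times and hand it to VLC's --run-time option, which bounds how long the item plays/records. The stream address, times, and output name are placeholders, and the use of --run-time with a vlc://quit item is the assumed recipe, not the answerer's exact command.

```python
# Sketch: record for a fixed duration by computing it up front and passing --run-time.
# The RTP address, times, and output file are placeholders.
import subprocess

start_time = 0
stop_time = 300
run_time = stop_time - start_time       # seconds to actually record

subprocess.run([
    "cvlc", "-vvv", "rtp://address:port",
    "--run-time={}".format(run_time),   # stop after this many seconds
    "--sout", "file/ts:test.ts",
    "vlc://quit",                       # make cvlc exit once the item finishes
], check=True)
```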

Android: using OpenCV VideoCapture in service

穿精又带淫゛_ submitted on 2019-12-21 20:20:36
Question: I'm using a service that is started when the Android device boots, because I don't need a visible activity. That works fine so far. But now I'm trying to open the camera (in MyService.onStart) and do some basic image processing. I understand that the default Android camera class needs a surface for the video preview, which is why I want to use VideoCapture from OpenCV. But I get this error: No implementation found for native Lorg/opencv/highgui/VideoCapture;.n_VideoCapture:(I)J I'm wondering