video-processing

Getting Bitmap from Video decoded with NetStream appendBytes (AS3)?

Submitted by 守給你的承諾、 on 2019-12-13 23:01:21
Question: I am wondering if someone who has worked with NetStream.appendBytes in Flash knows how to get the bitmapData from a decoded video frame. I have already looked at this question, but that is from 3 years ago and the more recent comment/answer seems to be gone. Has anyone, in 2014, managed to turn those bytes into a bitmap? I am working with Flash Player 11.8, and this is not a desktop/AIR app. In the image below I can do steps 1) and 2), but there's a brick wall at step 3). The problem is that simply

Android with OpenCL video processing

Submitted by …衆ロ難τιáo~ on 2019-12-13 19:07:54
Question: I am making an Android app to test the differences between OpenCL and RenderScript. Right now I'm trying to optimize the OpenCL video processing code, but I can only process the first frame and all the others come out black. All my functions work with images, and when I call the init again before every video frame and then run the edge detection function, everything works. But it does not seem logical that I need to rebuild the OpenCL program every time it is executed (when it's the same kernel). I have a
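Not the asker's Android/Java code, but a minimal pyopencl sketch of the pattern the question is circling: compile the OpenCL program once up front, then for each video frame only copy data in, run the already-built kernel, and copy data out. The invert kernel and all names here are invented for illustration.

```python
# Minimal sketch (assumes pyopencl and numpy; the kernel is illustrative only).
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

# Build the program ONCE, not once per frame.
program = cl.Program(ctx, """
__kernel void invert(__global const uchar *src, __global uchar *dst) {
    int i = get_global_id(0);
    dst[i] = 255 - src[i];
}
""").build()

h, w = 480, 640
mf = cl.mem_flags
src_buf = cl.Buffer(ctx, mf.READ_ONLY, h * w)
dst_buf = cl.Buffer(ctx, mf.WRITE_ONLY, h * w)
out = np.empty((h, w), dtype=np.uint8)

def process_frame(gray_frame):
    # gray_frame: contiguous uint8 array of shape (h, w).
    # Per frame: copy data in, run the already-built kernel, copy data out.
    cl.enqueue_copy(queue, src_buf, gray_frame)
    program.invert(queue, (h * w,), None, src_buf, dst_buf)
    cl.enqueue_copy(queue, out, dst_buf)
    return out
```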

How to get video frames from mp4 video using ffmpeg in android?

Submitted by 我的梦境 on 2019-12-13 18:00:21
Question: I have successfully compiled and built the ffmpeg library on Android after 3-4 days of research work. Now I want to grab frames from a video, but I don't know which ffmpeg method and command should be called from a Java class to grab all the frames. Does anyone have an idea about this? Also, I want to overlay 2 videos. Is there any direct method available in ffmpeg to merge two videos one over another? If yes, how do I call it from a Java class? Answer 1: Compile ffmpeg.c and invoke its main() via JNI. For more details see how
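For reference (not tied to the asker's JNI build): these are the standard ffmpeg command lines for the two tasks mentioned, extracting every frame to numbered images and overlaying one video on another with -filter_complex. File names are placeholders; calling ffmpeg through subprocess here is only for illustration, since on Android the same argument list would be handed to ffmpeg's main() via JNI.

```python
# Sketch only: standard ffmpeg invocations (paths and filenames are placeholders).
import subprocess

# 1) Dump every frame of input.mp4 as numbered PNG images.
subprocess.run(["ffmpeg", "-i", "input.mp4", "frame_%04d.png"], check=True)

# 2) Overlay second.mp4 on top of first.mp4 at position (10, 10).
subprocess.run([
    "ffmpeg", "-i", "first.mp4", "-i", "second.mp4",
    "-filter_complex", "overlay=10:10",
    "merged.mp4",
], check=True)
```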

How to create multiple VideoCapture Objects

Submitted by 不羁的心 on 2019-12-13 09:28:29
Question: I want to create multiple VideoCapture objects for stitching video from multiple cameras into a single video mashup. For example, I have paths for three videos that I want to read using the VideoCapture objects shown below to get the frames from the individual videos, so they can be used for writing. Expected, for N video paths: cap0 = cv2.VideoCapture(path1) cap1 = cv2.VideoCapture(path2) cap2 = cv2.VideoCapture(path3) ... capn = cv2.VideoCapture(path4) Similarly, I also want to create frame
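A minimal sketch of one way to handle N paths without numbered variables, assuming OpenCV's Python bindings: keep the VideoCapture objects in a list and read one frame per capture on each loop iteration. Paths and names are illustrative.

```python
# Sketch: N captures in a list instead of cap0, cap1, ..., capN variables.
import cv2

paths = ["video1.mp4", "video2.mp4", "video3.mp4"]  # placeholder paths
caps = [cv2.VideoCapture(p) for p in paths]

while True:
    frames = []
    for cap in caps:
        ok, frame = cap.read()
        if not ok:              # stop once any video runs out of frames
            break
        frames.append(frame)
    if len(frames) != len(caps):
        break
    # ... stitch/write the frames here, e.g. side by side with cv2.hconcat(frames)

for cap in caps:
    cap.release()
```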

Blinking contour line

Submitted by 只谈情不闲聊 on 2019-12-13 06:41:43
Question: My program's objective is to identify the largest contour from a video camera feed and draw it with a red line. I discovered that when the largest contour (called largest_contours in my program) is detected, its contour line blinks and sometimes interrupts the function that draws a red line around it (because the contour line is no longer connected, so no contour is detected inside the image anymore). My questions are: What is the reason for this problem? How can I avoid it (or can we
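For context, a minimal sketch of the usual largest-contour-per-frame loop in OpenCV's Python API (threshold value and names are illustrative); the question's setup presumably follows this general shape, with the largest contour re-detected on every frame.

```python
# Sketch: find and draw the largest contour in each camera frame.
import cv2

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
    # OpenCV 4.x return signature; 3.x returns an extra image first.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        largest = max(contours, key=cv2.contourArea)
        cv2.drawContours(frame, [largest], -1, (0, 0, 255), 2)  # red outline
    cv2.imshow("largest contour", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
```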

add small image to large image for oclMat in openCV

Submitted by 旧时模样 on 2019-12-13 05:18:41
Question: I have a frame and want to place it on a bigger image in OpenCV using the OpenCL type oclMat, but the code below gives me a black frame as the result: capture.read(fMat); // frame from camera or video oclMat f; f.upload(fMat); oclMat bf(f.rows*2, f.cols*2, f.ocltype()); // "bf" - big frame oclMat bfRoi = bf(Rect(0, 0, f.cols, f.rows)); f.copyTo(bfRoi); // something wrong here // label 1 bf.download(fMat); Mat bf2; bf.convertTo(bf2, fMat.type()); // this convert has no effect imshow("big frame", bf2); So
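The oclMat module is C++-only, but for comparison, here is the same place-a-frame-into-a-larger-canvas idea written with plain OpenCV/NumPy in Python; sizes and names are illustrative, and this deliberately sidesteps the OpenCL upload/download steps in the question.

```python
# Sketch: copy a frame into the top-left quadrant of a canvas twice its size.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    h, w = frame.shape[:2]
    big = np.zeros((h * 2, w * 2, frame.shape[2]), dtype=frame.dtype)  # "bf" equivalent
    big[0:h, 0:w] = frame        # ROI assignment, like f.copyTo(bfRoi)
    cv2.imshow("big frame", big)
    cv2.waitKey(0)
cap.release()
```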

Opencv: Observe pixel-values within a thresholded ROI, where some values differ from 0 and 255

Submitted by £可爱£侵袭症+ on 2019-12-13 04:27:06
Question: I have applied a threshold to a Region of Interest in a video frame. This seems to work perfectly, as the result looks like this: I mark the ROI through a mouse callback function and then simply take the threshold via the following two lines of code: ret_thresh, thresh = cv2.threshold(ROI, 80, 255, cv2.THRESH_BINARY) gray_frame[pt1[1]+3:pt1[1]+rect_height, pt1[0]+3:pt1[0]+rect_width] = thresh, where pt1 is the upper-left corner of the rectangle, and the +3 is only to make sure I take the
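A self-contained sketch of the same two-step idea on a synthetic frame, assuming OpenCV's Python bindings: threshold the ROI slice, then assign the binary result back into the grayscale frame. The coordinates and the threshold value of 80 mirror the question but are otherwise arbitrary.

```python
# Sketch: threshold a rectangular ROI and paste the result back into the frame.
import cv2
import numpy as np

gray_frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # stand-in frame
x, y, rect_width, rect_height = 100, 80, 200, 150                   # ROI rectangle

roi = gray_frame[y:y + rect_height, x:x + rect_width]
_, thresh = cv2.threshold(roi, 80, 255, cv2.THRESH_BINARY)
gray_frame[y:y + rect_height, x:x + rect_width] = thresh
```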

Does DirectShow allow one to decode virtually any video based on installed codecs?

Submitted by 倾然丶 夕夏残阳落幕 on 2019-12-13 03:29:18
Question: I am comparing VFW, Media Foundation, and DirectShow. Although VFW is very old and dated, it at least allows a lot of flexibility in encoding and decoding videos because, AFAIK, you can choose virtually any encoder/decoder and are not limited to a subset of decoders/encoders that only Microsoft has chosen. Does DirectShow offer the ability to decode (decompress) many kinds of video (like VFW does) using any chosen codec, or must you use only a subset that Microsoft has chosen? Indeed some API

Android FFmpeg Log showing “File:// protocol not found”

Submitted by ≡放荡痞女 on 2019-12-13 03:08:43
Question: I'm trying to overlay an image on a video, but the FFmpeg log shows File://storage/emulated/0/whatsappCamera/wc1529921459336.jpg: Protocol not found. I have also looked at the thread below, but it hasn't helped me and I'm stuck. Please help me!! Android FFmpeg reports “file protocol not found” Here is the command: String[] commandImage = new String[]{"-ss", "00:00:30.0", "-t", "00:00:10.0", "-i", path, "i", "File://storage/emulated/0/whatsappCamera/wc1529921459336.jpg", "-filter_complex", "[0]crop=400:400:0:0[a];[a][1
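Not a confirmed fix, but for comparison, a sketch (written as a Python list for brevity) of how that argument vector commonly looks with the two most frequent issues addressed: the image given as a plain filesystem path instead of a File:// URL, and a proper -i flag in front of it. The real filter graph is truncated in the question, so a simple overlay filter stands in for it.

```python
# Sketch: plain path instead of "File://..." and a proper "-i" before the image.
# The actual filter graph is cut off in the question; "overlay" is a stand-in.
command_image = [
    "-ss", "00:00:30.0", "-t", "00:00:10.0",
    "-i", "/path/to/input_video.mp4",
    "-i", "/storage/emulated/0/whatsappCamera/wc1529921459336.jpg",
    "-filter_complex", "[0][1]overlay=10:10",
    "/path/to/output_video.mp4",
]
```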

(MATLAB) Colormap for video/animation

Submitted by 廉价感情. on 2019-12-13 02:23:51
Question: I have some code that produces a series of grayscale images. I am then able to save the images individually in a loop, with a colormap applied, i.e. file = sprintf('image_%04d.png',x); imwrite(image1,jet,file,'png'); So I get my images out the other end and they have the correct colormapping, which is colormap(jet). However, when, in my next program, I try to cobble these images together to form a short animation (yes, I know I should just make the movie in the same loop as above),