motion-detection

Seems like my little Android app is running multiple instances

落爺英雄遲暮 Submitted on 2019-12-10 21:34:58

Question: I know this is very lame, but I'm totally new to Android development. I'm writing a sensor-based app that changes the wallpaper each time the phone is shaken. Once the app is minimized it runs in the background. It works wonderfully when I run it for the first time, but when I minimize it and re-open it, it looks like two instances of the app are running. And so it goes on: every time I minimize and open the app, one more instance seems to start in parallel. Problem it causes: 1: Multiple instances
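A common cause of this symptom (a hedged guess, since the code isn't shown) is that each tap on the launcher icon stacks a fresh activity instance on the existing task, and that the sensor listener is re-registered in onResume() without ever being unregistered in onPause(). One minimal manifest-level mitigation, assuming the entry activity is called MainActivity (a placeholder name), is to declare it singleTask so a relaunch reuses the existing instance:

```xml
<!-- AndroidManifest.xml fragment; MainActivity is a placeholder name.
     singleTask brings the existing instance forward on relaunch
     instead of creating a second one on top of the task. -->
<activity
    android:name=".MainActivity"
    android:launchMode="singleTask">
    <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.LAUNCHER" />
    </intent-filter>
</activity>
```

Pairing SensorManager.registerListener() in onResume() with unregisterListener() in onPause() also prevents duplicate shake callbacks even when only one activity instance exists.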

detect if the touch stopped on an android device screen

帅比萌擦擦* Submitted on 2019-12-10 18:20:05

Question: I want to fire a method after a touch has started and then left the screen. I can detect the beginning of a touch by setting an OnTouch listener on the View, but I can't detect when the finger leaves the screen. I want to detect when the touch stops. How can I detect this? Answer 1: What you're looking for is http://developer.android.com/reference/android/view/MotionEvent.html - specifically the use of MotionEvent.ACTION_UP . There is a lot of information at the link I've provided :) A gesture starts

how to code simple motion tracking?

无人久伴 Submitted on 2019-12-10 11:24:43

Question: I am making a sentry turret with servos and a paintball gun, and I need to implement real-time motion tracking so the gun shoots anything that moves. How can I code this (any good algorithms, books, tutorials)? I want to make it myself and not use premade solutions. Answer 1: Lucas-Kanade with Kalman filtering is the bread-and-butter motion tracking algorithm. However, it's a bit outdated. Answer 2: We (Success Labs) just published a Lucas-Kanade app for the iPhone. You can read about it here - http:/
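For readers who, like the asker, want to build it themselves: the core of Lucas-Kanade is one small least-squares problem per tracked point. Within a window, the brightness-constancy equation Ix·dx + Iy·dy = -It is solved for the displacement (dx, dy). A minimal single-level NumPy sketch (illustrative only; the function name and the Gaussian test pattern are mine, and real trackers add pyramids and corner selection):

```python
import numpy as np

def lucas_kanade(im1, im2, y, x, win=15):
    """Estimate the (dy, dx) displacement of the patch centered at (y, x)
    between grayscale frames im1 and im2 (single level, no pyramid)."""
    h = win // 2
    Iy, Ix = np.gradient(im1)             # spatial gradients
    It = im2 - im1                        # temporal gradient
    sl = (slice(y - h, y + h + 1), slice(x - h, x + h + 1))
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    (dx, dy), *_ = np.linalg.lstsq(A, b, rcond=None)  # solve A @ [dx, dy] = b
    return dy, dx

# Synthetic check: a Gaussian blob shifted 1 px to the right.
yy, xx = np.mgrid[0:40, 0:40]
blob = lambda cy, cx: np.exp(-((yy - cy)**2 + (xx - cx)**2) / (2 * 3.0**2))
dy, dx = lucas_kanade(blob(20, 20), blob(20, 21), 20, 20)
print(round(dx, 2), round(dy, 2))
```

In a real turret this would run on corners from a corner detector, over an image pyramid for large motions, with a Kalman filter smoothing the per-frame estimates - which is exactly the recipe Answer 1 names.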

What is output from OpenCV's Dense optical flow (Farneback) function? How can this be used to build an optical flow map in Python?

穿精又带淫゛_ Submitted on 2019-12-09 16:35:51

Question: I am trying to use the output of OpenCV's dense optical flow function to draw a quiver plot of the motion vectors, but I have not been able to find what the function actually outputs. Here is the code: import cv2 import numpy as np cap = cv2.VideoCapture('GOPR1745.avi') ret, frame1 = cap.read() prvs = cv2.cvtColor(frame1,cv2.COLOR_BGR2GRAY) hsv = np.zeros_like(frame1) hsv[...,1] = 255 count=0 while(1): ret, frame2 = cap.read() next = cv2.cvtColor(frame2,cv2.COLOR_BGR2GRAY) flow = cv2

Sum of Absolute differences between images in Matlab

时间秒杀一切 Submitted on 2019-12-08 03:23:58

Question: I want to implement the sum of absolute differences (SAD) in Matlab to establish a similarity metric between one video frame and the 5 frames either side of it (i.e. past and future frames). I only need the SAD value for the co-located pixel in each frame, rather than a full search routine. Obviously I could implement this as nested loops such as: bs = 2; % block size for (z_i = -bs:1:bs) for (z_j = -bs:1:bs) I1(1+bs:end-bs,1+bs:end-bs) = F1(1+bs+z_i:end-bs+z_i, 1+bs+z_j
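The nested loops vectorize directly. A NumPy equivalent of the co-located, block-wise SAD, offered as an illustrative translation of the Matlab idea rather than the asker's exact code:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def colocated_sad(f1, f2, bs=2):
    """Per-pixel SAD between co-located (2*bs+1) x (2*bs+1) blocks of two
    frames -- no motion search, i.e. the co-located case in the question."""
    d = np.abs(f1.astype(np.float64) - f2.astype(np.float64))
    blocks = sliding_window_view(d, (2 * bs + 1, 2 * bs + 1))
    return blocks.sum(axis=(-2, -1))   # shape (H - 2*bs, W - 2*bs)

# Two 10x10 frames differing by 1 everywhere: every 5x5 block sums to 25.
sad = colocated_sad(np.zeros((10, 10)), np.ones((10, 10)))
print(sad.shape)   # (6, 6)
```

In Matlab the same effect comes from conv2(abs(F1 - F2), ones(2*bs+1), 'valid'), which likewise avoids the nested loops.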

FFMPEG Motion Compensation and Search

主宰稳场 Submitted on 2019-12-08 02:59:28

I'm trying to modify the motion estimation part of FFmpeg. What I want to do is extend the search space, so that whenever a macroblock hits the right-most edge of the frame, the search still moves the block toward the left-most edge as if the two edges were connected (in my example videos, the right edge is actually a continuation of the left edge). Can someone point me to where exactly I can modify this within the FFmpeg source code, or in x265 or x264? I took H.265 as an example from here . It has a motion.cpp file which nicely specifies the possible block sizes as below. But I can't find the specific loop

webcam motion tracking with Python

柔情痞子 Submitted on 2019-12-07 13:36:27

Is there a simple way to track the motion of a single entity in a webcam feed? For example, I imagine a "hello world" app with an index finger used as a mouse pointer. I realize there's still a lot of basic research in this area, so it might be too early to expect an easy-to-use, generic abstraction. For the sake of completeness, I've seen some related but lower-level (and non-Python) projects mentioned, including AForge , WiimoteLib and an article on motion detection algorithms . You might want to take a look at http://opencv.willowgarage.com/wiki/PythonInterface . I'm not sure how hard
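The simplest possible single-entity tracker is frame differencing: threshold the absolute difference between consecutive grayscale frames and take the centroid of the changed pixels. A minimal NumPy sketch (the function name is mine; in a real app the frames would come from cv2.VideoCapture(0).read()):

```python
import numpy as np

def motion_centroid(prev, curr, thresh=25):
    """Return the (y, x) centroid of pixels whose grayscale value changed
    by more than `thresh` between two frames, or None if nothing moved."""
    moved = np.abs(curr.astype(np.int16) - prev.astype(np.int16)) > thresh
    if not moved.any():
        return None
    ys, xs = np.nonzero(moved)
    return float(ys.mean()), float(xs.mean())

# A bright 10x10 patch appears around (15, 35): the centroid lands there.
prev = np.zeros((60, 80), dtype=np.uint8)
curr = prev.copy()
curr[10:20, 30:40] = 200
print(motion_centroid(prev, curr))   # (14.5, 34.5)
```

This breaks down under lighting changes or multiple movers; the OpenCV Python interface linked above adds sturdier tools such as background subtraction (cv2.createBackgroundSubtractorMOG2) and CamShift.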

how to detect when MotionEvent.ACTION_MOVE is finished

痴心易碎 Submitted on 2019-12-07 01:32:49

Question: I need to detect in my application when the user stops moving across a specific view. I'm creating something similar to marquee text, which can interact while the user is touching the view and moving across it, and I need to start scrolling the view after the user lifts his finger. As I noticed, if I move my finger across the view for a few seconds and then lift it, MotionEvent.ACTION_UP is not called; the last event I capture is ACTION_MOVE . So how can I detect when user

How to extract motion vectors from H.264 AVC CMBlockBufferRef after VTCompressionSessionEncodeFrame

一世执手 Submitted on 2019-12-06 13:51:53

Question: I'm trying to read or understand the CMBlockBufferRef representation of an H.264 AVC 1/30 frame. The buffer and the encapsulating CMSampleBufferRef are created using a VTCompressionSessionRef . https://gist.github.com/petershine/de5e3d8487f4cfca0a1d The H.264 data is represented as an AVC memory buffer, CMBlockBufferRef , from the compressed sample. Without fully decompressing again, I'm trying to extract motion vectors or predictions from this CMBlockBufferRef . I believe that for the fastest performance,

Motion Vector extraction from encoded video file

限于喜欢 Submitted on 2019-12-06 09:49:03

Question: I am trying to extract motion vector data from an encoded mp4 file. In a previous post I found an answer: http://www.princeton.edu/~jiasic/cos435/motion_vector.c . But I am not able to run the code without errors. What are the other files that have to be included? I am a newbie here, so any help would be appreciated. Answer 1: I had modified the source code of mplayer (ffmpeg) to extract motion vectors for any compressed video; I have uploaded the modified mplayer code, which can be
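With recent FFmpeg builds there is no need to patch mplayer: decoders can export per-block motion vectors as frame side data. Two hedged examples (the flag and filter names are from FFmpeg's documentation; the input and output file names are placeholders):

```shell
# Visualize the decoder's motion vectors by drawing them onto the video
# (pf/bf/bb select forward/backward vectors of P- and B-frames).
ffmpeg -flags2 +export_mvs -i input.mp4 -vf codecview=mv=pf+bf+bb output.mp4

# For programmatic access, FFmpeg ships doc/examples/extract_mvs.c, which
# reads the AV_FRAME_DATA_MOTION_VECTORS side data frame by frame.
```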