Kinect

Two 3D point cloud transformation matrix

Submitted by 試著忘記壹切 on 2019-12-10 10:14:46
Question: I'm trying to estimate the rigid transformation matrix between two 3D point clouds. The two point clouds are: keypoints from the Kinect (kinect_keypoints), and keypoints from a 3D object (a box) (object_keypoints). I have tried two options: [1] Implementing the algorithm to find the rigid transformation: 1. Calculate the centroid of each point cloud. 2. Center the points according to the centroid. 3. Calculate the covariance matrix. cvSVD( &_H, _W, _U, _V, CV_SVD_U_T ); …
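A minimal sketch of that centroid/covariance/SVD procedure (the Kabsch method), written against the OpenCV C++ API instead of the legacy cvSVD call; the function name and the assumption that the two clouds are already in one-to-one correspondence are illustrative, not from the question:

```cpp
#include <opencv2/core/core.hpp>
#include <vector>

// Estimate R, t so that dst ~ R * src + t (Kabsch method).
// Assumes src and dst have the same length and are already in correspondence.
void estimateRigidTransform3D(const std::vector<cv::Point3d>& src,
                              const std::vector<cv::Point3d>& dst,
                              cv::Mat& R, cv::Mat& t)
{
    const double n = static_cast<double>(src.size());

    // 1. Centroid of each point cloud.
    cv::Point3d cs(0, 0, 0), cd(0, 0, 0);
    for (size_t i = 0; i < src.size(); ++i) {
        cs.x += src[i].x; cs.y += src[i].y; cs.z += src[i].z;
        cd.x += dst[i].x; cd.y += dst[i].y; cd.z += dst[i].z;
    }
    cs.x /= n; cs.y /= n; cs.z /= n;
    cd.x /= n; cd.y /= n; cd.z /= n;

    // 2.-3. Center the points and accumulate the 3x3 covariance matrix H.
    cv::Mat H = cv::Mat::zeros(3, 3, CV_64F);
    for (size_t i = 0; i < src.size(); ++i) {
        cv::Mat a = (cv::Mat_<double>(3, 1) << src[i].x - cs.x, src[i].y - cs.y, src[i].z - cs.z);
        cv::Mat b = (cv::Mat_<double>(3, 1) << dst[i].x - cd.x, dst[i].y - cd.y, dst[i].z - cd.z);
        H += a * b.t();
    }

    // 4. SVD of H; R = V * diag(1,1,d) * U^T, where d guards against reflections.
    cv::SVD svd(H);
    double d = cv::determinant(svd.vt.t() * svd.u.t()) < 0 ? -1.0 : 1.0;
    cv::Mat D = (cv::Mat_<double>(3, 3) << 1, 0, 0, 0, 1, 0, 0, 0, d);
    R = svd.vt.t() * D * svd.u.t();

    // 5. Translation maps the source centroid onto the destination centroid.
    cv::Mat csM = (cv::Mat_<double>(3, 1) << cs.x, cs.y, cs.z);
    cv::Mat cdM = (cv::Mat_<double>(3, 1) << cd.x, cd.y, cd.z);
    t = cdM - R * csM;
}
```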

Detecting finger movement with Microsoft Kinect in C#

Submitted by 霸气de小男生 on 2019-12-10 04:35:17
Question: Is it possible to detect finger movements with the Kinect? I am able to detect the skeleton, do some mouse movement, and perform a click based on the other hand's location. I would like to implement the 'mouse click' using finger movements. Is it possible with the Microsoft Kinect SDK or with other, similar open-source projects? Thanks. Answer 1: Currently it is only possible by using a hack; there is no official setting or API for it, but it is possible to analyze the image data and find the fingers. Have a …
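The "analyze the image data" route is usually done on the depth map rather than through the Kinect SDK itself. A rough, hypothetical OpenCV (C++) sketch of one common approach: threshold the depth image around the hand, take the largest contour, and count convexity defects as a crude finger estimate. All names and thresholds here are illustrative assumptions.

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

// depth8: an 8-bit depth image; handDepth: approximate grey level (0-255) of the tracked hand.
// Returns a rough finger count based on convexity defects of the hand contour.
int roughFingerCount(const cv::Mat& depth8, int handDepth)
{
    // Keep only pixels close to the hand's depth (band of +/-10 grey levels, an arbitrary choice).
    cv::Mat hand;
    cv::inRange(depth8, cv::Scalar(handDepth - 10), cv::Scalar(handDepth + 10), hand);

    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(hand, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
    if (contours.empty()) return 0;

    // Pick the largest contour, assumed to be the hand.
    size_t best = 0;
    for (size_t i = 1; i < contours.size(); ++i)
        if (cv::contourArea(contours[i]) > cv::contourArea(contours[best])) best = i;

    // Deep convexity defects roughly correspond to the gaps between extended fingers.
    std::vector<int> hull;
    cv::convexHull(contours[best], hull, false, false);
    std::vector<cv::Vec4i> defects;
    cv::convexityDefects(contours[best], hull, defects);

    int gaps = 0;
    for (size_t i = 0; i < defects.size(); ++i)
        if (defects[i][3] / 256.0 > 20.0)   // defect depth is fixed-point, 1/256 pixel units
            ++gaps;
    return gaps + 1;   // n gaps -> roughly n+1 extended fingers
}
```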

Implicitly convertible to 'System.IDisposable' error

Submitted by 跟風遠走 on 2019-12-10 01:18:02
Question: This is what I'm trying to do: private KinectAudioSource CreateAudioSource() { var source = KinectSensor.KinectSensors[0].AudioSource; source.NoiseSuppression = _isNoiseSuppressionOn; source.AutomaticGainControlEnabled = _isAutomaticGainOn; return source; } private object lockObj = new object(); private void RecordKinectAudio() { lock (lockObj) { using (var source = CreateAudioSource()) { } } } The 'using' statement gives an error which states: 'Microsoft.Kinect.KinectAudioSource': type used …

Measuring distance between 2 points with OpenCV and OpenNI

Submitted by 送分小仙女□ on 2019-12-09 23:47:50
Question: I'm playing with the built-in OpenNI access in OpenCV 2.4.0 and I'm trying to measure the distance between two points in the depth map. I've tried this so far: #include "opencv2/core/core.hpp" #include "opencv2/highgui/highgui.hpp" #include "opencv2/imgproc/imgproc.hpp" #include <iostream> using namespace cv; using namespace std; Point startPt(0,0); Point endPt(0,0); void onMouse( int event, int x, int y, int flags, void* ) { if( event == CV_EVENT_LBUTTONUP) startPt = Point(x,y); if( …
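For reference, OpenCV's OpenNI capture can also return a point-cloud map in which every pixel already holds an XYZ coordinate in metres, which makes the distance computation straightforward. A small sketch assuming that mode; the pixel coordinates are placeholders for the values picked in the mouse callback:

```cpp
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <cmath>
#include <iostream>

int main()
{
    cv::VideoCapture capture(CV_CAP_OPENNI);            // OpenNI-backed capture (Kinect)
    if (!capture.isOpened()) return 1;

    // Two pixel positions, e.g. the ones set by the mouse callback in the question.
    cv::Point startPt(200, 200), endPt(400, 240);        // illustrative values

    cv::Mat pointCloud;
    if (capture.grab() &&
        capture.retrieve(pointCloud, CV_CAP_OPENNI_POINT_CLOUD_MAP))   // CV_32FC3, metres
    {
        cv::Point3f a = pointCloud.at<cv::Point3f>(startPt.y, startPt.x);
        cv::Point3f b = pointCloud.at<cv::Point3f>(endPt.y, endPt.x);

        // Euclidean distance between the two 3D points, in metres.
        double dist = std::sqrt((a.x - b.x) * (a.x - b.x) +
                                (a.y - b.y) * (a.y - b.y) +
                                (a.z - b.z) * (a.z - b.z));
        std::cout << "distance: " << dist << " m" << std::endl;
    }
    return 0;
}
```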

How to make an executable version of a WPF Kinect application?

Submitted by 一曲冷凌霜 on 2019-12-09 17:35:06
Question: I have made a Kinect application in Microsoft Visual Studio 2010. I need to build an .exe of the application that can run on any Windows-based system. Are there any requirements the target system should fulfil, and if so, how do I do this? I tried copying the exe from application/bin/debug/application.exe into another folder, but then it shows an error; if I run the exe from bin/debug/application.exe it works. Am I missing something here, or is it the only …

Convert 16-bit-depth CvMat* to 8-bit-depth

Submitted by 社会主义新天地 on 2019-12-09 15:52:11
Question: I'm working with the Kinect and OpenCV. I have already searched this forum but didn't find anything like my problem. I keep the raw depth data from the Kinect (16 bit), store it in a CvMat*, and then pass it to cvGetImage to create an IplImage* from it: CvMat* depthMetersMat = cvCreateMat( 480, 640, CV_16UC1 ); [...] cvGetImage(depthMetersMat,temp); But now I need to work on this image in order to run cvThreshold and find contours. These two functions need an 8-bit-depth image as input. How can I …
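A sketch of the conversion using the same legacy C API as the question, via cvConvertScale; the scale factor is an assumption (raw Kinect depth only uses roughly 11-13 bits), so adjust it to the depth range you actually care about:

```cpp
#include <opencv2/core/core_c.h>

// depthMetersMat: CV_16UC1 raw Kinect depth, as in the question.
// Returns a newly allocated 8-bit image suitable for cvThreshold / cvFindContours.
IplImage* depth16To8(const CvMat* depthMetersMat)
{
    IplImage* depth8 = cvCreateImage(cvSize(depthMetersMat->cols, depthMetersMat->rows),
                                     IPL_DEPTH_8U, 1);
    // Scale 16-bit values into 0..255. Dividing by 2048 (~11 useful bits) is a
    // reasonable starting point; values beyond that range will saturate at 255.
    cvConvertScale(depthMetersMat, depth8, 255.0 / 2048.0, 0);
    return depth8;
}
```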

How to read an .oni file in Processing 2?

Submitted by 流过昼夜 on 2019-12-09 13:53:13
Question: I have a Kinect program in Processing 2 that I would like to test or simulate by feeding it saved skeletons from an .oni file rather than taking input from the Kinect. Is it possible to do this, i.e. to have Processing 2 read values from the .oni file instead of the Kinect and produce output from them? Answer 1: I recommend using the SimpleOpenNI library: import SimpleOpenNI.*; SimpleOpenNI ni; void setup(){ size(640,480); ni = new SimpleOpenNI(this); if(SimpleOpenNI.deviceCount() == 0) ni …

How to track head position

Submitted by 本秂侑毒 on 2019-12-09 12:56:07
Question: I want to do something similar to what Johnny Lee did in his Wii head tracking (http://www.youtube.com/watch?v=Jd3-eiid-Uw&feature=player_embedded), but I want to use the Kinect. Since Microsoft's SDK exposes the skeletal joints, I had hoped I might be able to just use that to get the head position. The problem is that I want to do this with my desktop computer and its monitor: if I put the Kinect sensor right next to my monitor and sit at the desk, pretty much only my head and neck are visible …
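One way around the "only head and neck visible" problem, assuming Kinect for Windows SDK 1.5 or later, is seated skeleton tracking, which tracks only the upper-body joints. A native C++ sketch of reading the head joint under that assumption (error handling trimmed, frame count arbitrary):

```cpp
#include <windows.h>
#include <NuiApi.h>
#include <cstdio>

int main()
{
    // Initialise skeleton tracking; seated mode tracks the 10 upper-body joints,
    // which suits a user sitting at a desk close to the sensor (SDK 1.5+).
    if (FAILED(NuiInitialize(NUI_INITIALIZE_FLAG_USES_SKELETON))) return 1;
    NuiSkeletonTrackingEnable(NULL, NUI_SKELETON_TRACKING_FLAG_ENABLE_SEATED_SUPPORT);

    for (int frameNo = 0; frameNo < 300; ++frameNo)
    {
        NUI_SKELETON_FRAME frame = {0};
        if (FAILED(NuiSkeletonGetNextFrame(100, &frame))) continue;

        for (int i = 0; i < NUI_SKELETON_COUNT; ++i)
        {
            const NUI_SKELETON_DATA& s = frame.SkeletonData[i];
            if (s.eTrackingState != NUI_SKELETON_TRACKED) continue;

            // Head position in skeleton space: metres, origin at the sensor.
            Vector4 head = s.SkeletonPositions[NUI_SKELETON_POSITION_HEAD];
            printf("head: %.3f %.3f %.3f\n", head.x, head.y, head.z);
        }
    }

    NuiShutdown();
    return 0;
}
```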

How to track eyes using Kinect SDK?

Submitted by ぐ巨炮叔叔 on 2019-12-09 10:29:10
Question: The requirement is to define a rectangle around each eye in 3D space, so there should be a way to track the eyes using the Microsoft Kinect SDK. According to this, the Face Tracking SDK uses the Kinect coordinate system to output its 3D tracking results: the origin is located at the camera's optical center (sensor), the Z axis points towards the user, and the Y axis points up. The measurement units are meters for translation and degrees for rotation angles. Adding ... Debug3DShape("OuterCornerOfRightEye" …

FREENECT_DEPTH_REGISTERED has no effect with libfreenect

Submitted by 五迷三道 on 2019-12-09 06:22:36
Question: I'm playing around with a Kinect (the original Xbox version) using the libfreenect driver (I'm on Ubuntu 12.04, by the way). I have cloned the most recent version from git and installed it manually, as per the instructions here: http://openkinect.org/wiki/Getting_Started#Ubuntu_Manual_Install I would like to access the registered depth values. As far as I understand, the Kinect is factory calibrated, and there is a lookup table matching depth pixels to the proper RGB pixels. I can open the Kinect …
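For context, libfreenect only delivers registered depth if that format is selected before the depth stream is started. A minimal C-style sketch of that setup (device index, resolution, and the printed pixel are assumptions):

```cpp
#include <libfreenect.h>
#include <cstdio>

// Depth callback: with FREENECT_DEPTH_REGISTERED the buffer holds 16-bit values in
// millimetres, already aligned to the RGB image.
static void depth_cb(freenect_device* dev, void* depth, uint32_t timestamp)
{
    uint16_t* d = static_cast<uint16_t*>(depth);
    printf("centre pixel: %u mm\n", d[240 * 640 + 320]);
}

int main()
{
    freenect_context* ctx = NULL;
    freenect_device* dev = NULL;
    if (freenect_init(&ctx, NULL) < 0) return 1;
    if (freenect_open_device(ctx, &dev, 0) < 0) return 1;

    // The registered format must be requested via the depth mode *before* starting
    // the stream; otherwise the callback keeps delivering the default unregistered depth.
    freenect_set_depth_mode(dev,
        freenect_find_depth_mode(FREENECT_RESOLUTION_MEDIUM, FREENECT_DEPTH_REGISTERED));
    freenect_set_depth_callback(dev, depth_cb);
    freenect_start_depth(dev);

    while (freenect_process_events(ctx) >= 0) {
        // run until an error or Ctrl-C
    }

    freenect_stop_depth(dev);
    freenect_close_device(dev);
    freenect_shutdown(ctx);
    return 0;
}
```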