Kinect

2D-3D homography matrix estimation

Submitted by 折月煮酒 on 2019-12-04 05:59:30
I am working with my Kinect on some 2D/3D image processing. Here is my problem: I have points in 3D (x, y, z) that lie on a plane, and I also know the coordinates of the same points on the RGB image (x, y). Now I want to estimate a 2D-3D homography matrix so that I can estimate the (x1, y1, z1) coordinates for an arbitrary (x1, y1) point. I think this is possible, but I don't know where to start. Thanks!

What you're looking for is a camera projection matrix, not a homography. A homography maps a plane seen from one camera to the same plane seen from another. For estimating the camera matrix, look up solutions to solving…
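For reference, the standard way to estimate a 3x4 projection matrix from 3D-2D correspondences is the Direct Linear Transform (DLT). Below is a minimal NumPy sketch, not the answerer's code; the function names are illustrative, and it assumes at least six non-coplanar correspondences:

```python
import numpy as np

def estimate_projection_matrix(points_3d, points_2d):
    """Estimate a 3x4 camera projection matrix P with the DLT:
    each correspondence X <-> (u, v) must satisfy (u, v, 1) ~ P @ (X, 1).
    Needs at least 6 correspondences (12 equations for 11 unknowns)."""
    A = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    # The solution is the right singular vector of A with the smallest
    # singular value (least-squares solution of A p = 0 with ||p|| = 1).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)

def project(P, point_3d):
    """Project a 3D point with P and dehomogenize to pixel coordinates."""
    u, v, w = P @ np.append(point_3d, 1.0)
    return np.array([u / w, v / w])
```

Once P is known, recovering (x1, y1, z1) from a clicked (x1, y1) means intersecting the back-projected pixel ray with the known plane, which is exactly the setup the questioner describes.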

How can I pass Kinect tracking into another form?

Submitted by 北城以北 on 2019-12-04 05:48:02
Question: I have a Kinect project in WPF that uses the skeleton stream to track the user's left and right hands and lets me hover over buttons. I tried making a new form and just copying and pasting everything so I could create a new page, but it didn't work; I think I may have to reference the methods used in the main page, but I am unsure. I want to be able to use the skeleton stream alongside the hovering method in a new window. Any help would be appreciated. I apologize if this does not make…

How to make an executable version of a WPF Kinect application?

Submitted by 牧云@^-^@ on 2019-12-04 04:32:20
I have made a Kinect application in Microsoft Visual Studio 2010. I need to make an .exe of the application that can run on any Windows-based system. Are there any requirements the target system should fulfill, and if so, how do I do that? I tried copying the exe from application/bin/debug/application.exe to another folder, but it shows an error; if I run the exe from bin/debug/application.exe, it works. Am I missing something here, or is that the only way to do it?

"Any Windows based system" isn't going to work. Assuming you're using the Kinect SDK,…

Rendering Kinect Point Cloud with Vertex Buffer Object (VBO)

Submitted by 最后都变了- on 2019-12-04 03:35:13
I'm trying to make a dynamic point-cloud visualizer. The points are updated every frame from the Kinect sensor. I use OpenCV to grab the frames and GLUT to display them. The OpenCV API returns a 640 x 480 (float *) for the xyz point positions and a 640 x 480 (int *) for the RGB color data. To get maximum performance, I'm trying to use a Vertex Buffer Object in stream mode instead of a simple vertex array. I am able to render with the vertex array, but nothing is rendered with my VBO implementation. I tried a bunch of different orders in the declarations, but I can't find what I'm…

Smoothing mouse movement

Submitted by 眉间皱痕 on 2019-12-04 03:31:59
Question: I'm developing software that moves the mouse based on coordinates I get from a Kinect depth image, but I receive 30 frames (images) per second and the coordinates change with every frame, so the mouse keeps jittering. My question is: is there a way to smooth the movement of the mouse?

Answer 1: Yes, you can start tracking with parameters that make the movement smoother. Below is example code (the original snippet was truncated after `Correction`; the remaining property values here are illustrative and should be tuned per application):

```csharp
var parameters = new TransformSmoothParameters
{
    Smoothing = 0.2f,
    Correction = 0.5f,          // illustrative values from here on
    Prediction = 0.5f,
    JitterRadius = 0.05f,
    MaxDeviationRadius = 0.04f
};
sensor.SkeletonStream.Enable(parameters);   // apply smoothing to the stream
```
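The technique behind those parameters is essentially an exponential moving average, which you can also apply by hand to the cursor coordinates. A minimal sketch (the class name and default are illustrative, not part of any Kinect SDK):

```python
class ExponentialSmoother:
    """Exponentially weighted moving average for 2D cursor coordinates.
    alpha near 0 -> heavy smoothing (more lag); alpha near 1 -> raw input."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self._state = None

    def update(self, x, y):
        # First sample initializes the state; later samples blend toward input.
        if self._state is None:
            self._state = (x, y)
        else:
            sx, sy = self._state
            self._state = (sx + self.alpha * (x - sx),
                           sy + self.alpha * (y - sy))
        return self._state
```

Feeding the smoothed output (rather than the raw per-frame coordinates) to the mouse-move call removes most of the frame-to-frame jitter at the cost of a small amount of lag.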

Convert 16-bit-depth CvMat* to 8-bit-depth

Submitted by 老子叫甜甜 on 2019-12-04 02:18:23
I'm working with Kinect and OpenCV. I already searched this forum but didn't find anything like my problem. I keep the raw depth data from the Kinect (16-bit), store it in a CvMat*, and then pass it to cvGetImage to create an IplImage* from it:

```c
CvMat* depthMetersMat = cvCreateMat( 480, 640, CV_16UC1 );
/* ... */
cvGetImage( depthMetersMat, temp );
```

But now I need to work on this image with cvThreshold and to find contours, and both functions need an 8-bit-depth image as input. How can I convert the 16-bit CvMat* depthMetersMat to an 8-bit-depth CvMat*?

The answer that @SSteve gave almost did…
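For reference, the usual approach is to clip and rescale the 16-bit range into 8 bits before thresholding. A minimal NumPy sketch, assuming depth values in millimeters and an illustrative 4-meter working range (both assumptions, not from the question):

```python
import numpy as np

def depth16_to_depth8(depth16, max_depth=4000):
    """Convert a 16-bit depth map to 8 bits by clipping to [0, max_depth]
    (assumed millimeters) and rescaling linearly to [0, 255]."""
    clipped = np.clip(depth16.astype(np.float32), 0, max_depth)
    return (clipped * (255.0 / max_depth)).astype(np.uint8)
```

The resulting 8-bit single-channel image is what threshold and contour functions expect; picking `max_depth` to match the scene's actual range preserves the most contrast.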

Microsoft Kinect SDK Depth calibration

Submitted by 萝らか妹 on 2019-12-03 22:44:36
Question: I am working on a 3D model reconstruction application with the Kinect sensor. I use the Microsoft SDK to get depth data, and I want to calculate the real-world location of each point. I have read several articles about this and implemented several depth-calibration methods, but none of them work in my application. The closest calibration was http://openkinect.org/wiki/Imaging_Information, but my result in MeshLab was not acceptable. I calculate the depth value by this method: private double…
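For comparison, the standard pinhole back-projection from a depth pixel to camera-space coordinates is sketched below. The intrinsics are the commonly cited approximate Kinect v1 values for the 640x480 depth stream (fx ≈ fy ≈ 525, principal point near the image center); these are assumptions, and a per-device calibration will give different numbers:

```python
def depth_pixel_to_world(u, v, depth_mm,
                         fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Back-project pixel (u, v) with depth depth_mm (millimeters)
    to camera-space (X, Y, Z) in meters using the pinhole model."""
    z = depth_mm / 1000.0
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)
```

If a reconstruction looks skewed or "bowl-shaped" in MeshLab, the principal point or focal lengths being off is a common cause, which is why a proper per-device calibration usually beats hard-coded constants.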

Get Depth at Color position, Kinect SDK

Submitted by 泪湿孤枕 on 2019-12-03 21:15:31
I am looking for a way to get (as quickly as possible) the depth corresponding to a color pixel from the Kinect camera. I have found the MapDepthFrameToColorFrame function, but that only gives me the color at a certain depth position, and I want the opposite. The reason I want this is so that I can click on a position in the RGB image and get the depth at that pixel. Is there a way to do this faster than looping through all results from MapDepthFrameToColorFrame?

The problem here is that not every color pixel will have a depth assigned to it, because of the way the cameras and IR emitter…
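A common workaround is to invert the mapping once per frame into a lookup table indexed by color pixel, so each later click is an O(1) lookup. A minimal NumPy sketch with illustrative names (MapDepthFrameToColorFrame is the real SDK call; here `mapping` is assumed to be the array of color coordinates it produced, one row per depth pixel in row-major order):

```python
import numpy as np

def build_color_to_depth_lut(mapping, depth_frame, width=640, height=480):
    """Invert a depth->color mapping into a color-indexed depth lookup.

    mapping: (height*width, 2) int array; row i holds the (x, y) color
             coordinates that depth pixel i maps to.
    depth_frame: (height, width) array of depth values.
    Returns a (height, width) array holding the depth at each color pixel,
    or 0 where no depth pixel maps there (occlusion / sensor offset).
    """
    lut = np.zeros((height, width), dtype=depth_frame.dtype)
    depths = depth_frame.ravel()
    xs, ys = mapping[:, 0], mapping[:, 1]
    # Discard coordinates the SDK marks as out of range.
    valid = (xs >= 0) & (xs < width) & (ys >= 0) & (ys < height)
    lut[ys[valid], xs[valid]] = depths[valid]
    return lut
```

Building the table costs one pass over the depth frame per update, which is no worse than a single search through the mapping, and every subsequent click becomes a direct array index.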

Kinect: How to get skeleton data from depth data (obtained from the Kinect, but modified)

Submitted by 霸气de小男生 on 2019-12-03 21:03:51
Question: I can get the depth frame from my Kinect and then modify the data in the frame. Now I want to use the modified depth frame to get the skeleton data. How can I do that?

Answer 1: Well, I found there is no way to do this with the Microsoft Kinect SDK. I found it is possible with OpenNI, an open-source API by PrimeSense.

Source: https://stackoverflow.com/questions/12155062/kinect-how-to-get-the-skeleton-data-from-some-depth-data-geting-from-kinect-bu

How to read oni file in Processing 2?

Submitted by 徘徊边缘 on 2019-12-03 21:02:05
I have a Kinect program in Processing 2 that I would like to test or simulate by feeding it saved skeletons from an .oni file rather than taking input from the Kinect. Is it possible to do this, i.e. to have Processing 2 read values from the .oni file instead of using the Kinect and produce the same output?

I recommend using the SimpleOpenNI library (the original snippet was cut off mid-call; the final `image(...)` line is completed here with the library's depth image):

```java
import SimpleOpenNI.*;

SimpleOpenNI ni;

void setup() {
  size(640, 480);
  ni = new SimpleOpenNI(this);
  // With no Kinect attached, play back a recorded .oni file instead.
  if (SimpleOpenNI.deviceCount() == 0) {
    ni.openFileRecording("/path/to/yourRecording.oni");
  }
  ni.enableDepth();
}

void draw() {
  ni.update();
  image(ni.depthImage(), 0, 0);  // draw the current depth frame
}
```