kinect

How to make a control in XAML public so it can be seen in other classes

為{幸葍}努か submitted on 2019-11-29 05:07:00
Question: I'm working on a WPF application. I made a checkbox in the XAML, and my code calls a function in a class; in that function there is an if condition that checks whether the checkbox is checked or not, but the checkbox is not visible in that class. How can I do this? Many thanks. EDIT: Here are the steps I took. I created the ViewModel class under the same project as KinectSkeleton, as shown: ViewModel class: public class ViewModel { public bool IsChecked { get; set; } public bool is_clicked
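The usual WPF answer is not to expose the control itself but to bind its state to a shared view model. Below is a minimal sketch that expands the ViewModel quoted above with change notification; the class and property names follow the excerpt, while the INotifyPropertyChanged wiring is the standard pattern rather than the asker's exact code.

    using System.ComponentModel;

    // Minimal sketch: a view model exposing the checkbox state. The XAML
    // binds CheckBox.IsChecked to this property, and any other class that
    // holds a reference to the same instance can read it.
    public class ViewModel : INotifyPropertyChanged
    {
        private bool isChecked;

        public bool IsChecked
        {
            get { return isChecked; }
            set
            {
                isChecked = value;
                PropertyChanged?.Invoke(this,
                    new PropertyChangedEventArgs(nameof(IsChecked)));
            }
        }

        public event PropertyChangedEventHandler PropertyChanged;
    }

In the XAML the checkbox would be declared as IsChecked="{Binding IsChecked, Mode=TwoWay}" with the window's DataContext set to the shared ViewModel instance, and that same instance passed to the class that needs to test the flag.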

How to display a 3D image when we have depth and RGB Mats in OpenCV (captured from Kinect)

折月煮酒 submitted on 2019-11-29 04:51:55
We captured a 3D image using the Kinect with the OpenNI library and got the RGB and depth images as OpenCV Mats using this code:

    main() {
        OpenNI::initialize();
        puts( "Kinect initialization..." );
        Device device;
        if ( device.open( openni::ANY_DEVICE ) != 0 ) {
            puts( "Kinect not found !" );
            return -1;
        }
        puts( "Kinect opened" );
        VideoStream depth, color;
        color.create( device, SENSOR_COLOR );
        color.start();
        puts( "Camera ok" );
        depth.create( device, SENSOR_DEPTH );
        depth.start();
        puts( "Depth sensor ok" );
        VideoMode paramvideo;
        paramvideo.setResolution( 640, 480 );
        paramvideo.setFps( 30 );
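Once the RGB and depth Mats are available, each depth pixel can be back-projected into 3D with a pinhole camera model and then colored from the registered RGB image. A minimal sketch follows, written in C# to match the other entries on this page rather than the C++ code above; the focal lengths and principal point are assumed nominal values for illustration, not calibrated ones.

    // Sketch: back-project a 640x480 depth frame (millimetres) into 3D
    // points using a pinhole model. fx, fy, cx, cy are assumed nominal
    // values; a real application should use calibrated intrinsics.
    public struct Point3
    {
        public float X, Y, Z;
    }

    public static Point3[] DepthToPointCloud(ushort[] depthMm, int width, int height)
    {
        const float fx = 525.0f, fy = 525.0f;   // assumed focal lengths (pixels)
        const float cx = 319.5f, cy = 239.5f;   // assumed principal point

        var points = new Point3[width * height];
        for (int v = 0; v < height; v++)
        {
            for (int u = 0; u < width; u++)
            {
                int i = v * width + u;
                float z = depthMm[i] / 1000.0f;  // mm -> metres
                points[i] = new Point3
                {
                    X = (u - cx) * z / fx,
                    Y = (v - cy) * z / fy,
                    Z = z,
                };
            }
        }
        return points;
    }

The resulting points, each colored with the RGB pixel at the same (registered) coordinates, can then be rendered with any point-cloud viewer.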

Kinect SDK for finger detection?

孤人 submitted on 2019-11-29 04:38:16
Question: I'm a student from Nanyang Technological University (NTU), Singapore, currently developing a project using the Kinect SDK. My question is: does anyone know how to develop a program that detects a finger (or fingertip) using the Kinect SDK, or perhaps have some reference code? I also tried searching on Google, but the only references I found use OpenNI instead of the Kinect SDK. Thanks and regards. Answer 1: I was looking into that myself, although I haven't gone deep into it. OpenNI has some
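For a first experiment, one crude heuristic (an assumption here, not a Kinect SDK feature) is that when the user points at the sensor, the fingertip is the closest valid depth pixel. A minimal C# sketch of that idea:

    // Crude fingertip heuristic: scan the depth frame for the nearest
    // valid pixel. Zero readings mean "no data" on the Kinect, so they
    // are skipped along with anything closer than the sensor's range.
    public static (int X, int Y, ushort DepthMm)? FindNearestPoint(
        ushort[] depthMm, int width, int height, ushort minValidMm = 400)
    {
        ushort best = ushort.MaxValue;
        int bestX = -1, bestY = -1;
        for (int y = 0; y < height; y++)
        {
            for (int x = 0; x < width; x++)
            {
                ushort d = depthMm[y * width + x];
                if (d >= minValidMm && d < best)
                {
                    best = d;
                    bestX = x;
                    bestY = y;
                }
            }
        }
        if (bestX < 0) return null;
        return (bestX, bestY, best);
    }

More robust approaches segment the hand from the depth map and locate fingertips as convexity extremes of the hand contour, which is what the OpenNI-based samples mentioned in the answer typically do.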

Kinect Depth and Image Frames Alignment

人走茶凉 submitted on 2019-11-29 04:17:10
I am playing around with the new Kinect SDK v1.0.3.190 (other related questions on Stack Overflow concern previous versions of the Kinect SDK). I get depth and color streams from the Kinect. As the depth and RGB streams are captured with different sensors, there is a misalignment between the two frames, as can be seen below. [images: RGB only; depth only; depth & RGB overlaid] I need to align them, and there is a function named MapDepthToColorImagePoint exactly for this purpose. However, it doesn't seem to work. Here is an equally blended (depth and mapped color) result below, created with the following code: Parallel.For(0, this
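A common cause of broken alignment with this call is passing a shifted or unpacked depth value. Below is a sketch of the per-pixel mapping, assuming the KinectSensor.MapDepthToColorImagePoint overload from SDK v1.0/1.5 and the raw packed depth pixels (player index in the low bits) straight from the frame:

    using Microsoft.Kinect;

    // Sketch: map each depth pixel into color space. The SDK expects the
    // packed depth pixel value (depth shifted left by three bits, player
    // index in the low bits), so raw frame data is passed through as-is.
    static void MapDepthFrameToColor(
        KinectSensor sensor, short[] depthPixels,
        int depthWidth, int depthHeight, ColorImagePoint[] mapped)
    {
        for (int y = 0; y < depthHeight; y++)
        {
            for (int x = 0; x < depthWidth; x++)
            {
                int i = y * depthWidth + x;
                mapped[i] = sensor.MapDepthToColorImagePoint(
                    DepthImageFormat.Resolution640x480Fps30, x, y,
                    depthPixels[i], ColorImageFormat.RgbResolution640x480Fps30);
            }
        }
    }

The returned ColorImagePoint coordinates should then be bounds-checked against the color frame before blending, since pixels near the edges can map outside it.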

Otsu thresholding for depth image

偶尔善良 submitted on 2019-11-29 01:35:33
I am trying to subtract the background from depth images acquired with the Kinect. When I learned what Otsu thresholding is, I thought it could help. By converting the depth image to grayscale, I can hopefully apply Otsu thresholding to binarize the image. However, when I implemented (tried to implement) this with OpenCV 2.3, it was in vain. The output image is binarized, but very unexpectedly. I ran the thresholding continuously (i.e. printed the result to screen to analyze each frame) and saw that for some frames the threshold is found to be 160-ish and sometimes it is found to be 0. I couldn't quite
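One plausible cause of the occasional zero threshold is the Kinect's invalid-depth pixels: the sensor reports 0 for "no reading", and a large spike at 0 in the histogram can drag Otsu's optimum down to 0. The sketch below computes Otsu's threshold on an 8-bit depth image while skipping zero-valued pixels (C# here for consistency with the other entries; the same fix applies to the OpenCV 2.3 code by masking zeros before thresholding).

    // Sketch: Otsu's method maximising between-class variance over the
    // histogram, with zero-valued ("invalid depth") pixels excluded.
    static int OtsuThresholdIgnoringZeros(byte[] gray)
    {
        var hist = new int[256];
        int total = 0;
        foreach (byte v in gray)
        {
            if (v == 0) continue;          // treat 0 as "no depth reading"
            hist[v]++;
            total++;
        }
        if (total == 0) return 0;

        long sumAll = 0;
        for (int t = 0; t < 256; t++) sumAll += (long)t * hist[t];

        long sumBg = 0; int weightBg = 0;
        double bestVar = -1; int bestT = 0;
        for (int t = 0; t < 256; t++)
        {
            weightBg += hist[t];
            if (weightBg == 0) continue;   // background class still empty
            sumBg += (long)t * hist[t];
            int weightFg = total - weightBg;
            if (weightFg == 0) break;      // foreground empty from here on
            double meanBg = (double)sumBg / weightBg;
            double meanFg = (double)(sumAll - sumBg) / weightFg;
            double diff = meanBg - meanFg;
            double betweenVar = (double)weightBg * weightFg * diff * diff;
            if (betweenVar > bestVar) { bestVar = betweenVar; bestT = t; }
        }
        return bestT;
    }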

Kinect: Converting from RGB Coordinates to Depth Coordinates

£可爱£侵袭症+ submitted on 2019-11-29 00:43:57
I am using the Windows Kinect SDK to obtain depth and RGB images from the sensor. Since the depth image and the RGB image do not align, I would like to find a way of converting the coordinates of the RGB image to those of the depth image, because I want to apply an image mask to the depth image that I obtained from some processing on the RGB image. There is already a method for converting depth coordinates to color-space coordinates: NuiImageGetColorPixelCoordinatesFromDepthPixel. Unfortunately, the reverse does not exist. There is only an arcane call in INuiCoordinateMapper: HRESULT
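Since the v1 SDK only maps depth to color directly, the standard workaround is to invert that mapping yourself: map every depth pixel to color space once per frame, then build a color-to-depth lookup table. A sketch, shown with the managed API for consistency with the other entries (the same idea applies to the C++ interface named above); depthToColor is assumed to be filled by MapDepthToColorImagePoint as in the alignment entry earlier on this page:

    using Microsoft.Kinect;

    // Sketch: invert the depth->color mapping into a color->depth index.
    static int[] BuildColorToDepthIndex(
        ColorImagePoint[] depthToColor, int depthWidth, int depthHeight,
        int colorWidth, int colorHeight)
    {
        var colorToDepth = new int[colorWidth * colorHeight];
        for (int i = 0; i < colorToDepth.Length; i++)
            colorToDepth[i] = -1;                      // -1 = no depth sample

        for (int i = 0; i < depthWidth * depthHeight; i++)
        {
            ColorImagePoint p = depthToColor[i];
            if (p.X >= 0 && p.X < colorWidth && p.Y >= 0 && p.Y < colorHeight)
                colorToDepth[p.Y * colorWidth + p.X] = i;  // last writer wins
        }
        return colorToDepth;
    }

Color pixels with no depth sample (occlusions, resolution mismatch) stay at -1 and can be filled by nearest-neighbour search if the mask needs to be dense.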

Using Kinect with Emgu CV

こ雲淡風輕ζ submitted on 2019-11-28 22:08:07
With EmguCV, to capture an image from a webcam we use: Capture cap = new Capture(0); Image<Bgr, byte> nextFrame = cap.QueryFrame(); ... ... But I don't know how to capture images from my Kinect; I have tried the kinectCapture class, but it didn't work for me. Thanks. acandaldev: Basically, you need to capture an image from the ColorStream and convert it to an EmguCV Image class. Conversion to an EmguCV Image from a Windows Bitmap (Kinect ColorStream): you have a Windows Bitmap variable which holds the Kinect frame. Bitmap bmap = new Bitmap(weightFrame, HeightFrame, System.Drawing.Imaging.PixelFormat
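A sketch of the full conversion the answer describes: copy a Kinect ColorImageFrame (Bgr32) into a Bitmap, then wrap it in an EmguCV Image. The Image<Bgr, byte>(Bitmap) constructor is from EmguCV 2.x, matching the Capture/QueryFrame API quoted above; variable names other than those quoted are this sketch's own.

    using System.Drawing;
    using System.Drawing.Imaging;
    using Emgu.CV;
    using Emgu.CV.Structure;
    using Microsoft.Kinect;

    // Sketch: Kinect ColorImageFrame -> Bitmap -> EmguCV Image<Bgr, byte>.
    static Image<Bgr, byte> ColorFrameToEmgu(ColorImageFrame frame)
    {
        var pixels = new byte[frame.PixelDataLength];
        frame.CopyPixelDataTo(pixels);

        var bmap = new Bitmap(frame.Width, frame.Height,
                              PixelFormat.Format32bppRgb);
        BitmapData data = bmap.LockBits(
            new Rectangle(0, 0, frame.Width, frame.Height),
            ImageLockMode.WriteOnly, bmap.PixelFormat);
        System.Runtime.InteropServices.Marshal.Copy(
            pixels, 0, data.Scan0, pixels.Length);
        bmap.UnlockBits(data);

        return new Image<Bgr, byte>(bmap);
    }

Calling this from the sensor's ColorFrameReady event handler gives an Image<Bgr, byte> per frame, ready for the usual EmguCV processing pipeline.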

3D model construction using multiple images from multiple points (Kinect)

痞子三分冷 submitted on 2019-11-28 21:36:25
Question: Is it possible to construct a 3D model of a still object if various images along with depth data were gathered from various angles? What I was thinking was to have a sort of circular conveyor belt on which a Kinect would be placed, while the real object to be reconstructed in 3D space sits in the middle. The conveyor belt then rotates around the object in a circle and lots of images are captured (perhaps 10 images per second), which would allow the Kinect to catch
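If the rig angle of each capture is known, the per-frame point clouds can be rotated into a common frame and concatenated, which is the core of such a turntable reconstruction. A sketch below, reusing the Point3 struct from the point-cloud sketch earlier on this page; degreesPerFrame is a hypothetical rig parameter, and real systems refine this coarse alignment with ICP registration.

    using System;
    using System.Collections.Generic;

    // Sketch: merge point clouds captured on a turntable-style rig by
    // rotating each cloud about the vertical (Y) axis by its rig angle.
    static List<Point3> MergeTurntableClouds(
        IList<Point3[]> clouds, double degreesPerFrame)
    {
        var merged = new List<Point3>();
        for (int f = 0; f < clouds.Count; f++)
        {
            double a = f * degreesPerFrame * Math.PI / 180.0;
            float cos = (float)Math.Cos(a), sin = (float)Math.Sin(a);
            foreach (var p in clouds[f])
            {
                merged.Add(new Point3
                {
                    X = cos * p.X + sin * p.Z,
                    Y = p.Y,
                    Z = -sin * p.X + cos * p.Z,
                });
            }
        }
        return merged;
    }

The merged cloud can then be meshed (e.g. with Poisson surface reconstruction) to obtain the final 3D model.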

Kinect for Xbox 360 freezes and disconnects from USB after running Processing SimpleOpenNi depth image example

二次信任 submitted on 2019-11-28 19:59:05
Please help. I've been trying to set up a Kinect for Xbox 360 on Ubuntu in order to start developing an application to control a humanoid robot. For the past four days I've been searching, downloading, installing and trying dozens of libraries and drivers to get the Kinect to work on Ubuntu. In the beginning none of them worked, and I was only able to read the RGB camera with "Camorama" and "guvcview", no matter which library or driver I attempted to run. Finally, I installed a fresh copy of Ubuntu and installed the libfreenect libraries using Synaptic (I'm kind of a newbie), and I also installed the

Kinect intrinsic parameters from field of view

こ雲淡風輕ζ submitted on 2019-11-28 19:42:05
Microsoft states that the field-of-view angles for the Kinect are 43 degrees vertical and 57 degrees horizontal (stated here). Given these, can we calculate the intrinsic parameters, i.e. the focal length and centre of projection? I assume the centre of projection can be given as (0,0,0)? Thanks. EDIT: some more information on what I'm trying to do. I have a dataset of images recorded with a Kinect, and I am trying to convert pixel positions (x_screen, y_screen and z_world (in mm)) to real-world coordinates. If I know the camera is placed at point (x',y',z') in the real-world coordinate system, is it sufficient to
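Under a pinhole model the focal lengths follow directly from the published angles: f = (imageSize / 2) / tan(fov / 2), with the principal point assumed at the image centre rather than at (0,0,0). A sketch using the 640x480 resolution and the 57/43 degree values quoted above:

    using System;

    // Sketch: derive nominal intrinsics from the published field of view.
    // For a pinhole camera, f = (size / 2) / tan(fov / 2); the principal
    // point (cx, cy) is taken as the image centre.
    static (double Fx, double Fy, double Cx, double Cy) IntrinsicsFromFov(
        int width = 640, int height = 480,
        double hFovDeg = 57.0, double vFovDeg = 43.0)
    {
        double fx = (width / 2.0) / Math.Tan(hFovDeg * Math.PI / 360.0);
        double fy = (height / 2.0) / Math.Tan(vFovDeg * Math.PI / 360.0);
        return (fx, fy, width / 2.0, height / 2.0);
    }

This gives roughly fx ≈ 589 and fy ≈ 609 pixels, so a pixel back-projects as X = (x_screen - cx) * z / fx and Y = (y_screen - cy) * z / fy in the camera frame; the camera's pose at (x',y',z') then has to be applied on top as a rigid transform to reach world coordinates.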