kinect

How to align Kinect's depth image with the color image

妖精的绣舞 submitted on 2019-12-02 22:22:35
The images produced by the color and depth sensors on the Kinect are slightly out of alignment. How can I transform them to make them line up? The key to this is the call to 'Runtime.NuiCamera.GetColorPixelCoordinatesFromDepthPixel'. Here is an extension method for the Runtime class. It returns a WriteableBitmap object that is automatically updated as new frames come in, so using it is really simple: kinect = new Runtime(); kinect.Initialize(RuntimeOptions.UseColor | RuntimeOptions.UseSkeletalTracking | RuntimeOptions.UseDepthAndPlayerIndex); kinect.DepthStream.Open
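The excerpt is cut off mid-call, but the core of the approach is the per-pixel lookup named above. Below is a minimal sketch of that lookup, assuming the beta-era Microsoft.Research.Kinect.Nui API (Runtime, ImageResolution, ImageViewArea); the chosen resolution and the handling of the packed depth value are illustrative and may need adjusting for your stream settings.

```csharp
// Minimal sketch of the per-pixel depth-to-color lookup, assuming the beta-era
// Microsoft.Research.Kinect.Nui API referenced in the question above.
using Microsoft.Research.Kinect.Nui;

static class DepthColorAlignment
{
    static void MapDepthPixelToColor(Runtime kinect, int depthX, int depthY, short packedDepth)
    {
        int colorX, colorY;

        // Ask the runtime where this depth pixel lands in the 640x480 color image.
        // packedDepth is the raw 16-bit value read from the depth stream; depending
        // on the stream type (player index in the low 3 bits or not) you may need
        // to shift it before passing it in.
        kinect.NuiCamera.GetColorPixelCoordinatesFromDepthPixel(
            ImageResolution.Resolution640x480,   // resolution of the color stream
            new ImageViewArea(),                 // default (no zoom/pan) view area
            depthX, depthY,                      // pixel position in the depth image
            packedDepth,
            out colorX, out colorY);

        // colorX/colorY can now be used to sample the color frame for this depth
        // pixel, which is what brings the two images into alignment.
    }
}
```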

Access the value of a button and checkbox of a XAML file in another class in a WPF C# application

我怕爱的太早我们不能终老 submitted on 2019-12-02 19:17:32
Question: I'm working on a WPF Kinect project. It's one of the developer toolkit samples for the Windows Kinect called "Kinect Explorer", which you can download with the Kinect Developer Toolkit SDK ver 1.5. In kinectwindow.xaml I added a button and a checkbox. There is also a class called kinectskeleton.cs in which I created two DataTables and a boolean variable. The first DataTable is filled in the OnRender function while the other is empty, and the boolean variable is set to false by default. So, what I want is that when the button in kinectwindow.xaml.cs is pressed, the latest data in the filled
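The excerpt stops short of the actual question, but one common pattern for this situation is sketched below: the window exposes the control state and pushes it into the other class when the button is clicked, rather than having the other class reach into the XAML controls. This is a hedged sketch with hypothetical names ("exportCheckBox", "ExportButton_Click", "KinectSkeletonData"), not the Kinect Explorer sample's actual code.

```csharp
// Hedged sketch: the window owns the controls and hands their state to the class
// that owns the DataTables when the button is pressed. Names are hypothetical.
using System.Windows;

public partial class KinectWindow : Window
{
    private readonly KinectSkeletonData skeletonData = new KinectSkeletonData();

    // Read-only view of the checkbox added in kinectwindow.xaml (x:Name="exportCheckBox").
    public bool IsExportChecked
    {
        get { return exportCheckBox.IsChecked == true; }
    }

    // Click handler of the button added in kinectwindow.xaml.
    private void ExportButton_Click(object sender, RoutedEventArgs e)
    {
        // Hand the latest UI state to the class that owns the DataTables.
        skeletonData.OnExportRequested(IsExportChecked);
    }
}

// Stands in for the logic in kinectskeleton.cs: it receives the values it needs
// rather than reading the controls directly.
public class KinectSkeletonData
{
    public void OnExportRequested(bool exportEnabled)
    {
        // e.g. copy the filled DataTable, set the boolean flag, etc.
    }
}
```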

Facial Recognition with Kinect

只愿长相守 submitted on 2019-12-02 18:32:09
Lately I have been working on facial recognition with the Kinect, using the new Developer Toolkit (v1.5.1). The API for the FaceTracking tools can be found here: http://msdn.microsoft.com/en-us/library/jj130970.aspx . Basically, what I have tried to do so far is obtain a "facial signature" unique to each person. To do this, I referenced the facial points the Kinect tracks (diagram omitted). Then I tracked my face (plus a couple of friends') and calculated the distance between points 39 and 8 using basic algebra. I also obtained the current depth of the head. Here's a sample of the data I
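The measurement described above is just a Euclidean distance between two tracked points, optionally normalized by head depth so that the value is less sensitive to how far the person stands from the sensor. Here is a hedged sketch of that calculation; the point type is a plain struct, not the FaceTracking SDK's own types, and the indices 39 and 8 are whatever feature points you choose to compare.

```csharp
// Distance between two tracked face points, with an optional depth normalization.
using System;

public struct Point3
{
    public float X, Y, Z;
    public Point3(float x, float y, float z) { X = x; Y = y; Z = z; }
}

public static class FaceSignature
{
    public static double Distance(Point3 a, Point3 b)
    {
        double dx = a.X - b.X, dy = a.Y - b.Y, dz = a.Z - b.Z;
        return Math.Sqrt(dx * dx + dy * dy + dz * dz);
    }

    // Dividing by head depth gives a roughly scale-invariant feature when the
    // points come from a projected (2D) shape rather than true 3D coordinates.
    public static double NormalizedDistance(Point3 a, Point3 b, double headDepthMeters)
    {
        return Distance(a, b) / headDepthMeters;
    }
}
```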

Microsoft Kinect SDK depth data to real world coordinates

人盡茶涼 submitted on 2019-12-02 18:23:16
I'm using the Microsoft Kinect SDK to get the depth and color information from a Kinect and then convert that information into a point cloud. I need the depth information to be in real-world coordinates with the centre of the camera as the origin. I've seen a number of conversion functions, but these are apparently for OpenNI and non-Microsoft drivers. I've read that the depth information coming from the Kinect is already in millimetres and is contained in 11 bits... or something. How do I convert this bit information into real-world coordinates that I can use? Thanks in advance! This is
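The usual answer is the standard pinhole back-projection from a depth pixel (u, v) plus a depth in millimetres to camera-centred XYZ in metres. The sketch below uses nominal Kinect V1 intrinsics for a 640x480 depth image; these constants are approximations, and production code should prefer the SDK's own mapping of depth pixels to SkeletonPoints, which is already expressed in metres relative to the sensor.

```csharp
// Hedged sketch: pinhole back-projection with assumed (nominal) Kinect V1 intrinsics.
public static class DepthToWorld
{
    const double Fx = 571.26, Fy = 571.26;   // assumed focal length in pixels (640x480)
    const double Cx = 320.0, Cy = 240.0;     // assumed principal point

    // u, v: pixel coordinates in the depth image; depthMm: depth in millimetres.
    public static (double X, double Y, double Z) ToWorld(int u, int v, int depthMm)
    {
        double z = depthMm / 1000.0;          // millimetres -> metres
        double x = (u - Cx) * z / Fx;         // right of the camera centre
        double y = -(v - Cy) * z / Fy;        // up (image v grows downwards)
        return (x, y, z);
    }
}
```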

What is the difference between OpenNI and OpenKinect?

核能气质少年 submitted on 2019-12-02 17:09:33
I am considering using Kinect in one of my projects, but I am totally lost among all the libraries and don't know what is what exactly. Most importantly, I am reading about OpenNI and OpenKinect, but I don't know their relation or differences. PS: I am using Ubuntu or Mac. OpenKinect is a community of people, not a library. The OpenKinect community releases the libfreenect Kinect driver. libfreenect and OpenNI+SensorKinect are two competing, open-source libraries/drivers. libfreenect (Apache 2.0 or GPLv2) derives from the initial, reverse-engineered/hacked Kinect driver, whereas OpenNI+SensorKinect

Mapping an ellipse to a joint in Kinect SDK 1.5

只愿长相守 submitted on 2019-12-02 13:02:45
I want to map an ellipse to the hand joint, and the ellipse has to move as my hand joint moves. Please provide some reference links that would help me write programs using Kinect SDK 1.5. Thank you. Although what @Heisenbug suggested would work, there is a much simpler way in WPF. You can find a tutorial on it at Channel 9's Skeleton Fundamentals. Basically you need a Canvas and however many ellipses you want. Here is the XAML: <Window x:Class="SkeletalTracking.MainWindow" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
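The XAML excerpt is cut off, so here is a hedged sketch of the code-behind step it builds toward: map a tracked hand joint into color-image pixel coordinates and position an Ellipse on the Canvas at that spot. It assumes the SDK 1.5-era KinectSensor.MapSkeletonPointToColor method (later versions use CoordinateMapper instead) and a hypothetical element name.

```csharp
// Hedged sketch: move an Ellipse on a Canvas to follow the right-hand joint.
using System.Windows.Controls;
using System.Windows.Shapes;
using Microsoft.Kinect;

public static class JointMapper
{
    public static void PositionOnCanvas(KinectSensor sensor, Skeleton skeleton,
                                        Ellipse handEllipse)
    {
        Joint hand = skeleton.Joints[JointType.HandRight];
        if (hand.TrackingState != JointTrackingState.Tracked) return;

        // Convert the joint's skeleton-space position to 640x480 color coordinates.
        ColorImagePoint point = sensor.MapSkeletonPointToColor(
            hand.Position, ColorImageFormat.RgbResolution640x480Fps30);

        // Centre the ellipse on the mapped point (assumes the Canvas matches the
        // 640x480 color image; otherwise scale the coordinates first).
        Canvas.SetLeft(handEllipse, point.X - handEllipse.Width / 2);
        Canvas.SetTop(handEllipse, point.Y - handEllipse.Height / 2);
    }
}
```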

How to use a skeletal joint to act as a cursor using bounds (no gestures)

核能气质少年 submitted on 2019-12-02 10:35:45
I just want to be able to do something when my skeletal joint's (x, y, z) coordinates are over the (x, y, z) coordinates of the button. I have the following code, but somehow it doesn't work properly: as soon as my hand moves it does something without my hand ever reaching the button. if (skeletonFrame != null) { //int skeletonSlot = 0; Skeleton[] skeletonData = new Skeleton[skeletonFrame.SkeletonArrayLength]; skeletonFrame.CopySkeletonDataTo(skeletonData); Skeleton playerSkeleton = (from s in skeletonData where s.TrackingState == SkeletonTrackingState.Tracked select s).FirstOrDefault(); if
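The excerpt stops mid-condition, but the usual fix is to compare the hand in screen space against the button's actual on-screen bounds, instead of comparing raw skeleton-space (x, y, z) values. Below is a hedged sketch of that hit test, assuming the SDK 1.5-era KinectSensor.MapSkeletonPointToColor mapping and hypothetical names for the button and root visual.

```csharp
// Hedged sketch: hit-test the mapped hand position against the button's bounds.
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;
using Microsoft.Kinect;

public static class HandCursor
{
    public static bool IsHandOverButton(KinectSensor sensor, Skeleton playerSkeleton,
                                        Button myButton, Visual rootVisual)
    {
        Joint hand = playerSkeleton.Joints[JointType.HandRight];
        if (hand.TrackingState != JointTrackingState.Tracked) return false;

        // Map the joint to 640x480 color coordinates (the space the UI mirrors).
        ColorImagePoint p = sensor.MapSkeletonPointToColor(
            hand.Position, ColorImageFormat.RgbResolution640x480Fps30);

        // The button's rectangle relative to the root visual the video fills.
        Rect bounds = myButton.TransformToAncestor(rootVisual)
                              .TransformBounds(new Rect(0, 0,
                                  myButton.ActualWidth, myButton.ActualHeight));

        return bounds.Contains(new Point(p.X, p.Y));
    }
}
```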

Kinect Data

删除回忆录丶 submitted on 2019-12-02 10:33:17
Original article link. Comparison of Kinect V1 and V2: appearance, parameters, error distribution with increasing distance, and color error distribution; Kinect V2 exhibits flying points at object boundaries. Reference: Comparison of Kinect V1 and V2 Depth Images in Terms of Accuracy and Precision - ACCV2016.

How should Kinect data be processed, given its low precision? Kinect data is low-precision depth video, with two defining characteristics: low precision and a continuous stream of frames. KinectFusion can be used to fuse K consecutive frames (for example, K = 30) into a single frame for subsequent processing. In the figure (not reproduced here), the left image is a single frame of data and the right image is the result of fusing 30 consecutive frames.

What is KinectFusion? A detailed introduction can be found in the dedicated topic "KinectFusion介绍" (KinectFusion Introduction).

Why is global registration needed? Because Kinect data precision is low, registration error accumulates severely when scanning large objects. In the figure (not reproduced here), the first column shows three pairs of point clouds after ICP registration; locally, each pair registers well. Registering the point clouds pairwise, as in the middle column, gives a poor overall result with severe accumulated error. The third column shows the point clouds after global registration: the registration error is distributed across every frame, reducing the overall registration error.
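The post describes fusing K consecutive depth frames into one before further processing. Below is a greatly simplified stand-in for that idea (a per-pixel average over K frames for a static camera); it is not actual KinectFusion, which tracks the camera and fuses frames into a volumetric TSDF, but it illustrates the noise-reduction intuition.

```csharp
// Simplified stand-in for "fuse K consecutive frames": per-pixel average over K
// depth frames, ignoring zero (invalid) readings. Assumes a static camera.
public static class DepthFrameFusion
{
    // frames: K depth images of identical size, values in millimetres, 0 = invalid.
    public static ushort[] FuseFrames(ushort[][] frames, int width, int height)
    {
        var fused = new ushort[width * height];
        for (int i = 0; i < width * height; i++)
        {
            int sum = 0, count = 0;
            foreach (var frame in frames)
            {
                if (frame[i] != 0) { sum += frame[i]; count++; }
            }
            fused[i] = count > 0 ? (ushort)(sum / count) : (ushort)0;
        }
        return fused;
    }
}
```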
