kinect

Kinect: How do I ID the first tracked skeleton and do stuff with it after

断了今生、忘了曾经 submitted on 2019-12-20 06:29:35

Question: How can I identify the first skeleton the Kinect tracks and then work with it? I am only interested in the first skeleton; any skeletons that come after it I do not need. Preferably, the next skeleton that comes in is not tracked at all. The code I am currently using (below) does not work. I have tried a quick LINQ query, but I am not sure how to use it and keep getting errors with it. Can someone give me some examples I can work with? Thanks in advance! private
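A minimal sketch of one way to do this with the Kinect SDK 1.x skeleton stream: enable AppChoosesSkeletons, remember the TrackingId of the first skeleton reported as tracked, and ask the runtime to keep tracking only that one. The class and field names are illustrative, not taken from the original post.

```csharp
// Lock onto the first tracked skeleton and ignore everyone who appears later.
using System.Linq;
using Microsoft.Kinect;

public class FirstSkeletonTracker
{
    private int? _firstTrackingId;   // TrackingId of the skeleton we locked onto

    public void Start(KinectSensor sensor)
    {
        sensor.SkeletonStream.Enable();
        // Let the application, not the runtime, decide which skeletons to track.
        sensor.SkeletonStream.AppChoosesSkeletons = true;
        sensor.SkeletonFrameReady += OnSkeletonFrameReady;
        sensor.Start();
    }

    private void OnSkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
    {
        using (SkeletonFrame frame = e.OpenSkeletonFrame())
        {
            if (frame == null) return;

            var skeletons = new Skeleton[frame.SkeletonArrayLength];
            frame.CopySkeletonDataTo(skeletons);

            if (_firstTrackingId == null)
            {
                // LINQ: pick the first skeleton the runtime reports as tracked.
                Skeleton first = skeletons.FirstOrDefault(
                    s => s.TrackingState == SkeletonTrackingState.Tracked);
                if (first == null) return;

                _firstTrackingId = first.TrackingId;
                // From now on only this skeleton is tracked; later people are ignored.
                ((KinectSensor)sender).SkeletonStream.ChooseSkeletons(first.TrackingId);
            }

            Skeleton locked = skeletons.FirstOrDefault(
                s => s.TrackingId == _firstTrackingId &&
                     s.TrackingState == SkeletonTrackingState.Tracked);
            if (locked != null)
            {
                // ... do stuff with the locked skeleton here ...
            }
        }
    }
}
```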

Hand over button event in Kinect SDK 1.7

为君一笑 submitted on 2019-12-20 04:55:38

Question: I am creating a WPF application using Kinect SDK 1.7, and I need to count how many times the user places a hand over a button (not pushes it, just hovers over it). In XAML I only found the event responsible for pushing the button: <k:KinectTileButton Label="Click" Click="PushButtonEvent"></k:KinectTileButton> I cannot find an event responsible for placing a hand over the button (if such an event exists). Do you have any idea which event would do that, or how to solve this problem another way? Answer 1:
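A minimal sketch, assuming the attached hand-pointer events that the Kinect SDK 1.7 interactions toolkit (Microsoft.Kinect.Toolkit.Controls) exposes on KinectRegion: HandPointerEnter fires when a tracked hand moves over a control. The HoverCounter class and the primary-hand filtering are illustrative, not part of the original question.

```csharp
using Microsoft.Kinect.Toolkit.Controls;

public static class HoverCounter
{
    public static int Count { get; private set; }

    // Call once, e.g. from the window constructor, passing the KinectTileButton
    // declared in XAML (referenced by whatever x:Name it is given there).
    public static void Attach(KinectTileButton button)
    {
        KinectRegion.AddHandPointerEnterHandler(button, OnHandEnter);
    }

    private static void OnHandEnter(object sender, HandPointerEventArgs e)
    {
        // Count only the primary hand of the primary user, so a stray second
        // hand hovering over the button does not inflate the count.
        if (e.HandPointer.IsPrimaryUser && e.HandPointer.IsPrimaryHandOfUser)
        {
            Count++;
        }
    }
}
```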

Transform a point cloud's coordinates to a new coordinate frame in Point Cloud Library, making the ground plane the X-O-Y plane?

拥有回忆 submitted on 2019-12-20 04:17:21

Question: I have a point cloud from Kinect Fusion and use Point Cloud Library to segment the ground plane (a*x + b*y + c*z + d = 0) successfully (I got the a, b, c, d in the pcl::ModelCoefficients of the ground plane). Now I need to transform the Cartesian coordinates into new Cartesian coordinates in which the ground plane becomes the X-O-Y plane (z = 0). I guess I can do it with this API (but I don't know how): http://docs.pointclouds.org/trunk/group__common.html#transformPointCloud My Answer: Look at this PCL API
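One way to build the required rigid transform, sketched here as math rather than as the exact answer from the thread: shift the plane so it passes through the origin, then rotate its unit normal onto the z-axis with Rodrigues' formula.

```latex
% Rodrigues rotation that maps the plane normal n onto the z-axis k, applied
% after shifting the plane so that it passes through the origin.
\begin{gather*}
\mathbf{n} = \frac{(a,\,b,\,c)}{\sqrt{a^{2}+b^{2}+c^{2}}}, \qquad
\mathbf{k} = (0,\,0,\,1), \qquad
d' = \frac{d}{\sqrt{a^{2}+b^{2}+c^{2}}} \\
\mathbf{u} = \frac{\mathbf{n}\times\mathbf{k}}{\lVert\mathbf{n}\times\mathbf{k}\rVert}, \qquad
\theta = \arccos(\mathbf{n}\cdot\mathbf{k}), \qquad
R = I + \sin\theta\,[\mathbf{u}]_{\times} + (1-\cos\theta)\,[\mathbf{u}]_{\times}^{2} \\
\mathbf{p}' = R\,\bigl(\mathbf{p} + d'\,\mathbf{n}\bigr)
\end{gather*}
```

Every transformed point p' then sees the ground plane as z' = 0: the shift d'·n moves the plane through the origin and R rotates its normal onto k (if n is already parallel to k, R is the identity). Packing the linear part R and the translation R·d'·n into an Eigen::Affine3f gives the matrix that pcl::transformPointCloud expects.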

Developing Virtual Reality / AR with the Kinect Motion Sensor and the Unity3D Engine

天大地大妈咪最大 submitted on 2019-12-19 17:12:12

At the first OSChina 源创会 meetup of 2015, @爱吃鱼的猫大哥 gave a talk, "Virtual reality — there is actually a lot we can do", which mainly introduced the Metaio SDK augmented-reality framework. Since that surely left everyone wanting more, today I am serving something stronger. For a company project I also built a small Android VR demo with that SDK, but it is a paid product and some features are locked unless you pay; and since we then needed to build a large outdoor VR experience, after weighing the options I chose Microsoft's Kinect motion sensor combined with the Unity3D engine to develop the AR application. @红薯, maybe consider inviting me as a guest speaker some time, haha! Enough chatter, straight to the useful part. First, the ingredients and seasonings.

The rough pipeline: once the Kinect is opened and running, a script bound in Unity first grabs the RGB stream, the depth stream and the skeleton data (mainly as preparation for recognizing gestures and locating people). Scripts then pick up the individual gestures and positions: for example, every SwipeRight (a wave of the right hand) switches in an effect scene that blends directly with the live camera feed (screenshots later), and RaiseRightHand makes meteors crash into the ground, like the end of the world. Finally, Unity's GuiTexture is used to display the RGB stream in real time. So much for the description; first a screenshot to get a feel for enthusiastically destroying the office. Good, the materials are ready, and here comes the main part.
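An illustrative Unity sketch of the flow described above: a script receives gesture callbacks from some Kinect middleware and maps SwipeRight / RaiseRightHand to scene effects. The Gesture enum and the OnGesture callback are hypothetical placeholders for whatever wrapper is actually used in the post, not its real API.

```csharp
using UnityEngine;

// Hypothetical gesture identifiers; the real middleware defines its own.
public enum Gesture { SwipeRight, RaiseRightHand }

public class GestureReactions : MonoBehaviour
{
    public GameObject particleScenePrefab;   // effect blended over the live RGB feed
    public GameObject meteorEffectPrefab;    // "meteors hitting the ground"

    // Assumed to be called by the Kinect wrapper whenever it recognises a gesture.
    public void OnGesture(Gesture gesture)
    {
        switch (gesture)
        {
            case Gesture.SwipeRight:
                // Swap in the effect scene on each right-hand swipe.
                Instantiate(particleScenePrefab, Vector3.zero, Quaternion.identity);
                break;
            case Gesture.RaiseRightHand:
                // Trigger the meteor shower.
                Instantiate(meteorEffectPrefab, Vector3.zero, Quaternion.identity);
                break;
        }
    }
}
```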

Developing Motion-Sensing Games with Kinect and the Unity3D Engine (Part 1)

浪子不回头ぞ submitted on 2019-12-19 16:41:59

A recent company project requires motion-sensing games for a science museum. I had never worked on games before, although I had always wanted to; I suspect most of us got into programming hoping to make games, haha. I had also never touched motion-sensing games and had mostly only seen them in videos, but the internet is well developed these days, and after reading up on motion-sensing game development it turned out that the best current option is to develop the interaction with Unity3D combined with Kinect. In one sentence, it feels like walking on the world's technological cutting edge.

I started digging through online material on September 7th and never found a solution that was both thorough and simple. I have seen essentially three approaches online, though frankly they all come down to one thing: middleware. The three approaches are:

1. Carnegie Mellon's kinectWrapper.unitypackage;
2. The official OpenNI OpenNI_Unity_Toolkit-0.9.7.4.unitypackage (no longer updated or supported; it targets Unity 3.4, and newer Unity versions hit many problems that require extensive modifications, which is troublesome);
3. A self-written interaction middleware. I have seen middleware written by experts online, packaged as a DLL, and after spending time implementing things myself I found that writing such a middleware is actually not that hard; I plan to write one myself later.

I went with the first approach, for roughly the reasons given in the three points above.

What is the best algorithm for static posture recognition with Kinect skeletal joints?

孤街浪徒 submitted on 2019-12-19 10:14:22

Question: Do you know any robust way of recognizing a static posture? I have tried saving every joint position within given intervals (Xmax, Xmin, Ymax, Ymin, Zmax, Zmin) and then checking whether all 20 joints are within those intervals, but it does not work well at all. After this I tried coordinates relative to the parent joint, but again it does not work. I don't know how to do this. Has anyone done it? I am referring only to static postures here, not dynamic ones. Answer 1: You can try by defining
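A minimal sketch of one common approach (not the answer the thread actually gives, which is cut off above): make the pose translation- and scale-invariant by expressing every joint relative to the hip center and dividing by the torso length, then compare against a stored template with a per-joint distance threshold. The class name and the 0.25 threshold are illustrative.

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Kinect;

public static class PostureMatcher
{
    // Normalise: joint positions relative to HipCenter, scaled by torso length,
    // so the same posture matches regardless of where and how big the user is.
    public static Dictionary<JointType, SkeletonPoint> Normalize(Skeleton s)
    {
        SkeletonPoint hip = s.Joints[JointType.HipCenter].Position;
        SkeletonPoint shoulder = s.Joints[JointType.ShoulderCenter].Position;
        float torso = Distance(hip, shoulder);

        var result = new Dictionary<JointType, SkeletonPoint>();
        foreach (Joint joint in s.Joints)
        {
            SkeletonPoint p = joint.Position;
            result[joint.JointType] = new SkeletonPoint
            {
                X = (p.X - hip.X) / torso,
                Y = (p.Y - hip.Y) / torso,
                Z = (p.Z - hip.Z) / torso
            };
        }
        return result;
    }

    // A posture matches a template when every template joint is within the threshold.
    public static bool Matches(Dictionary<JointType, SkeletonPoint> pose,
                               Dictionary<JointType, SkeletonPoint> template,
                               float threshold = 0.25f)
    {
        foreach (var kv in template)
        {
            if (Distance(pose[kv.Key], kv.Value) > threshold) return false;
        }
        return true;
    }

    private static float Distance(SkeletonPoint a, SkeletonPoint b)
    {
        float dx = a.X - b.X, dy = a.Y - b.Y, dz = a.Z - b.Z;
        return (float)Math.Sqrt(dx * dx + dy * dy + dz * dz);
    }
}
```

A template is recorded once by calling Normalize on a skeleton captured while someone holds the target posture; at runtime each incoming skeleton is normalised the same way and passed to Matches.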

I can't execute a project on Visual Studio 2012

邮差的信 submitted on 2019-12-19 05:01:22

Question: I am working on this Robosapien Kinect project in C# ( http://www.youtube.com/watch?v=TKpO5F8LsCk ) and I downloaded the zipped source code from here: https://github.com/fatihboy/Robosapien . I don't know why, but when I open the KinectRopsapien project with Visual Studio 2012 and run and debug the MainWindow.xaml.cs window, the window that should show what the Kinect is filming does not open, and there is a blue bar at the bottom saying "Ready". I have the Kinect for Windows SDK 1.7 installed on my computer.

Aligning captured depth and rgb images

 ̄綄美尐妖づ 提交于 2019-12-19 04:45:31
Question: There have been previous questions (here, here and here) related to mine, but my question has a different aspect that I have not seen in any of the previously asked questions. I have acquired a dataset for my research using the Kinect depth sensor. The dataset is in the form of .png images for both the depth and RGB streams at a given instant. To give you a better idea, below are the frames. EDIT: I am adding the edge detection output here. Sobel edge detection output for: RGB Image
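For context, a hedged sketch of how depth/RGB alignment is usually done at capture time with the Kinect SDK 1.6+ CoordinateMapper; this only helps if the frames can be re-captured, not for already-saved .png files, and the stream formats below are illustrative.

```csharp
using Microsoft.Kinect;

public static class DepthColorAlignment
{
    // For every depth pixel, compute where it falls in the colour image.
    public static ColorImagePoint[] MapDepthToColor(KinectSensor sensor,
                                                    DepthImagePixel[] depthPixels)
    {
        var colorPoints = new ColorImagePoint[depthPixels.Length];

        sensor.CoordinateMapper.MapDepthFrameToColorFrame(
            DepthImageFormat.Resolution640x480Fps30,
            depthPixels,
            ColorImageFormat.RgbResolution640x480Fps30,
            colorPoints);

        return colorPoints;
    }
}
```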

Where can I learn/find examples of gesture recognition streamed from Kinect, using OpenCV?

…衆ロ難τιáo~ submitted on 2019-12-18 11:59:26

Question: I have a Kinect and drivers for Windows and Mac OS X. Are there any examples of gesture recognition streamed from the Kinect using the OpenCV API? I am trying to achieve something similar to the DaVinci prototype on the Xbox Kinect, but on Windows and Mac OS X. Answer 1: I don't think it will be that simple, mainly because the depth image data from the Kinect is not very precise. Beyond a distance of 1 m to 1.5 m all the fingers merge together, so you will not get clear enough contours to detect them. Answer 2: The demo from your