Kinect

I can't run a project in Visual Studio 2012

别等时光非礼了梦想 · Submitted on 2019-12-01 02:01:51
I am doing this Robosapien Kinect project in C# ( http://www.youtube.com/watch?v=TKpO5F8LsCk ) and I downloaded the zipped source code from here: https://github.com/fatihboy/Robosapien . I don't know why, when I open the KinectRobosapien project in Visual Studio 2012 and run and debug the MainWindow.xaml.cs window, the window that should show what the Kinect is filming does not open, and there is a blue bar at the bottom saying "Ready". I have the Kinect for Windows SDK 1.7 installed on my computer. The first image shows a message that appears twice while the code is being debugged, in which I click

Aligning captured depth and RGB images

让人想犯罪 __ · Submitted on 2019-12-01 01:34:08
There have been previous questions ( here , here and here ) related to mine; however, my question has a different aspect that I have not seen in any of the previously asked ones. I have acquired a dataset for my research using a Kinect depth sensor. The dataset consists of .png images for both the depth and RGB streams at a given instant. To give you more of an idea, below are the frames: EDIT: I am adding the edge-detection output here. Sobel edge-detection output for: RGB Image Depth Image Now what I am trying to do is align these two frames to give me a combined RGBZ image.
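One common way to register the two frames is to back-project each depth pixel to 3D, move it into the RGB camera's frame, and re-project it onto the RGB image plane. A minimal sketch of that per-pixel mapping follows; the intrinsics and the single-translation extrinsics below are placeholder assumptions, not calibrated Kinect parameters:

```python
# Sketch: per-pixel depth-to-RGB registration with pinhole camera models.
# All intrinsics (fx, fy, cx, cy) and the baseline are assumed/illustrative.

def backproject(u, v, z_mm, fx, fy, cx, cy):
    """Depth pixel (u, v) with depth z_mm -> 3D point in the depth camera frame."""
    x = (u - cx) * z_mm / fx
    y = (v - cy) * z_mm / fy
    return (x, y, z_mm)

def project(x, y, z, fx, fy, cx, cy):
    """3D point -> pixel coordinates in the RGB camera frame."""
    return (fx * x / z + cx, fy * y / z + cy)

def register_pixel(u, v, z_mm, depth_K, rgb_K, baseline_mm=25.0):
    """Map one depth pixel into RGB image coordinates.

    The extrinsics are reduced to a pure horizontal translation
    (baseline_mm), which is a rough approximation of the real
    depth-to-color offset."""
    x, y, z = backproject(u, v, z_mm, *depth_K)
    x += baseline_mm  # shift into the RGB camera's frame (assumed x-offset only)
    return project(x, y, z, *rgb_K)
```

Running this over every valid depth pixel and sampling the RGB image at the returned coordinates yields the combined RGBZ frame; with a real calibration you would replace the translation with a full rotation plus translation.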

Save Kinect's color camera video stream into an .avi video

◇◆丶佛笑我妖孽 · Submitted on 2019-11-30 21:36:29
I want to save the video stream captured by the Kinect's color camera to an .avi video. I have tried many ways of doing this, but none succeeded. Has anyone done this successfully? I'm using the Kinect for Windows SDK and WPF for application development. Markus: I guess the easiest workaround would be to use screen-capture software like http://camstudio.org/ . There is also a post with the same question here: Kinect recording a video in C# WPF . As far as I understand, you need to save the single frames delivered by the Kinect into a video file. This post should explain how to do it.

PCL: create a .pcd cloud

瘦欲@ · Submitted on 2019-11-30 16:57:17
This is what I have so far, and I want to save a .pcd file from it. I know I have to do something like this, but I'm not exactly sure:

pcl::PointCloud<pcl::PointXYZRGBA> cloud;
pcl::io::savePCDFileASCII ("test.pcd", cloud);

What do I have to add to my current code so that I get test.pcd? Thanks.

#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/io/openni_grabber.h>
#include <pcl/visualization/cloud_viewer.h>
#include <pcl/common/time.h>

class SimpleOpenNIProcessor
{
public:
  SimpleOpenNIProcessor () : viewer ("PCL OpenNI Viewer") {}

  void cloud_cb_ (const pcl::PointCloud<pcl::PointXYZRGBA>::ConstPtr &cloud)

How to project a point cloud onto the ground plane and transfer it into a 2D image (OpenCV Mat) in the Point Cloud Library?

情到浓时终转凉″ · Submitted on 2019-11-30 16:44:52
I want to segment stones on the ground and count the area of the stones, like this: I have been writing OpenCV code for 2 years and find it really hard to segment the stones using only an OpenCV RGB picture, so I use Kinect Fusion to scan the ground and get a point cloud, in which the stones are higher than the ground. I use the Point Cloud Library to segment the ground plane (in green) like this: Now I am trying to project the remaining points onto the ground plane and get a 2D image in OpenCV Mat format (the height of each original point becomes the value of the projected point in the 2D ground image).
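The projection step described above can be sketched in plain geometry: drop each point onto the fitted plane along the plane normal, keep its signed height, and bin the plane coordinates into a height image. The sketch below makes illustrative assumptions (a unit plane normal, two orthonormal in-plane axes, a fixed grid resolution); it stands in for filling an OpenCV Mat:

```python
# Sketch: project cloud points onto a ground plane and rasterize their
# heights into a 2D grid (a stand-in for an OpenCV Mat). Plane parameters,
# axes, and grid resolution are illustrative assumptions.

def project_to_plane(points, n, p0):
    """For each 3D point, return (projected_point, signed_height) relative
    to the plane through p0 with unit normal n."""
    out = []
    for p in points:
        d = sum((p[i] - p0[i]) * n[i] for i in range(3))   # signed distance
        proj = tuple(p[i] - d * n[i] for i in range(3))    # foot of the point
        out.append((proj, d))
    return out

def rasterize(projected, u_axis, v_axis, p0, cell=1.0, size=(10, 10)):
    """Build a size[0] x size[1] height image: each projected point falls
    into a cell, which keeps the maximum height seen (stones stick up)."""
    img = [[0.0] * size[1] for _ in range(size[0])]
    for proj, h in projected:
        rel = tuple(proj[i] - p0[i] for i in range(3))
        u = sum(rel[i] * u_axis[i] for i in range(3))      # in-plane coords
        v = sum(rel[i] * v_axis[i] for i in range(3))
        r, c = int(u / cell), int(v / cell)
        if 0 <= r < size[0] and 0 <= c < size[1]:
            img[r][c] = max(img[r][c], h)
    return img
```

With the plane found by PCL's RANSAC segmentation, `n` and `p0` come from the plane coefficients, and the resulting grid can be copied into a `cv::Mat` for thresholding and contour-area counting.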

How to convert Kinect raw depth info to meters in Matlab?

被刻印的时光 ゝ · Submitted on 2019-11-30 16:20:36
I have done some research here to understand this topic, but I have not achieved good results. I'm working with a Kinect for Windows and the Kinect SDK 1.7, and I use Matlab to process the raw depth-map info. First, I'm using this method ( https://stackoverflow.com/a/11732251/3416588 ) to store the Kinect raw depth data in a text file. I got a list with 480 x 640 = 307200 elements and data like this: 23048 23048 23048 -8 -8 -8 -8 -8 -8 -8 -8 6704 6720 6720 6720 6720 6720 6720 6720 6720 6736 6736 6736 6736 6752 0 0 Then in Matlab I convert these values to binary, so I get 16-bit numbers.
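In the Kinect SDK 1.x packed depth format, the low 3 bits of each 16-bit sample carry the player index and the upper 13 bits the depth in millimetres, so the conversion is a mask and a shift rather than binary-string manipulation. A small sketch (shown in Python for brevity; the same `bitand`/`bitshift` calls exist in Matlab) of decoding the dumped values, including the negative ones, which are just the same bit patterns read back as signed integers:

```python
# Sketch: decode Kinect SDK 1.x packed 16-bit depth samples.
# Low 3 bits = player index, upper 13 bits = depth in millimetres.
# Negative numbers in the text dump are uint16 patterns read as signed ints.

def decode_raw_depth(raw):
    """Return (depth_m, player_index) for one raw sample from the dump."""
    bits = raw & 0xFFFF       # reinterpret signed dump values as uint16
    player = bits & 0x7       # low 3 bits: player index (0 = no player)
    depth_mm = bits >> 3      # upper 13 bits: depth in millimetres
    return depth_mm / 1000.0, player
```

For example, 6704 decodes to 0.838 m with no player, while -8 (0xFFF8) decodes to 8.191 m, which lies outside the sensor's working range and so acts as an "unknown/invalid" sentinel rather than a real distance.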

Point-Cloud of Body Using Kinect SDK

孤人 · Submitted on 2019-11-30 13:05:42
Question: I am making a program with the SDK where, when users are detected, the program draws a skeleton for them to follow. I recently saw a game advertised on my Xbox, Nike+ Kinect, and saw how it displays a copy of the character doing something else, like: http://www.swaggerseek.com/wp-content/uploads/2012/06/fcb69__xboxkinect1.jpg Or http://www.swaggerseek.com/wp-content/uploads/2012/06/fcb69__xboxkinect.jpg Can I create a point-cloud representation of only the person detected (not any of the
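One approach consistent with the SDK 1.x depth format: keep only depth pixels whose player index (the low 3 bits of each raw sample) is non-zero, then back-project those pixels to 3D. The sketch below assumes a flat frame of raw samples and placeholder intrinsics; it is an illustration of the filtering idea, not the SDK's own API:

```python
# Sketch: build a point cloud of only the tracked person from a Kinect
# SDK 1.x depth frame. Intrinsics are placeholder values; the player-index
# layout (low 3 bits of each raw sample) follows the SDK 1.x depth format.

def body_point_cloud(raw_frame, width, fx=580.0, fy=580.0, cx=320.0, cy=240.0):
    """raw_frame: flat list of raw 16-bit depth samples, row-major.
    Returns (x, y, z) points in millimetres for player pixels only."""
    points = []
    for i, raw in enumerate(raw_frame):
        bits = raw & 0xFFFF
        if bits & 0x7 == 0:        # player index 0 -> background, skip
            continue
        z = bits >> 3              # depth in millimetres
        u, v = i % width, i // width
        points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points
```

Rendering that point list (e.g. as a rotated silhouette) gives a body-only copy of the user, similar in spirit to the Nike+ Kinect display linked above.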

Survey and analysis of similar software products for my engineering practice topic

こ雲淡風輕ζ · Submitted on 2019-11-30 12:13:18
With the boom in intelligent manufacturing and artificial intelligence, the machine vision industry has also taken off. Human-computer interaction has gradually shifted from being computer-centered to user-centered, and gesture recognition lets users interact with a computer or other smart terminal without any extra tools. In recent years, the rise of VR/AR has made gesture recognition even more important and the market demand more urgent. For this engineering practice project, I plan to capture data from a stereo camera with a computer, preprocess the image frames using computer vision techniques and OpenCV, and then develop a gesture recognition system with machine learning methods, with the initial goal of simulating mouse and keyboard input to interact with the computer. Below I survey and analyze gesture recognition software related to my project topic.

Our interaction with computers has evolved from "keyboard and mouse" to "touch screens" and on to "voice and gestures". Gesture interaction means using computer graphics and related techniques to recognize human body language and translate it into commands that operate a device. As an emerging and fast-growing interaction style, gesture interaction is natural and convenient, and it will keep blending into our daily lives. Microsoft, Leap Motion, and HandCV are pioneers in this field; relying on hardware such as cameras and sensors, together with software technologies such as computer vision and deep learning, they have brought gesture recognition to gaming devices, VR devices, in-car systems, smart homes, and other scenarios. Three gesture interaction products are introduced below.

1. Microsoft's motion-sensing device, Kinect

Kinect is the name Microsoft officially announced on June 14, 2010 for its motion-sensing peripheral for the Xbox 360. Along with the official naming, Kinect also launched a number of companion games

Kinect - Map (x, y) pixel coordinates to “real world” coordinates using depth

橙三吉。 · Submitted on 2019-11-30 10:23:59
I'm working on a project that uses the Kinect and OpenCV to export fingertip coordinates to Flash for use in games and other programs. Currently, our setup works based on color and exports fingertip points to Flash in (x, y, z) format, where x and y are in pixels and z is in millimeters. But we want to map those (x, y) coordinates to "real-world" values, like millimeters, using that z depth value from within Flash. As I understand it, the Kinect's 3D depth is obtained by projecting its X-axis along the camera's horizontal, its Y-axis along the camera's vertical, and its Z-axis directly forward out
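Under the pinhole camera model, the conversion is a single multiply per axis: X = (x - cx) * z / fx, and likewise for Y. A minimal sketch follows; the focal lengths and principal point are rough, uncalibrated estimates for a 640x480 Kinect v1 depth image, not official SDK constants:

```python
# Sketch: map a Kinect (x, y) pixel plus its depth z (mm) to real-world
# millimetres via the pinhole model. FX/FY/CX/CY are assumed, uncalibrated
# values, not constants from the Kinect SDK.

FX, FY = 580.0, 580.0    # assumed focal lengths in pixels
CX, CY = 320.0, 240.0    # assumed principal point (centre of a 640x480 image)

def pixel_to_world(x_px, y_px, z_mm, fx=FX, fy=FY, cx=CX, cy=CY):
    """Return (X, Y, Z) in millimetres for pixel (x_px, y_px) at depth z_mm."""
    X = (x_px - cx) * z_mm / fx
    Y = (y_px - cy) * z_mm / fy
    return X, Y, z_mm
```

Because the formula is just arithmetic on (x, y, z), it can run on the Flash side as easily as in the exporter; only the intrinsic constants need to be shipped across.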

Precision of the Kinect depth camera

一个人想着一个人 · Submitted on 2019-11-30 10:19:37
Question: How precise is the Kinect's depth camera? Range? Resolution? Noise? In particular, I'd like to know: Are there any official specs about it from Microsoft? Are there any scientific papers on the subject? Investigations from tech blogs? Personal experiments that are easy to reproduce? I've been collecting data for about a day now, but most of the writers don't name their sources, and the values seem to differ quite a bit... Answer 1: Range: ~50 cm to 5 m. It can get closer (~40 cm) in parts, but can't have the