kinect

How to convert points in depth space to color space in Kinect without using Kinect SDK functions?

Submitted by 烂漫一生 on 2020-01-01 01:15:06
Question: I am doing an augmented reality application with 3D objects overlaid on top of the color video of the user. Kinect version 1.7 is used, and rendering of the virtual objects is done in OpenGL. I have managed to overlay 3D objects on the depth video successfully, simply by using the intrinsic constants for the depth camera from the NuiSensor.h header and computing a projection matrix based on the formula I found at http://ksimek.github.io/2013/06/03/calibrated_cameras_in_opengl/. The 3D objects rendered with this

What is the difference between OpenNI and OpenKinect?

Submitted by 感情迁移 on 2019-12-31 08:52:12
Question: I am considering using Kinect in one of my projects, but I am totally lost among all the libraries and don't know what is what exactly. Most importantly, I am reading about OpenNI and OpenKinect, but I don't know their relation or differences. PS: I am using Ubuntu or Mac. Answer 1: OpenKinect is a community of people, not a library. The OpenKinect community releases the libfreenect Kinect driver. libfreenect and OpenNI+SensorKinect are two competing, open-source libraries/drivers. libfreenect (Apache

mapping an ellipse to a joint in kinect sdk 1.5

Submitted by 断了今生、忘了曾经 on 2019-12-31 05:21:05
Question: I want to map an ellipse to the hand joint, and the ellipse has to move as my hand joint moves. Please provide some reference links that will help me write programs using Kinect SDK 1.5. Thank you. Answer 1: Although what @Heisenbug suggested would work, there is a much simpler way in WPF. You can find a tutorial on it in Channel 9's Skeleton Fundamentals. Basically you need a Canvas and however many ellipses you want. Here is the XAML code: <Window x:Class="SkeletalTracking.MainWindow" xmlns="http:/

OpenCV OpenNI calibrate kinect

Submitted by 那年仲夏 on 2019-12-31 03:03:23
Question: I capture from the Kinect like this: capture.retrieve( depthMap, CV_CAP_OPENNI_DEPTH_MAP ) capture.retrieve( bgrImage, CV_CAP_OPENNI_BGR_IMAGE ) Now I don't know whether I have to calibrate the Kinect to get correct depth pixel values. That is, if I take a pixel (u, v) from the RGB image, do I get the correct depth value by taking the pixel (u, v) from the depth image? depthMap.at<uchar>(u,v) Any help is much appreciated. Thanks! Answer 1: You can check whether registration is on like so: cout << "REGISTRATION " <<

Save Kinect's color camera video stream in to .avi video

Submitted by 谁说我不能喝 on 2019-12-30 06:48:55
Question: I want to save the video stream that is captured by Kinect's color camera to .avi format video. I tried many ways of doing this, but nothing succeeded. Has anyone done this successfully? I'm using the Kinect for Windows SDK and WPF for application development. Answer 1: I guess the easiest workaround would be to use screen capture software like http://camstudio.org/. There is also a post with the same question here: Kinect recording a video in C# WPF As far as I understand you need to save the

PCL create a pcd cloud

Submitted by ♀尐吖头ヾ on 2019-12-30 05:30:08
Question: This is what I have so far, and I want to save a .pcd file from it. I know I have to do something like the following, but I am not exactly sure: pcl::PointCloud<pcl::PointXYZRGBA> cloud; pcl::io::savePCDFileASCII("test.pcd", cloud); What do I have to add to my current code so that it writes test.pcd? Thanks. #include <pcl/point_cloud.h> #include <pcl/point_types.h> #include <pcl/io/openni_grabber.h> #include <pcl/visualization/cloud_viewer.h> #include <pcl/common/time.h> class SimpleOpenNIProcessor { public:

kinect/ processing / simple openni - point cloud data not being output properly

Submitted by 空扰寡人 on 2019-12-30 05:29:13
Question: I've created a Processing sketch which saves each frame of point cloud data from the Kinect to a text file, where each line of the file is a point (or vertex) that the Kinect has registered. I plan to pull the data into a 3D program to visualize the animation in 3D space and apply various effects. The problem is that when I do this, the first frame seems proper, and the rest of the frames seem to be spitting out what looks like the first image plus a bunch of random noise. This is my code, in

Kinect for XBox 360 and Kinect SDK 1.5

Submitted by 只愿长相守 on 2019-12-30 01:24:41
Question: Microsoft has recently released Kinect SDK 1.5 and some very neat associated features such as face tracking. I have a Kinect sensor for Xbox 360, and Windows 7 (driver, Kinect Studio) does not seem to recognize the device. Can anyone advise whether this is an "operator error" or whether SDK 1.5 indeed does not support the Kinect for Xbox sensor but only Kinect for Windows (I have the USB and power adapter for it)? Thank you, Edmon Answer 1: As Chris Ortner pointed out, the Kinect for Xbox sensor is compatible with

Kinect background removal

Submitted by 大城市里の小女人 on 2019-12-25 18:38:36
Question: I followed the code provided by Robert Levy at this link: http://channel9.msdn.com/coding4fun/kinect/Display-Kinect-color-image-containing-only-players-aka-background-removal I tried implementing it into my existing code and have had inconsistent results. If the user is in the Kinect's field of view when the program starts up, it will remove the background some of the time. If the user walks into the field of view, it will not pick them up. namespace KinectUserRecognition { public partial

how to scale joints using the new Kinect SDK in C#

Submitted by ╄→гoц情女王★ on 2019-12-25 09:36:38
Question: Since ScaleTo() has been removed from the new Kinect SDK, how is scaling going to be done with the new SDK? Answer 1: You can use the Coding4Fun Kinect Toolkit: http://c4fkinect.codeplex.com/ Download the library, include it in your project's references, and then add a reference to it in your using statements. After you do that, you will have a scaleTo() function for individual joints, e.g., rightHand.scaleTo(640, 480). The library's website has some information on using it. You can also find