kinect

Is it possible to save a user's skeleton and facial data for recognition purposes?

孤者浪人 submitted on 2019-12-06 04:07:49
Question: I would like to be able to keep track of people who enter and exit the premises. Basically, when a user approaches the Kinect, it will store his/her facial and skeletal data; upon leaving, that data will be removed. For now I am only wondering whether this is possible with the Microsoft SDK. I have seen videos/demos of the Kinect tracking people, but my goal is to identify them uniquely. Any information will be greatly appreciated.
Answer 1: Yes you can save skeleton and face
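
A minimal sketch of the "store on entry, remove on exit" half, assuming Kinect SDK v1 (Microsoft.Kinect) and that a skeleton-frame handler is already in place; the SnapshotStore class and its method names are illustrative, not part of the SDK:

    using System.Collections.Generic;
    using Microsoft.Kinect;   // Kinect SDK v1

    // Illustrative store: maps a skeleton's tracking id to a saved pose snapshot.
    class SnapshotStore
    {
        private readonly Dictionary<int, List<SkeletonPoint>> _poses =
            new Dictionary<int, List<SkeletonPoint>>();

        // Save one snapshot of every joint position for this skeleton.
        public void Save(Skeleton skeleton)
        {
            var joints = new List<SkeletonPoint>();
            foreach (Joint joint in skeleton.Joints)
                joints.Add(joint.Position);          // X, Y, Z in meters
            _poses[skeleton.TrackingId] = joints;
        }

        // Remove the data when the person leaves.
        public void Forget(int trackingId)
        {
            _poses.Remove(trackingId);
        }
    }

Face data would come from the separate Face Tracking SDK; storing it is straightforward, but matching a returning person against stored data (true recognition) is the harder part of the problem.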

multiple hits in loop after the break command

蹲街弑〆低调 submitted on 2019-12-06 02:53:01
I've got a strange problem. I'm creating a NUI for an application and I bound some simple gestures to the right and left arrow keys. The problem is when I start the application: the first time I make a gesture, it fires twice in a row. After that it works 100% as I want; only the start is the problem. I'm adding two Joints and a timestamp to my history struct, which is put into the ArrayList:
    this._history.Add(new HistoryItem() { timestamp = timestamp, activeHand = hand, controlJoint = controlJoint });
Then in a foreach loop I'm comparing the data:
    if (Math.Abs((hand.Position.X - item
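
One common fix for a double fire like this is a cooldown (debounce) around the gesture trigger. A minimal sketch, assuming the gesture detection itself already works; the field and method names are illustrative, not from the question's code:

    using System;

    class GestureDebouncer
    {
        private DateTime _lastFired = DateTime.MinValue;
        private static readonly TimeSpan Cooldown = TimeSpan.FromMilliseconds(500);

        // Invokes the gesture action only if the cooldown has elapsed.
        public bool TryFire(Action gesture)
        {
            var now = DateTime.UtcNow;
            if (now - _lastFired < Cooldown)
                return false;                 // swallow the duplicate hit
            _lastFired = now;
            gesture();
            return true;
        }
    }

It is also worth checking that _history is cleared (or pre-filled with sane entries) before the first gesture, so the first comparison does not run against stale or default-initialized items.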

Two 3D point cloud transformation matrix

半腔热情 submitted on 2019-12-06 01:42:49
I'm trying to work out which is the rigid transformation matrix between two 3D point clouds. The two point clouds are these: keypoints from the Kinect (kinect_keypoints), and keypoints from a 3D object (a box) (object_keypoints). I have tried two options. [1] Implementation of the algorithm to find the rigid transformation:
1. Calculate the centroid of each point cloud.
2. Center the points according to the centroid.
3. Calculate the covariance matrix.
    cvSVD( &_H, _W, _U, _V, CV_SVD_U_T );
    cvMatMul( _V, _U, &_R );
4. Calculate the rotation matrix using the SVD decomposition of the
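
For reference, steps 1–3 of this SVD-based (Kabsch) method in a minimal C# sketch over plain arrays; the SVD itself would be delegated to a library such as Math.NET or OpenCV, so only the 3x3 cross-covariance matrix H is built here:

    // p and q are matched point clouds: double[n][3], one row per point.
    static double[,] CrossCovariance(double[][] p, double[][] q)
    {
        int n = p.Length;
        var cp = new double[3];                 // centroid of p
        var cq = new double[3];                 // centroid of q
        for (int i = 0; i < n; i++)
            for (int d = 0; d < 3; d++)
            {
                cp[d] += p[i][d] / n;
                cq[d] += q[i][d] / n;
            }

        // H = sum over i of (p_i - cp) * (q_i - cq)^T  -- a 3x3 matrix.
        var h = new double[3, 3];
        for (int i = 0; i < n; i++)
            for (int r = 0; r < 3; r++)
                for (int c = 0; c < 3; c++)
                    h[r, c] += (p[i][r] - cp[r]) * (q[i][c] - cq[c]);
        return h;
    }

With H in hand, the SVD gives H = U S V^T and the rotation is R = V U^T (flip the sign of the last column of V if det(R) < 0); the translation is then t = cq - R * cp. This matches the cvMatMul(_V, _U, &_R) call in the question, since CV_SVD_U_T returns U already transposed.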

Getting the Kinect SDK to work with visual studio 2010 in c++

隐身守侯 submitted on 2019-12-06 01:34:10
Question: I've been following the guide Microsoft made for setting up the Kinect SDK with C++. The steps they give are as follows:
Include windows.h in your source code.
To use the NUI API, include MSR_NuiApi.h. Location: Program Files\Microsoft Research KinectSDK\inc
To use the Kinect Audio API, include MSRKinectAudio.h. Location: Program Files\Microsoft Research KinectSDK\inc
Link to MSRKinectNUI.lib. Location: Program Files\Microsoft Research KinectSDK\lib
Ensure that the beta SDK DLLs

2D-3D homography matrix estimation

≡放荡痞女 submitted on 2019-12-06 00:52:42
Question: I am working with my Kinect on some 2D/3D image processing. Here is my problem: I have points in 3D (x,y,z) which lie on a plane. I also know the coordinates of those points on the RGB image (x,y). Now I want to estimate a 2D-3D homography matrix so that I can estimate the (x1,y1,z1) coordinates for an arbitrary (x1,y1) point. I think that is possible, but I don't know where to start. Thanks!
Answer 1: What you're looking for is a camera projection matrix, not a homography. A homography maps a plane seen from a
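
For context on the answer's distinction: a camera projection matrix P is 3x4 and maps homogeneous 3D points to homogeneous 2D image points. A minimal sketch of applying one, assuming P has already been estimated (e.g., by DLT from at least six 3D-2D correspondences); the matrix contents are placeholders:

    // Project a 3D point (x, y, z) through a 3x4 projection matrix P.
    static (double u, double v) Project(double[,] P, double x, double y, double z)
    {
        // Homogeneous multiply: [u' v' w']^T = P * [x y z 1]^T
        double up = P[0, 0] * x + P[0, 1] * y + P[0, 2] * z + P[0, 3];
        double vp = P[1, 0] * x + P[1, 1] * y + P[1, 2] * z + P[1, 3];
        double w  = P[2, 0] * x + P[2, 1] * y + P[2, 2] * z + P[2, 3];
        return (up / w, vp / w);   // perspective divide
    }

Going the other way, from (x1,y1) back to (x1,y1,z1), is only well-defined here because the 3D points lie on a known plane; a 2D point alone otherwise corresponds to an entire ray of 3D points.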

C# image processing on Kinect video using AForge

浪子不回头ぞ submitted on 2019-12-05 22:55:02
My goal: use the Kinect video stream to do shape recognition (a large rectangle in the picture), draw a rectangle on the picture to highlight the result, and display it. The tech I use: C# code, AForge, and more specifically its shape checker http://www.aforgenet.com/articles/shape_checker/ How the magic should work:
Every time a frame is ready, I get the frame data as a byte array and transform it to a bitmap so I can analyze it.
Apply the shape recognition algorithm.
Render the result...
My problem: the whole process works so far, but when I try to render the result in a WPF Image it lags terribly... (1
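
A common cause of this kind of lag is doing the analysis on the UI thread and allocating a new bitmap for every frame. A minimal sketch of one mitigation: drop frames while one is still in flight, analyze off the UI thread, and reuse a single WriteableBitmap. ProcessFrame stands in for the question's AForge pipeline and is an assumption:

    using System;
    using System.Threading;
    using System.Threading.Tasks;
    using System.Windows;
    using System.Windows.Media;
    using System.Windows.Media.Imaging;

    class FrameRenderer
    {
        private readonly WriteableBitmap _bitmap;
        private readonly int _width, _height;
        private int _busy;   // 0 = idle, 1 = a frame is being processed

        public FrameRenderer(int width, int height)
        {
            _width = width;
            _height = height;
            _bitmap = new WriteableBitmap(width, height, 96, 96, PixelFormats.Bgr32, null);
        }

        public ImageSource Source => _bitmap;   // bind the WPF Image to this once

        public void OnFrameReady(byte[] pixels)
        {
            // Skip this frame if the previous one is still in the pipeline.
            if (Interlocked.CompareExchange(ref _busy, 1, 0) != 0) return;

            Task.Run(() =>
            {
                ProcessFrame(pixels);   // placeholder for the AForge shape check
                Application.Current.Dispatcher.Invoke(() =>
                {
                    _bitmap.WritePixels(new Int32Rect(0, 0, _width, _height),
                                        pixels, _width * 4, 0);
                    _busy = 0;
                });
            });
        }

        private void ProcessFrame(byte[] pixels) { /* shape detection here */ }
    }

Reusing one WriteableBitmap and binding the Image.Source to it once avoids re-creating an ImageSource per frame, which is usually what makes the WPF side choke.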

Get JointType from Body class Kinect

删除回忆录丶 submitted on 2019-12-05 22:20:32
I know in the old SDK there was a Skeleton class, and you could do something like:
    public void Compare(Skeleton skeleton)
    {
        var leftShoulderPosition = skeleton.Joints.Where(j => j.JointType == JointType.ShoulderLeft);
    }
However, the new SDK came out and the Skeleton class was replaced by the Body class. Now the code throws an error at j.JointType. Is there a workaround for this problem?
With Microsoft Kinect SDK v2.0, you can get the ShoulderLeft joint (and, similarly, any other skeletal joint) as follows:
    body.Joints[JointType.ShoulderLeft]
where body is an instance of the Body class to
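
In SDK v2, Body.Joints is a dictionary keyed by JointType, so the LINQ Where is no longer needed. A short sketch of the v2 equivalent of the question's Compare method; the TrackingState check is a recommended extra, not part of the original code:

    using Microsoft.Kinect;   // Kinect SDK v2

    public void Compare(Body body)
    {
        Joint leftShoulder = body.Joints[JointType.ShoulderLeft];
        if (leftShoulder.TrackingState != TrackingState.NotTracked)
        {
            CameraSpacePoint p = leftShoulder.Position;   // X, Y, Z in meters
            // ... compare p.X, p.Y, p.Z as needed
        }
    }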

How to get the position (x,y) from a Kinect depth array?

你离开我真会死。 submitted on 2019-12-05 22:12:32
While working with the Kinect I found out that the bitmap and its depth information are unreliable and for some reason much more disturbed than the data from the actual byte array. I realised this when I tried to get the min and max by accessing the bitmap like this:
    for (var y = 0; y < height; y++)
    {
        var heightOffset = y * width;
        for (var x = 0; x < width; x++)
        {
            var index = ((width - x - 1) + heightOffset) * 4;
            var distance = GetDistance(depth[depthIndex], depth[depthIndex + 1]);
But on the other hand I achieved much better results when I directly accessed the depth byte array (as a
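
To answer the title question directly: recovering (x, y) from a flat depth-array index is just integer division and modulo. A minimal sketch, assuming 2 bytes per depth sample and the same horizontal mirroring as the question's loop:

    // depthIndex counts bytes; two bytes per depth sample is assumed here.
    static (int x, int y) PositionFromIndex(int depthIndex, int width)
    {
        int pixel = depthIndex / 2;        // sample index
        int y = pixel / width;             // row
        int x = pixel % width;             // column
        x = width - x - 1;                 // undo the mirroring from the loop above
        return (x, y);
    }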

silhouette extraction from depth

白昼怎懂夜的黑 submitted on 2019-12-05 19:34:01
Hello, I have a depth image and I want to extract the person's (human) silhouette from it. I used pixel thresholding like this:
    for i=1:240
        for j=1:320
            if b(i,j)>2400 || b(i,j)<1900
                c(i,j)=5000;
            else
                c(i,j)=b(i,j);
            end
        end
    end
but some part is left over. Is there any way to remove it? [Original image and extracted silhouette omitted.]
Shai: According to this thread, depth map boundaries can be found based on the direction of estimated surface normals. To estimate the direction of the surface normals, you can use:
    [dzx dzy] = gradient( depth_map );   % horizontal and vertical derivatives of depth map
    n = cat( 3, dzx
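
Band thresholding alone keeps every pixel in the depth range, including background objects at the person's distance, which is likely the leftover part. One common cleanup is to keep only the largest connected component of the mask. A C# sketch of that idea, assuming the thresholded mask is already a bool[,] (in MATLAB this would be bwlabel/bwareafilt):

    using System;
    using System.Collections.Generic;

    static bool[,] LargestComponent(bool[,] mask)
    {
        int h = mask.GetLength(0), w = mask.GetLength(1);
        var labels = new int[h, w];
        int best = 0, bestSize = 0, next = 0;

        for (int sy = 0; sy < h; sy++)
            for (int sx = 0; sx < w; sx++)
            {
                if (!mask[sy, sx] || labels[sy, sx] != 0) continue;
                int label = ++next, size = 0;
                var queue = new Queue<(int y, int x)>();
                labels[sy, sx] = label;
                queue.Enqueue((sy, sx));
                while (queue.Count > 0)          // 4-connected flood fill
                {
                    var (y, x) = queue.Dequeue();
                    size++;
                    foreach (var (ny, nx) in new[] { (y-1,x), (y+1,x), (y,x-1), (y,x+1) })
                        if (ny >= 0 && ny < h && nx >= 0 && nx < w
                            && mask[ny, nx] && labels[ny, nx] == 0)
                        {
                            labels[ny, nx] = label;
                            queue.Enqueue((ny, nx));
                        }
                }
                if (size > bestSize) { bestSize = size; best = label; }
            }

        var result = new bool[h, w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                result[y, x] = best != 0 && labels[y, x] == best;
        return result;
    }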

Convert Kinect's depth to RGB

冷暖自知 submitted on 2019-12-05 16:01:36
I'm using OpenNI and OpenCV (but without the latest code with OpenNI support). If I just send the depth channel to the screen, it looks dark and it is difficult to make anything out. So I want to show the depth channel to the user in color, but I cannot find out how to do that without losing accuracy. Now I do it like this:
    xn::DepthMetaData xDepthMap;
    depthGen.GetMetaData(xDepthMap);
    XnDepthPixel* depthData = const_cast<XnDepthPixel*>(xDepthMap.Data());
    cv::Mat depth(frame_height, frame_width, CV_16U, reinterpret_cast<void*>(depthData));
    cv::Mat depthMat8UC1;
    depth.convertTo(depthMat8UC1, CV
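
The accuracy loss comes from collapsing 16-bit depth to 8 bits before coloring. One way around it is to map each 16-bit value straight to a color ramp, so nearby depths that would share one gray level still get distinct colors. A minimal, library-free sketch of the idea in C#; the 400-5000 mm usable range is an assumption, not taken from the question:

    using System;

    // Map one 16-bit depth sample (millimeters) to a BGR color.
    static (byte b, byte g, byte r) DepthToColor(ushort depthMm)
    {
        const int Near = 400, Far = 5000;              // assumed usable range
        if (depthMm == 0) return (0, 0, 0);            // no reading -> black

        double t = (Math.Clamp((int)depthMm, Near, Far) - Near)
                   / (double)(Far - Near);             // 0 = near, 1 = far

        // Simple ramp: near = red, middle = green, far = blue.
        byte r = (byte)(Math.Max(0.0, 1.0 - 2.0 * t) * 255);
        byte g = (byte)((1.0 - Math.Abs(2.0 * t - 1.0)) * 255);
        byte b = (byte)(Math.Max(0.0, 2.0 * t - 1.0) * 255);
        return (b, g, r);
    }

In OpenCV itself the usual shortcut is cv::applyColorMap after convertTo, but that path does go through 8 bits first, which is exactly the precision loss being asked about.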