Kinect

I can't get the Kinect SDK to do speech recognition and track skeletal data at the same time

让人想犯罪 · Submitted on 2019-12-08 05:13:19
Question: I have a program in which I enabled speech recognition with:

```csharp
RecognizerInfo ri = GetKinectRecognizer();
speechRecognitionEngine = new SpeechRecognitionEngine(ri.Id);

// Create a grammar from a grammar definition XML file.
using (var memoryStream = new MemoryStream(Encoding.ASCII.GetBytes(fileContent)))
{
    var g = new Grammar(memoryStream);
    speechRecognitionEngine.LoadGrammar(g);
}

speechRecognitionEngine.SpeechRecognized += new EventHandler<SpeechRecognizedEventArgs>(speechEngine
```
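The usual pattern for running both features together is to enable the skeleton stream first and only then attach the recognizer to the sensor's audio stream. A minimal sketch, assuming the Kinect v1 SDK and Microsoft.Speech, with `sensor` as the KinectSensor and `OnSkeletonFrameReady` as a hypothetical handler:

```csharp
// Sketch: skeletal tracking and speech recognition run off independent
// streams; starting the audio source after the sensor (and its skeleton
// stream) is running keeps one feature from blocking the other.
sensor.SkeletonStream.Enable();                    // skeletal tracking stays on
sensor.SkeletonFrameReady += OnSkeletonFrameReady; // handled independently of speech
sensor.Start();

// Feed the recognizer from the sensor's microphone array, not the default
// audio device, using the 16 kHz / 16-bit / mono format the Kinect produces.
Stream audioStream = sensor.AudioSource.Start();
speechRecognitionEngine.SetInputToAudioStream(
    audioStream,
    new SpeechAudioFormatInfo(EncodingFormat.Pcm, 16000, 16, 1, 32000, 2, null));
speechRecognitionEngine.RecognizeAsync(RecognizeMode.Multiple); // non-blocking
```

`RecognizeAsync` returns immediately, so skeleton frames keep arriving while recognition runs in the background.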

Capture Image from Kinect v2 Sensor

ε祈祈猫儿з · Submitted on 2019-12-08 04:19:54
Question: I am new to Kinect development. I am using the Kinect v2 to create a Windows Store application, following the Face Basics example found here. I want to be able to capture a face image when the face is engaged. However, I am having trouble capturing the image from the Win2D CanvasControl, and I am not sure how else I can capture the face image. Can anyone assist me with how I might accomplish this?

Answer 1: In the Face Basics example, the author is storing the image captured by the Kinect sensor in a
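Rather than reading pixels back out of the Win2D control, the frame can be captured at the source. A sketch assuming the Kinect v2 SDK, with `OnColorFrameArrived` wired to an open ColorFrameReader:

```csharp
// Sketch: grab the raw colour pixels directly from the ColorFrame instead
// of trying to read back the Win2D CanvasControl.
void OnColorFrameArrived(object sender, ColorFrameArrivedEventArgs e)
{
    using (ColorFrame frame = e.FrameReference.AcquireFrame())
    {
        if (frame == null) return; // frame already expired

        FrameDescription desc = frame.FrameDescription;
        byte[] pixels = new byte[desc.Width * desc.Height * 4]; // BGRA

        // Convert whatever the native format is (usually YUY2) to BGRA.
        frame.CopyConvertedFrameDataToArray(pixels, ColorImageFormat.Bgra);

        // "pixels" can now be encoded (e.g. with a BitmapEncoder) and saved
        // when the face is engaged, with no dependency on the UI control.
    }
}
```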

Kinect Grip Gesture for Click

自作多情 · Submitted on 2019-12-08 01:46:43
Question: I'm using Kinect v2.0. I need to perform a click using the grip gesture. Is there a way to handle the grip gesture in v2.0, like AddHandPointerGripHandler in v1.8?

Answer 1: In the Microsoft Kinect SDK v2.0, the Body class includes two properties:

- Body.HandRightState
- Body.HandLeftState

Both of these properties are instances of the HandState enumeration, which specifies whether the hand is: Closed (and you can detect this to trigger the grip gesture); Lasso (which means that the hand is closed in a fist, except for a
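Detecting the transition into the Closed state, rather than the state itself, is what turns the grip into a single click. A sketch assuming the Kinect v2 SDK; `previousState` is a field kept between frames and `PerformClick` is a hypothetical routine of your own:

```csharp
// Sketch: treat an Open -> Closed transition of the right hand as a
// "grip click", gated on the SDK's confidence to avoid spurious triggers.
HandState previousState = HandState.Unknown;

void ProcessBody(Body body)
{
    if (body == null || !body.IsTracked) return;

    HandState current = body.HandRightState;

    // Fire only on the transition, so the click does not repeat on every
    // frame while the hand stays closed.
    if (previousState == HandState.Open &&
        current == HandState.Closed &&
        body.HandRightConfidence == TrackingConfidence.High)
    {
        PerformClick(); // hypothetical: your own click/invoke routine
    }

    previousState = current;
}
```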

Saving raw depth-data

余生长醉 · Submitted on 2019-12-07 21:07:57
Question: I am trying to save my Kinect raw depth data, and I don't want to use Kinect Studio because I need the raw data for further calculations. I am using the Kinect v2 and the Kinect SDK. My problem is that I only get a low frame rate for the saved data, about 15-17 FPS. Here is my frame reader (in further steps I also want to save the color stream):

```csharp
frameReader = kinectSensor.OpenMultiSourceFrameReader(FrameSourceTypes.Depth);
frameReader.MultiSourceFrameArrived += Reader_MultiSourceFrameArrived;
```

Here is the event handler:
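Low save rates like this usually mean disk I/O is happening inside the arrived handler. A sketch of the producer/consumer split, assuming the Kinect v2 SDK and the same MultiSourceFrameReader as in the question; the queue capacity of 60 is an assumption:

```csharp
// Sketch: copy the depth frame quickly in the arrived handler and hand the
// buffer to a background writer, so disk I/O never blocks frame delivery.
BlockingCollection<ushort[]> queue = new BlockingCollection<ushort[]>(boundedCapacity: 60);

void Reader_MultiSourceFrameArrived(object sender, MultiSourceFrameArrivedEventArgs e)
{
    MultiSourceFrame multiFrame = e.FrameReference.AcquireFrame();
    if (multiFrame == null) return;

    using (DepthFrame frame = multiFrame.DepthFrameReference.AcquireFrame())
    {
        if (frame == null) return;
        var desc = frame.FrameDescription;
        ushort[] data = new ushort[desc.Width * desc.Height];
        frame.CopyFrameDataToArray(data); // fast in-memory copy only
        queue.TryAdd(data);               // drop the frame if the writer falls behind
    }
}

// Background thread: the only place that touches the disk.
void WriterLoop(BinaryWriter writer)
{
    foreach (ushort[] data in queue.GetConsumingEnumerable())
        foreach (ushort d in data)
            writer.Write(d);
}
```

Keeping the handler free of `BitmapEncoder`/file calls is typically what recovers the full 30 FPS.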

Unexpected “Cannot access a disposed object” in clean up method

耗尽温柔 · Submitted on 2019-12-07 20:19:24
Question: I am facing a puzzling disposed-object issue when I shut down my WPF application. If you spot any mistakes in my logic, please point them out. I have a ColorManager class with an Update() method, as shown below:

```csharp
public void Update(ColorImageFrame frame)
{
    byte[] pixelData = new byte[frame.PixelDataLength];
    frame.CopyPixelDataTo(pixelData);

    if (Bitmap == null)
    {
        Bitmap = new WriteableBitmap(frame.Width, frame.Height, 96, 96, PixelFormats.Bgr32, null);
    }

    // draw bitmap
    RaisePropertyChanged(() =>
```
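A common cause of this exception on shutdown is a frame event firing after the sensor has been stopped. A sketch assuming the Kinect v1 SDK; `sensor`, `colorManager`, and `OnColorFrameReady` are names assumed from the question's context:

```csharp
// Sketch: detach the handler BEFORE stopping the sensor, so Update() can
// never receive a frame that belongs to a disposed stream; also guard the
// handler against frames that have already expired.
void Shutdown()
{
    sensor.ColorFrameReady -= OnColorFrameReady; // no more callbacks after this
    sensor.Stop();
}

void OnColorFrameReady(object sender, ColorImageFrameReadyEventArgs e)
{
    using (ColorImageFrame frame = e.OpenColorImageFrame())
    {
        if (frame == null) return; // frame expired or sensor shutting down
        colorManager.Update(frame);
    }
}
```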

multiple hits in loop after the break command

笑着哭i · Submitted on 2019-12-07 15:39:53
Question: I've got a strange problem. I'm creating a NUI for an application, and I bound some simple gestures to the right and left arrow keys. The problem is at application start: when I make a gesture for the first time, my application registers 2 hits in a row. After that it works 100% as I want; only the start is the problem. I'm adding two Joints and a timestamp to my history struct, which is put into the ArrayList:

```csharp
this._history.Add(new HistoryItem()
{
    timestamp = timestamp,
    activeHand = hand,
    controlJoint =
```
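A double hit like this usually means the history still contains samples that satisfy the gesture condition after the first trigger. A sketch of the standard fix; `_history` and its HistoryItem entries follow the question, while the 500 ms cooldown value is an assumption:

```csharp
// Sketch: once a gesture fires, clear the sample history and enforce a
// short cooldown so leftover samples cannot re-trigger it immediately.
private long _lastGestureTimestamp = 0;
private const long CooldownMs = 500;

bool TryFireGesture(long timestamp)
{
    if (timestamp - _lastGestureTimestamp < CooldownMs)
        return false;          // still cooling down: swallow the second hit

    _lastGestureTimestamp = timestamp;
    this._history.Clear();     // stale samples can no longer match the gesture
    return true;
}
```

Calling `TryFireGesture` at the point where the gesture is recognised (just before sending the arrow-key event) makes the first and every later activation fire exactly once.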

How to get the position (x,y) from a Kinect depth array?

与世无争的帅哥 · Submitted on 2019-12-07 12:27:19
Question: While working with the Kinect, I found out that the bitmap and its depth information are unreliable and, for some reason, much more disturbed than the data from the actual byte array. I realised this when I tried to get the min and max by accessing the bitmap like this:

```csharp
for (var y = 0; y < height; y++)
{
    var heightOffset = y * width;
    for (var x = 0; x < width; x++)
    {
        var index = ((width - x - 1) + heightOffset) * 4;
        var distance = GetDistance(depth[depthIndex], depth[depthIndex + 1]);
```

But on the
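When reading the raw depth array directly, the (x, y) position falls straight out of the array index. A sketch assuming Kinect v1 raw depth data in `rawDepth` (a ushort array) at the assumed 640x480 resolution:

```csharp
// Sketch: each entry in the raw depth array maps back to pixel coordinates
// purely by its index, so min/max positions can be found without touching
// the (mirrored, 4-bytes-per-pixel) bitmap at all.
int width = 640, height = 480;  // assumed depth resolution
int minIndex = -1, minDepth = int.MaxValue;

for (int i = 0; i < rawDepth.Length; i++)
{
    // Strip the low player-index bits to get the distance in millimetres.
    int depthMm = rawDepth[i] >> DepthImageFrame.PlayerIndexBitmaskWidth;
    if (depthMm > 0 && depthMm < minDepth) // 0 means "no reading"
    {
        minDepth = depthMm;
        minIndex = i;
    }
}

int minX = minIndex % width; // column of the closest pixel
int minY = minIndex / width; // row of the closest pixel
```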

Convert Kinect's depth to RGB

余生长醉 · Submitted on 2019-12-07 11:36:45
Question: I'm using OpenNI and OpenCV (but without the latest code with OpenNI support). If I just send the depth channel to the screen, it looks dark and it is difficult to make anything out. So I want to show the depth channel to the user in color, but I cannot find out how to do that without losing accuracy. Right now I do it like this:

```cpp
xn::DepthMetaData xDepthMap;
depthGen.GetMetaData(xDepthMap);
XnDepthPixel* depthData = const_cast<XnDepthPixel*>(xDepthMap.Data());
cv::Mat depth(frame_height, frame_width
```
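One accuracy-preserving approach (sketched here in C# for consistency with the rest of the page, though the idea applies equally to a cv::Mat) is to map the full 16-bit depth range onto hue rather than truncating it to 8-bit grey, so distinct depth values keep distinct colours. The depth range below is an assumption:

```csharp
// Sketch: spread the usable depth range over hue [0°, 240°] and convert
// HSV (S = V = 1) to RGB per pixel.
const int MinMm = 400, MaxMm = 8000; // assumed usable depth range

(byte r, byte g, byte b) DepthToColor(ushort depthMm)
{
    if (depthMm == 0) return (0, 0, 0); // no reading: black

    // Normalise depth to [0, 1], then to a hue angle.
    int clamped = Math.Min(Math.Max((int)depthMm, MinMm), MaxMm);
    double hue = (double)(clamped - MinMm) / (MaxMm - MinMm) * 240.0;

    // Minimal HSV -> RGB for full saturation and value.
    double c = 1.0, x = 1.0 - Math.Abs(hue / 60.0 % 2 - 1);
    double[] rgb = hue < 60 ? new[] { c, x, 0.0 }
                 : hue < 120 ? new[] { x, c, 0.0 }
                 : hue < 180 ? new[] { 0.0, c, x }
                 : new[] { 0.0, x, c };
    return ((byte)(rgb[0] * 255), (byte)(rgb[1] * 255), (byte)(rgb[2] * 255));
}
```

In OpenCV the same effect can be had by building a single-channel hue image from the depth and converting it with `cv::cvtColor(..., cv::COLOR_HSV2BGR)`.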

How to track ONE person with Kinect (trackingID)

家住魔仙堡 · Submitted on 2019-12-06 21:10:54
Question: I would like to track the first person and use this person's right hand to navigate in the application that I made. I can take over the cursor; now I just want only one person to be tracked. So basically, when one person is navigating in the program and there are people walking behind him or watching alongside him, the Kinect shouldn't recognise anyone else if they move. How can I implement this? I know it has something to do with the trackingId, but what?

```csharp
foreach (SkeletonData s in
```
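The Kinect v1 SDK supports exactly this by letting the application take over skeleton selection. A sketch; `_lockedTrackingId` is an assumed field where 0 means "no one locked yet":

```csharp
// Sketch: lock onto one skeleton by disabling automatic selection and
// re-choosing the same TrackingId every frame.
sensor.SkeletonStream.AppChoosesSkeletons = true; // stop automatic switching

void OnSkeletonFrameReady(Skeleton[] skeletons)
{
    if (_lockedTrackingId == 0)
    {
        // Lock onto the first fully tracked person we see.
        foreach (Skeleton s in skeletons)
        {
            if (s.TrackingState == SkeletonTrackingState.Tracked)
            {
                _lockedTrackingId = s.TrackingId;
                break;
            }
        }
    }

    if (_lockedTrackingId != 0)
        sensor.SkeletonStream.ChooseSkeletons(_lockedTrackingId); // only this person
}
```

Once `ChooseSkeletons` is called with a single id, people moving in the background are ignored until the application picks a new id (for example, when the locked person leaves the frame).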

Kinect SDK 2.0 joint angles and tracking

╄→尐↘猪︶ㄣ · Submitted on 2019-12-06 16:35:12
Question: How do I check whether the joints I am accessing have a tracking state of Tracked? I am finding the angles of 8 of the joints, and I can't seem to get the result to be displayed on my screen.

```csharp
public double AngleBetweenTwoVectors(Vector3D vectorA, Vector3D vectorB)
{
    double dotProduct = 0.0;
    vectorA.Normalize();
    vectorB.Normalize();
    dotProduct = Vector3D.DotProduct(vectorA, vectorB);
    return (double)Math.Acos(dotProduct) / Math.PI * 180;
}

public double[] GetVector(Body skeleton)
{
    Vector3D
```
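Each Kinect v2 Joint carries its own TrackingState, so the angle computation can be gated per joint. A sketch combining that check with the question's angle formula; `JointAngle` is a hypothetical helper, and the clamp guards against floating-point rounding pushing the dot product past ±1 (which would make Acos return NaN):

```csharp
// Sketch: compute the angle at joint "b" between segments b->a and b->c,
// but only when all three joints are actually tracked.
double? JointAngle(Body body, JointType a, JointType b, JointType c)
{
    Joint ja = body.Joints[a], jb = body.Joints[b], jc = body.Joints[c];

    // Inferred positions are guesses and make angles jump around, so skip
    // the computation unless every joint is Tracked.
    if (ja.TrackingState != TrackingState.Tracked ||
        jb.TrackingState != TrackingState.Tracked ||
        jc.TrackingState != TrackingState.Tracked)
        return null;

    Vector3D v1 = new Vector3D(ja.Position.X - jb.Position.X,
                               ja.Position.Y - jb.Position.Y,
                               ja.Position.Z - jb.Position.Z);
    Vector3D v2 = new Vector3D(jc.Position.X - jb.Position.X,
                               jc.Position.Y - jb.Position.Y,
                               jc.Position.Z - jb.Position.Z);
    v1.Normalize();
    v2.Normalize();

    double dot = Vector3D.DotProduct(v1, v2);
    dot = Math.Max(-1.0, Math.Min(1.0, dot)); // keep Acos in its valid domain
    return Math.Acos(dot) / Math.PI * 180.0;  // degrees at joint b
}
```

A `null` result tells the display code to show nothing (or the previous value) for that joint instead of a bogus angle.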