kinect

All frames from Kinect at 30FPS

魔方 西西 submitted on 2019-12-11 04:38:51
Question: I am using the Microsoft Kinect SDK and I would like to know whether it is possible to get the depth frame, the color frame, and the skeleton data for all frames at 30 fps. Using Kinect Explorer I can see that the color and depth frames run at nearly 30 fps, but as soon as I choose to view the skeleton, it drops to around 15-20 fps. Answer 1: Yes, it is possible to capture color/depth at 30fps while capturing the skeleton. See image below, just in case you think me dodgy. :) This is a raw Kinect
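For reference, the usual Kinect SDK 1.x pattern for this is to enable all three streams on one sensor and subscribe to AllFramesReady, which delivers matched color, depth and skeleton frames in a single callback. A minimal C# sketch, assuming a single connected sensor:

    using System;
    using System.Linq;
    using Microsoft.Kinect;

    class AllStreamsAt30Fps
    {
        static void Main()
        {
            KinectSensor sensor = KinectSensor.KinectSensors
                .FirstOrDefault(s => s.Status == KinectStatus.Connected);
            if (sensor == null) return;

            sensor.ColorStream.Enable(ColorImageFormat.RgbResolution640x480Fps30);
            sensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);
            sensor.SkeletonStream.Enable();

            // AllFramesReady fires once per matched frame set, nominally at 30 fps.
            sensor.AllFramesReady += (s, e) =>
            {
                using (ColorImageFrame color = e.OpenColorImageFrame())
                using (DepthImageFrame depth = e.OpenDepthImageFrame())
                using (SkeletonFrame skeletons = e.OpenSkeletonFrame())
                {
                    // Any of these can be null if a frame was dropped; copy the data
                    // out quickly and keep this handler short so frames are not throttled.
                }
            };

            sensor.Start();
            Console.ReadLine();   // keep the process alive while frames arrive
            sensor.Stop();
        }
    }

Frame-rate drops usually come from doing heavy work (drawing, copying) inside the handler rather than from the SDK itself.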

Weird results using Kinect for Windows SDK, XNA and Seated Mode

醉酒当歌 submitted on 2019-12-11 04:16:33
Question: I have a weird problem when using seated mode in the XNA example of the Kinect for Windows Developer Toolkit 1.5.1. The only thing I add to the code is the following line: this.Sensor.SkeletonStream.TrackingMode = SkeletonTrackingMode.Seated; Has anybody tried using XNA and seated mode without having this problem? Answer 1: This looks like the XNA sample tries to draw all joints. Since only a limited set of joints is available in seated mode, the remaining joints are drawn at a default position. Answer 2: I
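Answer 1's point can be applied directly in the sample's draw loop: in seated mode only the ten upper-body joints are tracked, so skip joints that report NotTracked instead of drawing them at their default position. A minimal C# sketch, where DrawJoint is a hypothetical placeholder for the XNA sample's own drawing call:

    foreach (Skeleton skeleton in skeletons)
    {
        if (skeleton.TrackingState != SkeletonTrackingState.Tracked) continue;

        foreach (Joint joint in skeleton.Joints)
        {
            // In seated mode the lower-body joints report NotTracked; drawing them
            // anyway is what produces the stray points at a default position.
            if (joint.TrackingState == JointTrackingState.NotTracked) continue;
            DrawJoint(joint); // hypothetical: replace with the sample's draw code
        }
    }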

C# Kinect for Windows: How to combine/overlay the skeleton and color stream/image?

不羁的心 submitted on 2019-12-11 03:45:18
Question: I'm trying to write a program that shows the current color stream and overlays/combines it with the skeleton stream. I took the Microsoft SkeletonViewer example and tried to implement the color stream. That worked, and the color stream is running, but the skeleton disappeared... So now my question: how can I enable the skeleton so that, if a person stands right in front of the Kinect, both the color stream and the skeleton appear? Here's my code: namespace Microsoft.Samples.Kinect
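One way to get both on screen, assuming SDK 1.6 or later, is to keep the SkeletonViewer's drawing code but map each joint from skeleton space into color-image space with the sensor's CoordinateMapper, then draw at those pixel coordinates on top of the color image. A minimal C# sketch, assuming 'sensor' has its color, depth and skeleton streams enabled:

    ColorImagePoint p = sensor.CoordinateMapper.MapSkeletonPointToColorPoint(
        joint.Position, ColorImageFormat.RgbResolution640x480Fps30);

    // p.X / p.Y are pixel coordinates in the 640x480 color frame; position an
    // Ellipse (or similar) there on a Canvas laid over the color Image control.

A common cause of the skeleton "disappearing" is that it is still drawn in its own coordinate space behind or outside the color bitmap; mapping the joints into color coordinates and drawing over the color control avoids both problems.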

How to convert Kinect RGB and depth images to real-world XYZ coordinates?

孤街浪徒 submitted on 2019-12-11 02:39:49
Question: I have recently been using the Kinect to find the distance to some markers, and I'm stuck on converting the Kinect RGB and depth images, which are in pixels, to the real-world XYZ coordinates I want in meters. Answer 1: You can use the depthToPointCloud function in the Computer Vision System Toolbox for MATLAB. Answer 2: Please note that in Kinect SDK 1.8 (Kinect 1), it's not possible to convert from RGB image space to world space: only from depth image space to world space. Other possible conversions are: Depth -> RGB
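Outside MATLAB, the same conversion can be done by hand with the pinhole model: take the depth z at pixel (u, v) and back-project it using the depth camera's intrinsics. A short C# sketch, where the intrinsic values are placeholders that must come from calibration (the Kinect SDK's CoordinateMapper.MapDepthPointToSkeletonPoint does the equivalent for you and returns meters):

    // Assumed placeholder intrinsics for a 640x480 Kinect v1 depth image; calibrate your own device.
    double fx = 580.0, fy = 580.0;   // focal lengths in pixels (assumption)
    double cx = 320.0, cy = 240.0;   // principal point (assumption)

    double z = depthMillimeters / 1000.0;   // depth at pixel (u, v), converted to meters
    double X = (u - cx) * z / fx;
    double Y = (v - cy) * z / fy;
    double Z = z;                            // (X, Y, Z) in meters in the depth camera frame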

Is there any non-obvious difference between using C++ or C# for a Windows Kinect application? (e.g. performance, features) [closed]

别来无恙 submitted on 2019-12-11 02:32:12
Question: [Closed 7 years ago as likely to solicit debate rather than answers supported by facts, references, or expertise.] Is it just a matter of preference and familiarity, or does the language make an actual difference? Answer 1: Both are treated as first-class

Processing Kinect Depth data in MATLAB

别说谁变了你拦得住时间么 submitted on 2019-12-10 22:09:31
Question: I used the Kinect to obtain some depth images, which are now saved. If I want to process a depth image to get the Z value (i.e. the distance from the object to the Kinect), how should I do that? I have been doing some research online and found that I need to save the image as a 16-bit depth image for the depth values to be stored, instead of an 8-bit image, which can only store up to 256 values, based on: Save Kinect depth image in Matlab? But I still do not quite understand
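For context on where the values come from: with the Kinect SDK (1.6 or later) the per-pixel depth is already a millimeter distance once the 3 player-index bits are stripped, which DepthImagePixel does for you, so a 16-bit image can hold it exactly. A small C# sketch of reading one value, as an assumption about how the images were originally captured:

    using (DepthImageFrame frame = e.OpenDepthImageFrame())
    {
        if (frame == null) return;

        DepthImagePixel[] pixels = new DepthImagePixel[frame.PixelDataLength];
        frame.CopyDepthImagePixelDataTo(pixels);

        // Depth of the center pixel, in millimeters from the sensor plane.
        short depthMm = pixels[frame.Width / 2 + (frame.Height / 2) * frame.Width].Depth;
    }

Assuming those millimeter values were written to the 16-bit image unchanged, reading the image back in MATLAB gives the Z distance in millimeters directly; divide by 1000 for meters.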

Kinect error enabling stream

心不动则不痛 submitted on 2019-12-10 17:54:30
Question: This is my first time trying to make a program that uses the Kinect and I have no idea why I keep getting a null error. Maybe someone who knows the Kinect SDK better can help? public ProjKinect() { InitializeComponent(); updateSensor(0); // set current sensor as 0 since we just started } public void updateSensor(int sensorI) { refreshSensors(); // see if any new ones connected if (sensorI >= sensors.Length) // if it goes to the end, then repeat { sensorI = 0; } currentSensorInt = sensorI; if
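A null error here is typically caused by either no sensor being in the Connected state or the index running past the sensor list before Start is called. A defensive version of the selection step, sketched in C# (using System.Linq) and reusing sensorI and currentSensorInt from the question:

    KinectSensor[] connected = KinectSensor.KinectSensors
        .Where(s => s.Status == KinectStatus.Connected).ToArray();
    if (connected.Length == 0)
    {
        // Nothing plugged in, or drivers missing: bail out instead of dereferencing null.
        return;
    }

    currentSensorInt = sensorI % connected.Length;   // wrap around instead of running off the end
    KinectSensor currentSensor = connected[currentSensorInt];
    currentSensor.Start();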

Using Microsoft Kinect with OpenCV 3.0.0

好久不见. submitted on 2019-12-10 12:04:27
Question: Hello, I am trying to get disparity maps from a Microsoft Kinect for Xbox 360. I have OpenCV 3.0.0 and OpenNI2 with libfreenect installed. When I run my code #include "opencv2/opencv.hpp" using namespace cv; int main(int, char**){ VideoCapture capture( CAP_OPENNI2 ); namedWindow("win",1); for(;;){ Mat depthMap; capture >> depthMap; imshow("win",depthMap); if( waitKey( 30 ) >= 0 ) break; } return 0; } my Kinect starts projecting the IR pattern, but then I get a bunch of errors OpenNI2

EmguCv TypeInitializationException Thrown by EmguCv.CV.CvInvoke

笑着哭i submitted on 2019-12-10 11:37:40
Question: Let me start off by saying that I have indeed followed many tutorials, such as the one located on EmguCV's main site, in their entirety, but I still get a TypeInitializationException. Now, listen closely, because here comes the extremely weird part. There are three "levels" of my problem; however, the code in all "levels" is EXACTLY the same, without even the slightest change. This would naturally suggest that I have a reference or linkage problem, but again I've

The angle between an object and Kinect's optic axis

て烟熏妆下的殇ゞ submitted on 2019-12-10 10:35:24
Question: Here's my setup: a Kinect mounted on an actuator for horizontal movement. Here's a short demo of what I am doing: http://www.youtube.com/watch?v=X1aSMvDQhDM Here's my scenario: please refer to the figure above. Assume the distance between the center of the actuator, 'M', and the center of the optic axis of the Kinect, 'C', is 'dx' (millimeters); the depth information 'D' (millimeters) obtained from the Kinect is relative to the optic axis. Since I now have an actuator mounted onto the center of the Kinect, the
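A rough way to turn this into an angle, under explicit assumptions (Kinect v1 depth camera with a nominal 57 degree horizontal field of view over 640 pixels, and dx measured along the optical axis between M and C), sketched in C#:

    const double HorizontalFovDeg = 57.0;   // nominal Kinect v1 horizontal FOV (assumption)

    // u = pixel column of the object in the 640-wide depth image, D = depth in mm.
    double angleFromOpticAxis = ((u - 320.0) / 320.0) * (HorizontalFovDeg / 2.0) * Math.PI / 180.0;
    double lateralOffsetMm    = D * Math.Tan(angleFromOpticAxis);     // offset from the optic axis
    double angleAboutActuator = Math.Atan2(lateralOffsetMm, D + dx);  // radians, measured about M

If dx is instead perpendicular to the optical axis, the offset belongs in the numerator (lateralOffsetMm + dx) rather than being added to D.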