Kinect

How can I access the Kinect/device via OpenNI?

醉酒当歌 submitted on 2019-11-28 06:20:30
Question: I was looking over the documentation trying to find anything that will let me access the Kinect device. I'm trying to get accelerometer data, but I'm not sure how. So far I've spotted two things in the guide and docs: XnModuleDeviceInterface/xn::ModuleDevice and XnModuleLockAwareInterface/xn::ModuleLockAwareInterface. I'm wondering if I can use the ModuleDevice Get/Set methods to talk to the device and ask for accelerometer data. If so, how can I get started? Also, I was thinking, if it…

rtabmap with Kinect 1 (RealSense D435i)

你说的曾经没有我的故事 submitted on 2019-11-28 04:07:50
1. Install the Kinect driver:
   a. sudo apt-get install ros-kinetic-freenect-*
   b. rospack profile
2. Install rtabmap_ros:
   sudo apt-get install ros-kinetic-rtabmap-ros
3. Launch rtabmap.
   For Kinect 1:
   a. roslaunch freenect_launch freenect.launch depth_registration:=true
   b. roslaunch rtabmap_ros rtabmap.launch rtabmap_args:="--delete_db_on_start"
   For RealSense D435i:
   a. roslaunch realsense2_camera rs_camera.launch align_depth:=true unite_imu_method:="linear_interpolation"
   b. rosrun imu_filter_madgwick imu_filter_node _use_mag:=false _publish_tf:=false _world_frame:="enu" /imu/data_raw:=/camera/imu /imu/data:=/rtabmap/imu
   c. …

Kinect sideways skeleton tracking

半腔热情 submitted on 2019-11-28 03:48:34
Currently I am using the Microsoft Kinect to measure angles between joints. Most measurements work correctly, but whenever a person is sitting sideways (on a chair) the Kinect won't track the skeleton accurately. To illustrate my problem I've added 3 pictures of the Kinect depth view. As you can see, 2 out of 3 measurements work "correctly". Whenever I lift my leg, the Kinect stops tracking the skeleton correctly. Does anyone have a solution to this problem, or is this just a limitation of the Kinect? Thanks. Update 1: The JointTrackingState enumeration on these tracked joints shown at…

How to get real world coordinates (x, y, z) from a distinct object using a Kinect

核能气质少年 submitted on 2019-11-27 20:16:58
I have to get the real-world coordinates (x, y, z) using a Kinect. Specifically, I want the x, y, z distance (in meters) from the Kinect to a unique object in the scene (e.g. a little yellow box) colored in a distinct color. Here you can see an example of the scenario: I want the distance (x, y, z in meters) of the yellow object on the shelf. Note that a person (skeleton) is not required in the scene. First of all, I would like to know whether this is possible and simple to do. If so, I would appreciate any links/code that could help me with this task. Hayko
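Once the object's pixel and its depth reading are known, the metric (x, y, z) follows from the standard pinhole back-projection. The sketch below is illustrative only: the intrinsics fx, fy, cx, cy are assumed typical Kinect-1 values, not something given in the question.

```python
# Hypothetical sketch: back-project a pixel (u, v) with a depth reading in mm
# to camera-space metres using the pinhole model. The default intrinsics are
# assumed, roughly typical for a Kinect 1 at 640x480; calibrate for real use.

def pixel_to_world(u, v, depth_mm, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Return (x, y, z) in metres for pixel (u, v) with depth in millimetres."""
    z = depth_mm / 1000.0            # mm -> m
    x = (u - cx) * z / fx            # horizontal offset from the optical axis
    y = (v - cy) * z / fy            # vertical offset from the optical axis
    return (x, y, z)
```

A pixel at the optical centre with a 1000 mm reading maps to (0, 0, 1.0), i.e. one metre straight ahead.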

Kinect Depth and Image Frames Alignment

ⅰ亾dé卋堺 submitted on 2019-11-27 18:17:34
Question: I am playing around with the new Kinect SDK v1.0.3.190 (other related questions on Stack Overflow concern previous versions of the Kinect SDK). I get depth and color streams from the Kinect. As the depth and RGB streams are captured with different sensors, there is a misalignment between the two frames, as can be seen below. Only RGB. Only Depth. Depth & RGB. I need to align them, and there is a function named MapDepthToColorImagePoint exactly for this purpose. However, it doesn't seem to work. Here is an equally blended…
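Structurally, what MapDepthToColorImagePoint does is give, for each depth pixel, its coordinates in the color image, so the depth data can be re-sampled into the color frame. The stand-in sketch below uses a constant horizontal shift instead of the SDK's real per-pixel mapping (which depends on depth and calibration) purely to show the re-sampling step; the function name and shift are hypothetical.

```python
# Illustrative only: the SDK computes a per-pixel depth->color mapping; here a
# constant horizontal shift stands in for it, to show how depth samples get
# scattered into color-image coordinates before blending the two frames.

def align_depth_row(depth_row, shift):
    """Resample one depth row into color coordinates using a fixed shift."""
    aligned = [0] * len(depth_row)       # 0 marks "no depth sample landed here"
    for x, d in enumerate(depth_row):
        cx = x + shift                   # stand-in for the SDK's mapping
        if 0 <= cx < len(aligned):
            aligned[cx] = d
    return aligned
```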

OpenCV: How to visualize a depth image

筅森魡賤 submitted on 2019-11-27 17:27:56
I am using a dataset in which each pixel of an image is a 16-bit unsigned int storing that pixel's depth value in mm. I am trying to visualize this as a grayscale depth image by doing the following: cv::Mat depthImage; depthImage = cv::imread("coffee_mug_1_1_1_depthcrop.png", CV_LOAD_IMAGE_ANYDEPTH | CV_LOAD_IMAGE_ANYCOLOR); // read the file depthImage.convertTo(depthImage, CV_32F); // convert the image data to float type namedWindow("window"); float max = 0; for(int i = 0; i < depthImage.rows; i++){ for(int j = 0; j < depthImage.cols; j++){ if(depthImage.at<float>(i,j) > max){…
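The excerpt is finding the maximum depth so the image can be rescaled to a displayable range. The usual next step is to map each value to 0-255, as `convertTo` with `alpha = 255.0 / max` would do. A minimal pure-Python stand-in for that scaling step, operating on a flat list of depth values:

```python
# Sketch of the visualization step: scale 16-bit depth values (mm) to 0-255 so
# they can be shown as an 8-bit grayscale image. Stand-in for OpenCV's
# depthImage.convertTo(gray, CV_8U, 255.0 / max).

def depth_to_gray(depth, max_depth=None):
    """Map a flat list of depth values to 0-255 grayscale intensities."""
    if max_depth is None:
        max_depth = max(depth) or 1      # avoid division by zero on empty scenes
    return [min(255, int(d * 255.0 / max_depth)) for d in depth]
```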

Official Kinect SDK vs. Open-source alternatives

喜你入骨 submitted on 2019-11-27 16:55:50
Where do they differ? What are the advantages of choosing libfreenect or OpenNI+SensorKinect, for example, over the official SDK, and vice versa? What are the disadvantages? Please note that the answer below is current as of its date, and some facts may well be outdated in the near future. The current state of the official Kinect SDK is beta 1.00.12. The first obvious difference is that the official SDK is maintained by the Microsoft Research team, while OpenKinect is an open-source SDK maintained by the open-source community. Each has its pros and cons. The official SDK is developed by Microsoft, which also…

Otsu thresholding for depth image

℡╲_俬逩灬. submitted on 2019-11-27 16:08:44
Question: I am trying to subtract the background from depth images acquired with a Kinect. When I learned what Otsu thresholding is, I thought it could help with this. By converting the depth image to grayscale, I can hopefully apply Otsu's threshold to binarize the image. However, when I implemented (tried to implement) this with OpenCV 2.3, it was in vain. The output image is binarized, but very unexpectedly. I did the thresholding continuously (i.e. printed the result to the screen to analyze each frame) and saw…
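For checking what `cv::threshold(..., THRESH_OTSU)` should be producing on a given frame, it can help to compute Otsu's threshold by hand. Below is a compact, stdlib-only version of the standard algorithm (maximize between-class variance over an 8-bit histogram); it is a sketch for sanity-checking, not OpenCV's own implementation.

```python
# Otsu's method: pick the threshold t that maximizes the between-class
# variance w_bg * w_fg * (m_bg - m_fg)^2 over the 0-255 histogram.

def otsu_threshold(pixels):
    """Return the Otsu threshold for an iterable of 0-255 intensity values."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_bg = 0.0                             # running sum of background values
    w_bg = 0                                 # background pixel count
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_bg += hist[t]
        if w_bg == 0:
            continue
        w_fg = total - w_bg                  # foreground pixel count
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        m_bg = sum_bg / w_bg                 # background mean
        m_fg = (sum_all - sum_bg) / w_fg     # foreground mean
        var = w_bg * w_fg * (m_bg - m_fg) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

One caveat relevant to depth data: invalid depth readings come back as 0, and that large spike at intensity 0 can dominate the histogram and pull the threshold somewhere unexpected, which matches the "very unexpectedly binarized" symptom; masking out zero pixels before thresholding is worth trying.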

Kinect SDK player detection

不打扰是莪最后的温柔 submitted on 2019-11-27 16:06:26
I just created a 2-player game (like ShapeGame), but the problem is that when one of the players leaves the game scene, I can't detect which one (which player) left. Imagine there are 2 cars in the game. The first detected player (call it player1) uses the left one and player2 uses the right one. When player1 leaves the scene, player2 suddenly takes control of the left car, and if player1 rejoins the game, player1 takes back control of the left car and player2 regains control of the right car. int id = 0; foreach (SkeletonData data in skeletonFrame.Skeletons) { if (SkeletonTrackingState…
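The symptom above is typical when players are identified by their index in the Skeletons array, which shifts when someone leaves. The Kinect SDK exposes a stable TrackingID per skeleton; keying player-to-car assignments on that ID keeps the remaining player on their car. A hedged sketch of that bookkeeping (the slot names and function are illustrative, not from the SDK):

```python
# Sketch: key players by the skeleton's stable TrackingID rather than by array
# index, so the remaining player keeps their car when the other player leaves.

def assign_slots(tracked_ids, slots):
    """Update a {tracking_id: slot} mapping; free slots go to new ids in order."""
    slots = {tid: s for tid, s in slots.items() if tid in tracked_ids}  # drop leavers
    free = [s for s in ("left", "right") if s not in slots.values()]
    for tid in tracked_ids:
        if tid not in slots and free:
            slots[tid] = free.pop(0)         # newcomers take the first free car
    return slots
```

For example, if players 7 and 9 are tracked and player 7 leaves, player 9 keeps the right car; when 7 rejoins it gets the left car back.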

Using Kinect with Emgu CV

偶尔善良 submitted on 2019-11-27 14:13:47
Question: With EmguCV, to capture an image from a webcam we use: Capture cap = new Capture(0); Image<Bgr, byte> nextFrame = cap.QueryFrame(); ... But I don't know how to capture images from my Kinect; I have tried the kinectCapture class but it didn't work for me. Thanks. Answer 1: Basically, you need to capture an image from the ColorStream and convert it to an EmguCV Image class. Conversion to an EmguCV Image from a Windows Bitmap (Kinect ColorStream): you have a Windows Bitmap variable which holds the Kinect…
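The core of that conversion is a pixel-format change: the Kinect 1 color stream delivers 32-bit-per-pixel data (blue, green, red, plus an unused fourth byte), while EmguCV's Image<Bgr, byte> wants 3-byte BGR. This stdlib sketch mirrors what happens to the raw pixel buffer during the Bitmap-to-Image conversion; the exact buffer layout is an assumption about the color format in use.

```python
# Sketch: strip the 4th (alpha/unused) byte from each 32-bit BGRA/BGRX pixel,
# yielding the tightly packed 3-byte BGR layout that Image<Bgr, byte> expects.

def bgra_to_bgr(buf):
    """Drop every 4th byte from a BGRA byte buffer, keeping B, G, R."""
    out = bytearray()
    for i in range(0, len(buf), 4):
        out += buf[i:i + 3]              # keep B, G, R; skip the 4th byte
    return bytes(out)
```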