Kinect

Mapping and navigation with Kinect v2 using RTAB-Map: node introduction

Submitted by a 夏天 on 2020-01-14 01:05:26
Install RTAB-Map:

$ sudo apt-get install ros-indigo-rtabmap-ros

Start the Kinect v2 bridge:

roslaunch kinect2_bridge kinect2_bridge.launch publish_tf:=true

Publish the static tf:

rosrun tf static_transform_publisher 0 0 0 -1.5707963267948966 0 -1.5707963267948966 camera_link kinect2_link 100

Launch RTAB-Map:

roslaunch rtabmap_ros rtabmap.launch rtabmap_args:="--delete_db_on_start" rgb_topic:=/kinect2/qhd/image_color_rect depth_topic:=/kinect2/qhd/image_depth_rect camera_info_topic:=/kinect2/qhd/camera_info

The resulting database can be inspected with the databaseViewer tool:

rtabmap-databaseViewer ~/.ros/rtabmap.db

Map with rviz: roslaunch rtabmap_ros

Kinect SDK 1.6 and Joint.ScaleTo method

Submitted by 拜拜、爱过 on 2020-01-13 19:55:08
Question: I'm using Kinect SDK 1.6, and I'm following the Skeleton Tracking Fundamentals tutorial of the Windows Kinect Quickstart Series, available here. Even though these tutorials were made for SDK 1.0, all was going pretty well until I followed the instructions to map the position of my hands onto a custom-sized window (say 1280x720). Dan Fernandez uses the following line of code to achieve this: private void ScalePosition(FrameworkElement element, Joint joint) { // Convert the value to X/Y; Joint
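The mapping the tutorial performs with the Coding4Fun `ScaleTo` extension can be expressed independently of the SDK. A minimal sketch in Python, assuming the joint's skeleton-space coordinates fall roughly within plus/minus `skeleton_max` (illustrative bounds, not SDK constants):

```python
def scale_position(x, y, width=1280, height=720,
                   skeleton_max_x=1.0, skeleton_max_y=1.0):
    """Map a skeleton-space joint coordinate (roughly -max..+max)
    onto pixel coordinates of a width x height window.

    Clamping keeps the cursor inside the window when the hand
    moves outside the tracked range."""
    scaled_x = (x + skeleton_max_x) * (width / (2 * skeleton_max_x))
    # Screen y grows downward, skeleton y grows upward, so flip it.
    scaled_y = (skeleton_max_y - y) * (height / (2 * skeleton_max_y))
    return (min(max(scaled_x, 0), width), min(max(scaled_y, 0), height))
```

A joint at the skeleton-space origin lands at the window center, `(640.0, 360.0)`; anything beyond the assumed range is clamped to the window edges.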

Controlling the Kinect motor with the OpenNI driver

Submitted by 与世无争的帅哥 on 2020-01-12 08:06:08
Of the Kinect drivers, my favorite is OpenNI, mainly because it is open source and exposes more data (someday I should write a comparison of the two drivers; similar comparisons already exist online, but they are not comprehensive and are out of date, e.g. http://www.cnblogs.com/TravelingLight/archive/2011/06/20/2085149.html; for instance, the latest version of NiTE no longer requires the "surrender pose" before it can deliver skeleton data). Compared with the MS Kinect SDK, however, it has one significant drawback: it provides no interface for controlling the Kinect's tilt motor. So whenever I wanted to control the Kinect's tilt, I had to keep switching between the two vendors' SDKs, or, when too lazy to switch drivers, simply rotate the sensor by hand, which was quite a nuisance. Today I came across a gem of a thread by Nicolas Tisserand on the openni-dev Google group: "Easy way to control Kinect motor through OpenNI". In it he successfully controls the motor using XnUSB.h (one of the OpenNI headers), wrapping the Kinect motor in a class. His code sends redundant control messages and only works when the Kinect has not yet been initialized and opened; it fails when the camera is already running, so I modified the following function:

#include <XnUSB.h>
bool _isOpen = false;

bool

Vectorizing the Kinect real-world coordinate processing algorithm for speed

Submitted by 末鹿安然 on 2020-01-11 19:51:27
Question: I recently started working with the Kinect v2 on Linux with pylibfreenect2. When I first managed to show the depth-frame data in a scatter plot, I was disappointed to see that none of the depth pixels seemed to be in the correct location. Side view of a room (notice that the ceiling is curved). I did some research and realized there is some simple trigonometry involved in the conversion. To test, I started with a pre-written function in pylibfreenect2 which accepts a column, row and a depth pixel
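The loop-free version of that per-pixel conversion is a direct application of the pinhole camera model over whole NumPy arrays. A sketch assuming a depth image in millimeters; the intrinsics `fx`, `fy`, `cx`, `cy` below are placeholder values for illustration, not an actual Kinect v2 calibration (the real values come from the device's IR camera parameters):

```python
import numpy as np

def depth_to_points(depth, fx=365.0, fy=365.0, cx=256.0, cy=212.0):
    """Convert an HxW depth image (millimeters) to an HxWx3 array of
    camera-space coordinates, processing all pixels at once instead
    of looping in Python."""
    rows, cols = depth.shape
    c, r = np.meshgrid(np.arange(cols), np.arange(rows))  # pixel index grids
    z = depth.astype(np.float64)
    x = (c - cx) * z / fx   # back-project along the image x axis
    y = (r - cy) * z / fy   # back-project along the image y axis
    return np.dstack((x, y, z))
```

Vectorizing this way replaces hundreds of thousands of per-pixel function calls with a handful of array operations, which is where the speedup in the question's title comes from.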

how to use skeletal joint to act as cursor using bounds (No gestures)

Submitted by 自闭症网瘾萝莉.ら on 2020-01-11 12:56:10
Question: I just want to be able to do something when my skeletal joint (x, y, z) coordinates are over the x, y, z coordinates of the button. I have the following code, but somehow it doesn't work properly: as soon as my hand moves it does something, without my hand reaching the button. if (skeletonFrame != null) { //int skeletonSlot = 0; Skeleton[] skeletonData = new Skeleton[skeletonFrame.SkeletonArrayLength]; skeletonFrame.CopySkeletonDataTo(skeletonData); Skeleton playerSkeleton = (from s in
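The symptom described, firing as soon as the hand moves at all, usually means only one axis (or no range at all) is being tested. The hit-test has to hold on every axis simultaneously; a language-neutral sketch in Python, with hypothetical button bounds expressed in the same coordinate space as the joint:

```python
def joint_over_button(joint, button_min, button_max):
    """Return True only when the joint position lies inside the button's
    axis-aligned bounding box on every axis. `joint`, `button_min` and
    `button_max` are (x, y, z) tuples in the same coordinate space."""
    return all(lo <= v <= hi
               for v, lo, hi in zip(joint, button_min, button_max))
```

For example, a hand at (0.2, 0.5, 1.4) is over a button spanning (0.1, 0.4, 1.2) to (0.3, 0.6, 1.6), but moving the hand outside any one of the three ranges makes the test fail.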

How to convert Kinect raw depth info to meters in Matlab?

Submitted by 笑着哭i on 2020-01-10 20:14:11
Question: I have done some research here to understand this topic, but I have not achieved good results. I'm working with a Kinect for Windows and the Kinect SDK 1.7, and with Matlab to process the raw depth-map info. First, I used this method (https://stackoverflow.com/a/11732251/3416588) to store the Kinect raw depth data to a text file. I got a list with (480x640 = 307200) elements and data like this: 23048 23048 23048 -8 -8 -8 -8 -8 -8 -8 -8 6704 6720 6720 6720 6720 6720 6720 6720 6720 6736 6736
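The negative numbers in that dump are explained by how SDK 1.x packs a depth frame: each 16-bit value carries the player index in its low three bits, and reading the buffer as signed integers makes large values wrap negative. Shifting right by three recovers millimeters. The question works in Matlab; this is an equivalent Python sketch for illustration:

```python
import numpy as np

def raw_depth_to_meters(raw):
    """Convert raw Kinect SDK 1.x depth values (player index packed into
    the low 3 bits) to meters. Values read as negative come from treating
    the 16-bit buffer as signed, so reinterpret as unsigned first."""
    raw = np.asarray(raw, dtype=np.int16).view(np.uint16)
    mm = raw >> 3          # drop the 3-bit player index -> millimeters
    return mm / 1000.0
```

So 6704 becomes 0.838 m, and -8 reinterpreted as unsigned is 65528, which shifts to 8191 mm, likely a "no valid reading" sentinel rather than a real distance.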

How to display a 3D image when we have depth and RGB Mats in OpenCV (captured from Kinect)

Submitted by 北慕城南 on 2020-01-10 02:09:11
Question: We captured a 3D image using Kinect with the OpenNI library and got the RGB and depth images in the form of OpenCV Mat using this code.

main() {
    OpenNI::initialize();
    puts( "Kinect initialization..." );
    Device device;
    if ( device.open( openni::ANY_DEVICE ) != 0 ) {
        puts( "Kinect not found !" );
        return -1;
    }
    puts( "Kinect opened" );
    VideoStream depth, color;
    color.create( device, SENSOR_COLOR );
    color.start();
    puts( "Camera ok" );
    depth.create( device, SENSOR_DEPTH );
    depth.start();
    puts( "Depth
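With the two Mats in hand, a 3D display reduces to back-projecting every valid depth pixel and coloring it from the registered RGB image. A minimal NumPy sketch, assuming an aligned HxW depth array in millimeters and an HxWx3 RGB array; the intrinsics are placeholders, not a real calibration:

```python
import numpy as np

def colored_point_cloud(depth_mm, rgb, fx=525.0, fy=525.0,
                        cx=319.5, cy=239.5):
    """Back-project every valid depth pixel and attach its RGB color,
    yielding an Nx6 array: (x, y, z in meters, r, g, b)."""
    rows, cols = depth_mm.shape
    c, r = np.meshgrid(np.arange(cols), np.arange(rows))
    z = depth_mm / 1000.0
    valid = z > 0                       # drop pixels with no depth reading
    x = (c - cx) * z / fx
    y = (r - cy) * z / fy
    xyz = np.stack((x[valid], y[valid], z[valid]), axis=1)
    return np.hstack((xyz, rgb[valid].astype(np.float64)))
```

The resulting Nx6 array can be handed to any point-cloud viewer; this only works as-is if the depth and RGB frames are registered to the same viewpoint, otherwise the colors land on the wrong points.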

How to fill the black patches in a kinect v1 depth image

Submitted by 倾然丶 夕夏残阳落幕 on 2020-01-07 02:44:08
Question: I am using the Object Segmentation dataset, which has the following information: Introduced: IROS 2012. Device: Kinect v1. Description: 111 RGB-D images of stacked and occluding objects on a table. Labelling: per-pixel segmentation into objects. Link to the page: http://www.acin.tuwien.ac.at/?id=289 I am trying to use the depth map provided by the dataset. However, the depth map appears completely black. Original image for the above depth map. I tried to do some preprocessing and normalised the image so
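The black patches in a Kinect v1 depth map are pixels where the IR projector got no return (raw value 0), so normalisation alone cannot recover them; they have to be filled from neighboring valid depth. A simple NumPy-only sketch, standing in for proper inpainting such as OpenCV's cv2.inpaint:

```python
import numpy as np

def fill_depth_holes(depth, max_iters=100):
    """Fill zero-valued (missing) depth pixels with the mean of their
    valid 4-neighbors, repeating until every hole is filled or the
    iteration budget runs out."""
    d = depth.astype(np.float64).copy()
    for _ in range(max_iters):
        holes = d == 0
        if not holes.any():
            break
        padded = np.pad(d, 1)           # zero border so edges have 4 neighbors
        neighbors = np.stack([padded[:-2, 1:-1], padded[2:, 1:-1],
                              padded[1:-1, :-2], padded[1:-1, 2:]])
        counts = (neighbors > 0).sum(axis=0)   # valid neighbors per pixel
        sums = neighbors.sum(axis=0)
        fillable = holes & (counts > 0)
        d[fillable] = sums[fillable] / counts[fillable]
    return d
```

Each pass grows valid depth one pixel into the holes, so large patches take several iterations; for display purposes this is usually enough, though it smears real depth discontinuities.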

The type or namespace name 'InteractionHandType' could not be found Kinect SDK 1.8

Submitted by 依然范特西╮ on 2020-01-07 02:40:47
Question: I am trying to detect a closing-fist (grip) gesture to control my mouse cursor with the Kinect. I followed this tutorial for setup: http://dotneteers.net/blogs/vbandi/archive/2013/05/03/kinect-interactions-with-wpf-part-iii-demystifying-the-interaction-stream.aspx However, in the DummyInteractionClient.cs file, I am getting this error on the following line: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; using Microsoft.Kinect

How can I get past a “Library not loaded:” issue?

Submitted by 血红的双手。 on 2020-01-05 11:33:39
Question: I started playing with the Kinect and I would like to use skeleton tracking with OpenNI. Since my knowledge of C++ is limited, the easiest option is to use the ofxOpenNI addon for openFrameworks. I've downloaded the addon and successfully compiled the example, but the application can't load a dylib: [Session started at 2011-02-24 11:46:27 +0000.] dyld: Library not loaded: @executable_path/./../../../data/openni/lib/libnimCodecs.dylib Referenced from: /Users/george/Downloads/FirefoxDownloads