kinect

Kinect v2, read out data from .xef files

﹥>﹥吖頭↗ submitted on 2019-12-03 07:46:26
Question: I have collected a bunch of videos using Kinect for Windows 2 with Kinect Studio, saved with the .xef file extension. Now I want to write a program to load data from them and either play them back or save them in another format, but I have found few resources on how to do so. Is there any useful resource for that? Answer 1: What you can do is read the .xef file in Kinect Studio, then go to the Play (or Playback) tab and hit play; your program will start receiving the stream. I think it's the only way to do it, as if the data were coming from the Kinect.

Kinect in HTML5 [closed]

家住魔仙堡 submitted on 2019-12-03 03:12:00
Question: It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. Closed 6 years ago. Kinect for Windows was just released on the 1st of February. Is there any good way to make it work with HTML5 games? Perhaps even somehow with the upcoming GamePad API? Looking for ideas here, especially

Kinect / Primesense (Xtion) ROS Ubuntu through Virtual Machine (VMware)

Anonymous (unverified), submitted on 2019-12-03 02:33:02
Question: Since it took me quite some time to figure out how to get the Xtion (PrimeSense) to work on VMware, I thought I would share it here with you. (With the Kinect I still have a problem getting ROS to see the device, even though VMware has successfully connected it.) Running the command roslaunch openni2_launch openni2.launch gave me the error: Warning: USB events thread - failed to set priority. This might cause loss of data... I either got a single frame or no frame when running "rviz" and Add --> Image --> Image topic --> /camera/rgb/image_raw. So how do I get

Why won't Kinect color and depth align correctly?

Anonymous (unverified), submitted on 2019-12-03 02:06:01
Question: I've been working on this problem for quite some time and am at the end of my creativity, so hopefully someone else can help point me in the right direction. I've been working with the Kinect, attempting to capture data into MATLAB. Fortunately there are quite a few ways of doing so (I'm currently using http://www.mathworks.com/matlabcentral/fileexchange/30242-kinect-matlab ). When I attempted to project the captured data to 3D, my traditional methods gave poor reconstruction results. To cut a long story short, I ended up writing a

Official Kinect SDK and Unity3d

一曲冷凌霜 submitted on 2019-12-03 01:36:55
Does anyone know anything about using Kinect input in Unity3d with the official SDK? I've been assigned a project to try to integrate the two, but my supervisor doesn't want me to use the open Kinect stuff. The last news out of the Unity site was that the Kinect SDK requires .NET 4.0 while Unity3D only takes 3.5. Workarounds? Please point me toward resources if you know anything about it. The OpenNI bindings for Unity are probably the best way to go. The NITE skeleton is more stable than the Microsoft Kinect SDK's, but still requires calibration (PrimeSense mentioned that they'll have a calibration-free

Python- How to configure and use Kinect

北城余情 submitted on 2019-12-03 00:38:54
I have an Xbox 360 + Kinect. It's great fun to play on, so I was wondering whether I could use Python to work with it and make my own games (and play on a PC). Currently I have: 1. the drivers from Microsoft and the hardware (only); 2. no experience with 3D programming. My questions: 1. Is there a good, easy-to-use module for using the Kinect on a PC? 2. Any books on the same? I am using Windows 32 and 64 bit and Python 2.7. There is a project called OpenKinect which has many wrappers that you can make use of, including one for Python. To help you get started, there are a good few code demos

Kinect Learning (4): Extracting Depth Data

Anonymous (unverified), submitted on 2019-12-03 00:36:02
In an earlier post I extracted the Kinect's color data: Kinect Learning (3): Getting RGB Color Data. This time, the goal is to extract depth data. A depth map is an image or image channel containing information about the distance from the viewpoint to the surfaces of objects in the scene. A depth map looks like a grayscale image, except that each pixel value is the actual distance from the sensor to the object. Usually the RGB image and the depth image are registered, so their pixels correspond one to one. Code first:

#include <Windows.h>
#include <iostream>
#include <NuiApi.h>
#include <opencv2/opencv.hpp>

using namespace std;
using namespace cv;

int main(int argc, char *argv[])
{
    cv::Mat img;  // depth image: gray value encodes depth; farther = smaller value = darker
    img.create(480, 640, CV_8UC1);

    // 1. Initialize NUI with the depth stream
    HRESULT hr = NuiInitialize(NUI_INITIALIZE_FLAG_USES_DEPTH);
    if (FAILED(hr))
    {
        cout << "NuiInitialize failed" << endl;
        return hr;
    }

    // 2. Define the event handle

Kinect Learning (5): Extracting Depth Data with Player IDs

Anonymous (unverified), submitted on 2019-12-03 00:36:02
An earlier post discussed how to get a depth map from the Kinect: Kinect Learning (4): Extracting Depth Data. Here we extend that. The Kinect can deliver depth data in two formats:
(1) Depth data without player IDs: also stored in a 16-bit value, but with only 12 bits used, all for depth.
(2) Depth data with player IDs: 16 bits, where the low 3 bits hold the player ID (up to 6 people can be tracked) and the remaining 13 bits hold the depth.
The previous post (Kinect Learning (4): Extracting Depth Data) used the former; here we use the latter. With player-tagged depth data we can easily obtain each user's position and depth in the image, which helps with later operations such as segmenting the user out of the frame. As usual, code first:

#include <Windows.h>
#include <iostream>
#include <NuiApi.h>
#include <opencv2/opencv.hpp>

using namespace std;
using namespace cv;

typedef struct structBGR
{
    BYTE blue;
    BYTE green;
    BYTE red;
} BGR;

// Process each depth pixel: pixels with the same player ID are marked with the
// same color, different players get different colors, and pixels that belong
// to no player keep their original depth value.
BGR Depth2RGB(USHORT depthID)
{

ROS Advanced: Using the Kinect v1

Anonymous (unverified), submitted on 2019-12-03 00:30:01
Environment: Ubuntu 16.04 + ROS Kinetic
1. Kinect v1 overview
2. Environment setup
sudo apt-get install ros-kinetic-openni-* ros-kinetic-freenect-*
rospack profile
Run the command: roslaunch freenect_launch freenect.launch
Relevant topics:
(1) RGB image: /camera/rgb/image_color
(2) Depth image: /camera/depth/image
(3) Point cloud (no RGB merged in): /camera/depth/points
3. Camera calibration
Install the package: rosdep install camera_calibration
Run the command: rosrun camera_calibration cameracalibrator.py --size 8x6 --square 0.108 image:=/camera/image_raw camera:=/camera
size 11x8: note that this uses the letter x; the numbers are the count of interior corners on the checkerboard
square 0.108: the side length of one checkerboard square, in meters
image: the image topic to subscribe to
camera: the camera whose parameter server the calibration is published to
Calibration results:
(1) IR camera
(2) RGB camera
4. Coordinate correction
Coordinate correction has two parts:
(1) The first part corrects errors caused by incomplete data captured by the camera

Kinect v2, read out data from .xef files

僤鯓⒐⒋嵵緔 submitted on 2019-12-02 22:52:39
I have collected a bunch of videos using Kinect for Windows 2 with Kinect Studio, saved with the .xef file extension. Now I want to write a program to load data from them and either play them back or save them in another format, but I have found few resources on how to do so. Is there any useful resource for that? What you can do is read the .xef file in Kinect Studio, then go to the Play (or Playback) tab and hit play; your program will start receiving the stream. I think it's the only way to do it, as if the data were coming from the Kinect. Actually, you can use the Kinect Studio API to read and play .xef files