kinect

Creating a UserTracker crashes in NITE2 python bindings

China☆狼群 submitted 2019-12-23 19:25:21
Question: I am trying to use the OpenNI2 and NITE2 Python bindings. I'm on Windows 7 and have the 32-bit versions of Kinect SDK 1.8, OpenNI 2.2, and NITE 2.2 working without problems in Visual C++. I have 32-bit Python 2.7.5. My intention is to translate some of the examples provided with NITE to Python, but I still haven't found how to create a UserTracker without the program crashing (the same goes for HandTracker). I have been able to run the provided toy example (which doesn't make use of …

How to find peaks in 1d array

让人想犯罪 __ submitted 2019-12-23 16:32:24
Question: I am reading a CSV file in Python and preparing a DataFrame from it. A Microsoft Kinect records an arm-abduction exercise and generates this CSV file, giving me an array of Y-coordinates of the ElbowLeft joint (you can visualize it here). Now I want a solution that counts the number of peaks, i.e. local maxima, in this array. Can someone please help me solve this problem? Answer 1: You could smooth the data with a smoothing filter and then find all values …
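The answer's smooth-then-count idea can be sketched in pure Python. SciPy's `scipy.signal.find_peaks` would be the usual tool; this hand-rolled version, with a made-up ElbowLeft trace, just illustrates the logic:

```python
def smooth(values, window=3):
    """Moving-average smoothing to suppress jitter before peak detection."""
    half = window // 2
    return [sum(values[max(0, i - half):i + half + 1]) /
            len(values[max(0, i - half):i + half + 1])
            for i in range(len(values))]

def count_peaks(values, min_height=None):
    """Return indices of strict local maxima: points above both neighbours."""
    peaks = []
    for i in range(1, len(values) - 1):
        if values[i] > values[i - 1] and values[i] > values[i + 1]:
            if min_height is None or values[i] >= min_height:
                peaks.append(i)
    return peaks

# Toy ElbowLeft Y-coordinate trace with two abduction repetitions.
y = [0.1, 0.3, 0.6, 0.9, 0.6, 0.3, 0.2, 0.5, 0.8, 1.0, 0.7, 0.2]
print(len(count_peaks(smooth(y))))  # two repetitions -> 2
```

A `min_height` threshold is the usual guard against counting small jitters as repetitions.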

How to interpret the accelerometer readings from iPhone

主宰稳场 submitted 2019-12-23 09:32:35
Question: I am trying to build a Kinect- and iPhone-based application. I want to compute the acceleration of my hands over time on each of the X, Y, and Z axes from the trajectory returned by the Kinect. I select a standard time interval of 0.5 seconds, or 15 frames (dt), and three points (x0, x1, and x2) separated in time by 0.5 seconds. First, I should mention that the positions of the three points are given in meters. Using these points I compute two speeds (v0 = …
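The excerpt's setup (dt = 0.5 s, three positions in meters) leads to a standard second difference: v0 = (x1 - x0)/dt, v1 = (x2 - x1)/dt, a = (v1 - v0)/dt. A sketch with made-up joint positions:

```python
def acceleration(x0, x1, x2, dt=0.5):
    """Acceleration from three positions (meters) sampled dt seconds apart:
    a = (v1 - v0) / dt, the second difference of position."""
    v0 = (x1 - x0) / dt    # speed over the first interval, m/s
    v1 = (x2 - x1) / dt    # speed over the second interval, m/s
    return (v1 - v0) / dt  # m/s^2

# Made-up hand positions (meters) at t = 0.0, 0.5, 1.0 s; one tuple per instant.
p0, p1, p2 = (0.10, 0.50, 1.20), (0.15, 0.55, 1.10), (0.25, 0.65, 0.95)
a_xyz = tuple(acceleration(*xs) for xs in zip(p0, p1, p2))
print([round(a, 6) for a in a_xyz])  # [0.2, 0.2, -0.2] m/s^2 on X, Y, Z
```

Note that raw Kinect joint positions are noisy, so in practice the trace is smoothed before differentiating twice.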

Kinect v2 for Windows: resize color frame in C#

大城市里の小女人 submitted 2019-12-23 05:45:20
Question: Does anyone know whether it's possible to decrease the Kinect color-frame resolution, and if so, how? The full-HD size is too large for my purposes. Thanks. I found this code for the full-HD frame:

private BitmapSource ToBitmap(ColorFrame frame)
{
    int width = frame.FrameDescription.Width;
    int height = frame.FrameDescription.Height;
    PixelFormat format = PixelFormats.Bgr32;
    byte[] pixels = new byte[width * height * ((PixelFormats.Bgr32.BitsPerPixel + 7) / 8)];
    if (frame.RawColorImageFormat == …
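In WPF, wrapping the BitmapSource in a TransformedBitmap with a ScaleTransform is one way to shrink the frame; the underlying pixel operation is just subsampling the BGRA buffer. A pure-Python sketch of that subsampling (nearest-neighbour, integer factor; the 4x2 test frame is made up):

```python
def downscale_bgr32(pixels, width, height, factor):
    """Nearest-neighbour downscale of a BGRA (Bgr32, 4 bytes/pixel) buffer
    by an integer factor, e.g. 1920x1080 -> 960x540 with factor=2."""
    new_w, new_h = width // factor, height // factor
    out = bytearray(new_w * new_h * 4)
    for y in range(new_h):
        for x in range(new_w):
            src = ((y * factor) * width + (x * factor)) * 4  # sampled source pixel
            dst = (y * new_w + x) * 4
            out[dst:dst + 4] = pixels[src:src + 4]
    return bytes(out), new_w, new_h

# 4x2 made-up frame, downscaled to 2x1.
frame = bytes(range(4 * 2 * 4))
small, w, h = downscale_bgr32(frame, 4, 2, 2)
print(w, h, len(small))  # 2 1 8
```

Nearest-neighbour is the cheapest option; averaging each factor-by-factor block gives smoother results at more cost.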

3D Mapping depth to RGB (Kinect OpenNI Depthmap to OpenCV RGB Cam)

北慕城南 submitted 2019-12-23 04:49:22
Question: I'm trying to map my OpenNI (1.5.4.0) Kinect for Windows depth map to an OpenCV RGB image. I have the 640x480 depth map with depth in mm, and I was trying to do the mapping as Burrus describes: http://burrus.name/index.php/Research/KinectCalibration. I skipped the distortion part, but otherwise I think I did everything:

// With the depth camera intrinsics, each pixel (x_d, y_d) of the depth camera
// can be projected to metric 3D space, with fx_d, fy_d, cx_d and cy_d the
// intrinsics of the depth camera.
P3D.at<Vec3f>(y,x) …
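For reference, the full Burrus pipeline (back-project with the depth intrinsics, apply the depth-to-RGB rigid transform, re-project with the RGB intrinsics) can be sketched in Python. The intrinsics below are placeholders, not real calibration values:

```python
def depth_pixel_to_rgb_pixel(x_d, y_d, depth_m, d_intr, rgb_intr, R, T):
    """Burrus-style mapping of one depth pixel into RGB image coordinates.
    Intrinsics are (fx, fy, cx, cy); R is a 3x3 row-major list, T a 3-vector."""
    fx_d, fy_d, cx_d, cy_d = d_intr
    # Back-project: metric 3D point in the depth camera frame.
    X = (x_d - cx_d) * depth_m / fx_d
    Y = (y_d - cy_d) * depth_m / fy_d
    Z = depth_m
    # Rigid transform into the RGB camera frame.
    Xr = R[0][0]*X + R[0][1]*Y + R[0][2]*Z + T[0]
    Yr = R[1][0]*X + R[1][1]*Y + R[1][2]*Z + T[1]
    Zr = R[2][0]*X + R[2][1]*Y + R[2][2]*Z + T[2]
    # Project with the RGB intrinsics.
    fx_rgb, fy_rgb, cx_rgb, cy_rgb = rgb_intr
    return fx_rgb * Xr / Zr + cx_rgb, fy_rgb * Yr / Zr + cy_rgb

# Sanity check: identity extrinsics and equal intrinsics map a pixel onto itself.
intr = (525.0, 525.0, 319.5, 239.5)  # placeholder Kinect-like intrinsics
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
uv = depth_pixel_to_rgb_pixel(100, 200, 1.5, intr, intr, I3, [0, 0, 0])
print(tuple(round(v, 6) for v in uv))  # (100.0, 200.0)
```

If the mapped color looks shifted, the usual suspects are swapped fx/fy, a transform applied in the wrong direction, or mixing mm and meters.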

What is the most reliable way to record a Kinect stream for later playback?

蓝咒 submitted 2019-12-23 04:10:53
Question: I have been working with Processing and Cinder to modify Kinect input on the fly. However, I would also like to record the full stream (depth + color + accelerometer values, and whatever else is in there), so I can try out different effects/treatments on the same material. Because I am still just learning Cinder, and Processing is quite slow/laggy, I have had trouble finding advice on a strategy for capturing the stream; anything (preferably in Cinder, oF, or Processing) would be …
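Whichever framework does the capturing, the storage side can be as simple as appending tagged, timestamped, length-prefixed records to a file. The record layout below is made up for illustration (Kinect Studio and OpenNI's .oni files are the ready-made alternatives); it is exercised here against an in-memory buffer:

```python
import io
import struct
import time

def write_frame(stream, kind, payload, t=None):
    """Append one record: 4-byte kind tag, 8-byte float64 timestamp,
    4-byte payload length, then the raw payload bytes."""
    t = time.time() if t is None else t
    stream.write(kind[:4].ljust(4))
    stream.write(struct.pack("<dI", t, len(payload)))
    stream.write(payload)

def read_frames(stream):
    """Yield (kind, timestamp, payload) records until EOF."""
    while True:
        head = stream.read(16)
        if len(head) < 16:
            return
        kind = head[:4]
        t, n = struct.unpack("<dI", head[4:])
        yield kind, t, stream.read(n)

buf = io.BytesIO()
write_frame(buf, b"DPTH", b"\x00" * 10, t=1.0)                      # fake depth frame
write_frame(buf, b"ACCL", struct.pack("<3f", 0.0, -9.8, 0.0), t=1.5)  # fake accel sample
buf.seek(0)
print([(k, t, len(p)) for k, t, p in read_frames(buf)])
# [(b'DPTH', 1.0, 10), (b'ACCL', 1.5, 12)]
```

Storing per-record timestamps is what makes later playback at the original rate, or effect experiments on exactly the same material, possible.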

How to perform an On-the-Fly encoding of a stream of still-pictures (video) for sending these from C# to Python? [closed]

我只是一个虾纸丫 submitted 2019-12-23 02:01:59
Question: Closed. This question is off-topic and is not currently accepting answers. Closed 3 years ago. I'm getting both depth and color frames from the Kinect 2 using the Kinect SDK (C#), and I'm sending them to Python clients using ZeroMQ.

this.shorts = new ushort[217088];      // 512 * 424
this.depthBytes = new Byte[434176];    // 512 * 424 * 2
this.colorBytes = new Byte[4147200];   // 1920 * 1080 * 4
public void …
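Those buffers total roughly 4.6 MB per frame pair, so compressing before the ZeroMQ send usually pays off. A sketch of the framing with stdlib zlib standing in for a real video codec (on the wire this would just be one `send` per packed message on a pyzmq socket):

```python
import struct
import zlib

def pack_frame(depth_bytes, color_bytes):
    """Compress both frames and length-prefix them so the receiving client
    can split the single message back apart."""
    d = zlib.compress(depth_bytes, 1)  # level 1: fast enough for a live stream
    c = zlib.compress(color_bytes, 1)
    return struct.pack("<II", len(d), len(c)) + d + c

def unpack_frame(message):
    """Inverse of pack_frame: recover the raw depth and color buffers."""
    n_d, n_c = struct.unpack("<II", message[:8])
    d = zlib.decompress(message[8:8 + n_d])
    c = zlib.decompress(message[8 + n_d:8 + n_d + n_c])
    return d, c

depth = b"\x00\x01" * 217088   # 512 * 424 ushorts, made-up content
color = b"\x80" * 4147200      # 1920 * 1080 * 4 BGRA bytes, made-up content
msg = pack_frame(depth, color)
print(len(msg) < len(depth) + len(color))  # True: repetitive frames shrink a lot
```

zlib is lossless but generic; for real-time video, a proper codec (or at least JPEG for the color frames) compresses far better.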

How to use a Visual Gesture Builder database with Unity3D Plugin?

China☆狼群 submitted 2019-12-23 01:52:49
Question: I'm trying to use a .gbd file from Visual Gesture Builder in my Unity3D scene. I have imported both plugins into Unity (Kinect.2.0.1410.19000.unitypackage and Kinect.VisualGestureBuilder.2.0.1410.19000.unitypackage). The included demos and skeleton data work fine. When trying to import my gesture database like this:

using Windows.Kinect;
using Microsoft.Kinect.VisualGestureBuilder;

void Start () {
    _Sensor = KinectSensor.GetDefault();
    // compilation error for the following line, see below …

Generate a point cloud from a given depth image (MATLAB Computer Vision System Toolbox)

我只是一个虾纸丫 submitted 2019-12-22 12:18:46
Question: I am a beginner in MATLAB and have purchased the Computer Vision System Toolbox. I have been given 400 depth images (.png files) and would like to create a point cloud for each one. The Computer Vision System Toolbox documentation has an example of converting a depth image to a point cloud (http://uk.mathworks.com/help/vision/ref/depthtopointcloud.html):

[xyzPoints,flippedDepthImage] = depthToPointCloud(depthImage,depthDevice)
depthDevice = imaq.VideoDevice('kinect',2) …
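The catch with that example is that `depthDevice` is a live Kinect object, which saved PNGs don't have; the alternative is to back-project each pixel yourself with the pinhole model. A Python sketch of that conversion, with placeholder intrinsics (real values come from your sensor's calibration, not from this sketch):

```python
def depth_image_to_points(depth, fx, fy, cx, cy, scale=0.001):
    """Pinhole back-projection of a depth image (list of rows, raw units)
    into an Nx3 list of metric points. scale converts raw units to meters
    (0.001 for millimetre-valued PNGs)."""
    points = []
    for v, row in enumerate(depth):
        for u, d in enumerate(row):
            if d == 0:  # 0 means "no reading" on Kinect depth maps
                continue
            z = d * scale
            points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points

tiny = [[0, 1000],
        [2000, 0]]  # made-up 2x2 depth map in millimetres
pts = depth_image_to_points(tiny, fx=525.0, fy=525.0, cx=0.5, cy=0.5)
print(len(pts))  # 2 valid pixels -> 2 points
```

For 400 full-size images the same loop is normally vectorized (e.g. with NumPy meshgrids) rather than written per pixel.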

Silhouette extraction from depth

牧云@^-^@ submitted 2019-12-22 10:57:55
Question: Hello, I have a depth image and want to extract the person's (human) silhouette from it. I used pixel thresholding like this:

for i = 1:240
    for j = 1:320
        if b(i,j) > 2400 || b(i,j) < 1900
            c(i,j) = 5000;
        else
            c(i,j) = b(i,j);
        end
    end
end

but some parts are still left over. Is there any way to remove them? (Images: original image, extracted silhouette.) Answer 1: According to this thread, depth-map boundaries can be found based on the direction of estimated surface normals. To estimate the direction of the surface normals, you can …
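A complementary cleanup to the surface-normals idea: after thresholding, keep only the largest connected region, which discards the stray leftover fragments. A pure-Python sketch using the question's thresholds (in MATLAB, `bwlabel`/`bwareafilt` from the Image Processing Toolbox would do the same; the 4x4 depth map is made up):

```python
from collections import deque

def silhouette_mask(depth, near=1900, far=2400):
    """Binary mask of pixels whose raw depth lies in [near, far], i.e. the
    band the person occupies (thresholds taken from the question)."""
    return [[1 if near <= d <= far else 0 for d in row] for row in depth]

def largest_component(mask):
    """Keep only the largest 4-connected blob, dropping isolated noise."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    best = []
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and not seen[sy][sx]:
                blob, q = [], deque([(sy, sx)])
                seen[sy][sx] = True
                while q:  # breadth-first flood fill of one blob
                    y, x = q.popleft()
                    blob.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if len(blob) > len(best):
                    best = blob
    out = [[0] * w for _ in range(h)]
    for y, x in best:
        out[y][x] = 1
    return out

depth = [[3000, 2000, 2000, 3000],
         [3000, 2000, 2000, 2100],   # person blob in the middle
         [3000, 3000, 3000, 3000],
         [2050, 3000, 3000, 3000]]   # isolated noisy pixel, bottom-left
clean = largest_component(silhouette_mask(depth))
print(sum(map(sum, clean)))  # 5: the blob survives, the lone noise pixel is gone
```

Morphological opening/closing on the mask is the other standard cleanup before or instead of the component filter.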