Vectorizing the Kinect real-world coordinate processing algorithm for speed
Question: I recently started working with the Kinect V2 on Linux using pylibfreenect2. When I first plotted the depth frame data in a scatter plot, I was disappointed to see that none of the depth pixels seemed to be in the correct location.

[Image: side view of a room — notice that the ceiling appears curved.]

I did some research and realized there is some simple trigonometry involved in the conversion. To test, I started with a pre-written function in pylibfreenect2 that accepts a column, a row, and a depth pixel
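For context, the per-pixel conversion the question refers to is standard pinhole back-projection, and it vectorizes naturally with NumPy broadcasting. The sketch below is a minimal illustration, not the pylibfreenect2 implementation itself; the intrinsics (`fx`, `fy`, `cx`, `cy`) are placeholder values, which on a real device would come from the sensor's IR camera parameters:

```python
import numpy as np

# Placeholder intrinsics (assumed values for illustration only;
# real ones come from the Kinect's IR camera calibration)
fx, fy = 365.456, 365.456
cx, cy = 254.878, 205.395

def depth_to_xyz(depth):
    """Vectorized pinhole back-projection: depth frame (mm) -> XYZ (m)."""
    rows, cols = depth.shape
    # Pixel coordinate grids, one entry per depth pixel
    c, r = np.meshgrid(np.arange(cols), np.arange(rows))
    z = depth / 1000.0            # millimeters -> meters
    x = (c - cx) * z / fx         # horizontal offset scaled by depth
    y = (r - cy) * z / fy         # vertical offset scaled by depth
    return np.dstack((x, y, z))   # shape (rows, cols, 3)

# A synthetic flat wall one meter away, at the Kinect V2 depth resolution
depth = np.full((424, 512), 1000.0)
xyz = depth_to_xyz(depth)
```

Because every pixel is handled in a handful of whole-array operations, this runs orders of magnitude faster than calling a per-pixel conversion function in a Python loop.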