google-project-tango

Projecting a Tango 3D point to screen (Google Project Tango)

扶醉桌前 submitted on 2019-12-02 09:58:13
Project Tango provides a point cloud. How can you get the position, in pixels, of a 3D point from the point cloud (which is in meters)? I tried using the projection matrix, but I get very small values (0.5, 1.3, etc.) instead of, say, 1234, 324 (in pixels). This is the code I have tried:

    // Get the current projection matrix
    Matrix4 projMatrix = mRenderer.getCurrentCamera().getProjectionMatrix();

    // Get all the points in the point cloud and store them as 3D points
    FloatBuffer pointsBuffer = mPointCloudManager.updateAndGetLatestPointCloudRenderBuffer().floatBuffer;
    Vector3[] points3D = new Vector3[pointsBuffer.capacity()
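Values like 0.5 or 1.3 are a hint that the projection matrix is producing normalized device coordinates rather than pixels. As a minimal sketch (not the thread's accepted answer), the pinhole model with the color camera intrinsics gives pixel coordinates directly; fx, fy, cx, cy below are assumed to be read from TangoCameraIntrinsics:

    // Sketch: project a camera-space point (x, y, z), in meters, to pixel
    // coordinates using the Tango color camera intrinsics.
    public static double[] pointToPixel(double x, double y, double z,
                                        double fx, double fy, double cx, double cy) {
        // Pinhole projection: divide by depth, scale by the focal length
        // in pixels, then shift by the principal point.
        double u = fx * (x / z) + cx;
        double v = fy * (y / z) + cy;
        return new double[] { u, v };
    }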

Convert device pose to camera pose

我的未来我决定 submitted on 2019-12-02 09:46:00
I'm using the camera intrinsics (fx, fy, cx, cy, width, height) to store a depth image from the TangoXyzIjData.xyz buffer. For each point of xyz I calculate the corresponding image point and store its z value:

    x' = (fx * x) / z + cx
    y' = (fy * y) / z + cy
    depthImage[x'][y'] = z

Now I would like to store the corresponding pose data as well. I'm using the timestamp from TangoXyzIjData.timestamp and the following function:

    getPoseAtTime(double timestamp, TangoCoordinateFramePair framePair)

with the frame pair:

    new TangoCoordinateFramePair(TangoPoseData.COORDINATE_FRAME_START_OF_SERVICE, TangoPoseData
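A minimal sketch of that projection loop, assuming TangoXyzIjData.xyz holds packed (x, y, z) triples in the depth camera frame and the intrinsics come from TangoCameraIntrinsics:

    import java.nio.FloatBuffer;

    // Sketch: rasterize a Tango point cloud into a dense depth image with
    // the pinhole model above. Points that project outside the image, or
    // that have non-positive depth, are skipped.
    public static float[][] buildDepthImage(FloatBuffer xyz, int width, int height,
                                            double fx, double fy, double cx, double cy) {
        float[][] depthImage = new float[height][width]; // 0 = no measurement
        for (int i = 0; i + 2 < xyz.capacity(); i += 3) {
            float x = xyz.get(i), y = xyz.get(i + 1), z = xyz.get(i + 2);
            if (z <= 0f) continue;
            int u = (int) Math.round(fx * x / z + cx);
            int v = (int) Math.round(fy * y / z + cy);
            if (u < 0 || u >= width || v < 0 || v >= height) continue;
            depthImage[v][u] = z;
        }
        return depthImage;
    }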

How to take a high-res picture while sensing depth using Project Tango

孤街醉人 submitted on 2019-12-02 08:51:31
How do you take a picture using Project Tango? I read this answer: Using the onFrameAvailable() in Jacobi Google Tango API, which works for grabbing a frame, but the picture quality is not great. Is there any takePicture equivalent? Note that the Java API callback

    public void onFrameAvailable(int cameraId) {
        if (cameraId == TangoCameraIntrinsics.TANGO_CAMERA_COLOR) {
            mTangoCameraPreview.onFrameAvailable();
        }
    }

does not provide RGB data. If I use the Android camera to take the picture, Tango cannot sense depth, so I would have to use TangoCameraPreview. Thanks.

You don't have to use TangoCameraPreview to get frames in Java.
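One route, sketched here under assumptions (not necessarily what that answer goes on to describe): grab the raw color frame bytes and compress them yourself with Android's YuvImage. How the byte buffer is obtained is assumed, as is the NV21 layout, which is the format Tango color frames commonly use:

    import android.graphics.ImageFormat;
    import android.graphics.Rect;
    import android.graphics.YuvImage;
    import java.io.FileOutputStream;
    import java.io.IOException;

    // Sketch: save one NV21 color frame as a JPEG. 'nv21', 'width' and
    // 'height' are assumed to come from the Tango color frame callback.
    public static void saveFrameAsJpeg(byte[] nv21, int width, int height,
                                       String path) throws IOException {
        YuvImage yuv = new YuvImage(nv21, ImageFormat.NV21, width, height, null);
        try (FileOutputStream out = new FileOutputStream(path)) {
            // Quality 90 keeps more detail than preview-sized frames.
            yuv.compressToJpeg(new Rect(0, 0, width, height), 90, out);
        }
    }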

Tango raw depth data - update? [closed]

痴心易碎 submitted on 2019-12-02 01:57:14
I bought the Lenovo Phab 2 Pro, which supports Google's Tango project. With this setup it is possible to obtain depth data in the form of a point cloud. But this is not what I need. I would prefer to obtain the data in a more raw format, like the Kinect provides, where each pixel of the image plane is assigned a depth value. My question therefore: can the depth data of the Phab 2 (or any Tango device) be obtained in such a raw format, where each pixel is assigned a depth value? My research led me to countless unsolved cases (typing "tango raw data" or similar into Google,
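The point cloud can be reprojected into exactly that per-pixel form with the depth camera intrinsics, as in the projection loop sketched earlier. One detail worth adding is a z-buffer test, so that when two points land on the same pixel the nearer one wins (a sketch, with illustrative names):

    // Sketch: z-buffered variant of the reprojection above. When several
    // cloud points fall on the same pixel, keep the one closest to the camera.
    static void splat(float[][] depthImage, float x, float y, float z,
                      double fx, double fy, double cx, double cy) {
        if (z <= 0f) return;
        int u = (int) Math.round(fx * x / z + cx);
        int v = (int) Math.round(fy * y / z + cy);
        if (u < 0 || v < 0 || v >= depthImage.length || u >= depthImage[0].length) return;
        float current = depthImage[v][u];
        if (current == 0f || z < current) {
            depthImage[v][u] = z; // nearer point wins
        }
    }

Note that the resulting image stays sparse, since a Tango cloud has far fewer points than the image has pixels, so a Kinect-like dense map still needs hole filling or upsampling.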

MediaCodec.dequeueOutputBuffer taking very long when encoding h264 on Android

流过昼夜 submitted on 2019-12-02 01:21:22
Question: I'm trying to encode h264 video on Android for real-time video streaming using MediaCodec, but dequeueOutputBuffer keeps taking very long (actually it's very fast sometimes but very slow at other times; see the log output below). I've seen it take up to 200 ms for the output buffer to be ready. Is there something I'm doing wrong in my code, or do you think this is an issue with OMX.Nvidia.h264.encoder? Maybe I need to downsample the image from 1280x720 to something smaller? Or maybe I need
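For reference, a sketch of a common pattern, not a diagnosis of the OMX.Nvidia encoder: hardware encoders pipeline several frames of latency, so blocking in dequeueOutputBuffer right after queuing one input inflates the apparent cost. Polling with a zero timeout and feeding more input between polls keeps the streaming thread moving:

    import android.media.MediaCodec;
    import android.media.MediaFormat;
    import java.nio.ByteBuffer;

    // Sketch: drain whatever output the encoder already has ready, without
    // blocking the streaming thread. 'encoder' is a configured, started codec.
    static void drainEncoder(MediaCodec encoder) {
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        while (true) {
            // Timeout 0 us: return immediately instead of blocking until ready.
            int index = encoder.dequeueOutputBuffer(info, 0);
            if (index == MediaCodec.INFO_TRY_AGAIN_LATER) {
                break; // nothing ready yet; keep feeding input and poll again
            } else if (index == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
                MediaFormat newFormat = encoder.getOutputFormat(); // e.g. hand to a muxer
            } else if (index >= 0) {
                ByteBuffer encoded = encoder.getOutputBuffer(index);
                // ... send info.size bytes starting at info.offset to the network ...
                encoder.releaseOutputBuffer(index, false);
            }
        }
    }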

Tango SDK Marker Detection (Release May 2017 Hopak): only 4 bits in AR tags?

谁说我不能喝 submitted on 2019-12-01 12:35:35
With the marker detection example (https://github.com/googlesamples/tango-examples-c/tree/master/cpp_marker_detection_example) I tried to detect AR tags with IDs higher than 16. For that, I generated Alvar markers using http://wiki.ros.org/ar_track_alvar (I have read in a comment in the Google example source code that these tags are Alvar). Is the SDK fixed to detect a maximum of only 16 IDs (4 bits)? If yes, does someone know if this will change in the future? I am not interested in QR codes, since they don't seem to be robustly detected from perspective angles.

Source: https://stackoverflow.com

Exactly how do we compute timestamp differentials?

匆匆过客 submitted on 2019-12-01 06:06:48
Question: We get timestamps as a double value for pose, picture, and point data, and they aren't always aligned. How do I calculate the temporal distance between two timestamps? Yes, I know how to subtract two doubles, but I'm not at all sure how the delta corresponds to time.

Answer 1: I have some interesting timestamp data that sheds light on your question, without exactly answering it. I have been trying to match up depth frames with image frames, just like a lot of people posting under this Tango tag.
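Tango timestamps are doubles expressed in seconds, so the difference of two timestamps is directly a duration in seconds. A minimal sketch of the matching task the answer describes, with illustrative names: for a given depth timestamp, find the color frame whose timestamp is closest:

    // Sketch: given a depth frame timestamp (seconds), return the index of
    // the closest color frame timestamp. Subtracting two timestamps gives
    // a time interval in seconds.
    static int closestFrame(double depthTimestamp, double[] colorTimestamps) {
        int best = -1;
        double bestDelta = Double.MAX_VALUE;
        for (int i = 0; i < colorTimestamps.length; i++) {
            double delta = Math.abs(colorTimestamps[i] - depthTimestamp);
            if (delta < bestDelta) {
                bestDelta = delta;
                best = i;
            }
        }
        return best; // caller may reject matches whose bestDelta exceeds a threshold
    }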

Project Tango: Converting between coordinate systems and merging point clouds

只愿长相守 submitted on 2019-12-01 01:18:00
I am trying to convert point clouds sampled and stored in XYZij data (which, according to the documentation, stores data in camera space) into a world coordinate system so that they can be merged. The frame pair I use for the Tango listener has COORDINATE_FRAME_START_OF_SERVICE as the base frame and COORDINATE_FRAME_DEVICE as the target frame. This is the way I implement the transformation:

1. Retrieve the rotation quaternion from TangoPoseData.getRotationAsFloats() as q_r, and the point position from XYZij as p.
2. Apply the following rotation, where q_mult is a helper method computing the Hamilton product
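A sketch of that step, assuming the (x, y, z, w) ordering of TangoPoseData and with one possible implementation of the q_mult helper the poster mentions: treat the point as a pure quaternion (p, 0), conjugate it by the pose rotation, then add the pose translation to land in the start-of-service frame:

    // Sketch: rotate a camera-space point p by the pose quaternion q_r and
    // translate by the pose position t, yielding world coordinates.
    static float[] transformPoint(float[] p, float[] qr, float[] t) {
        float[] pQuat = { p[0], p[1], p[2], 0f };           // pure quaternion (p, 0)
        float[] qrInv = { -qr[0], -qr[1], -qr[2], qr[3] };  // conjugate = inverse for unit q
        float[] rotated = q_mult(q_mult(qr, pQuat), qrInv); // q * (p, 0) * q^-1
        return new float[] {
            rotated[0] + t[0],
            rotated[1] + t[1],
            rotated[2] + t[2]
        };
    }

    // Hamilton product of quaternions a and b, both stored as (x, y, z, w).
    static float[] q_mult(float[] a, float[] b) {
        return new float[] {
            a[3] * b[0] + a[0] * b[3] + a[1] * b[2] - a[2] * b[1],
            a[3] * b[1] - a[0] * b[2] + a[1] * b[3] + a[2] * b[0],
            a[3] * b[2] + a[0] * b[1] - a[1] * b[0] + a[2] * b[3],
            a[3] * b[3] - a[0] * b[0] - a[1] * b[1] - a[2] * b[2]
        };
    }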