google-project-tango

How is it possible to get tracked features from the Tango APIs used for motion tracking?

寵の児 submitted on 2020-01-03 04:49:07
Question: As shown in the Project Tango GTC video, some local features are extracted and tracked for motion estimation, which is then fused with accelerometer data. Since any developer may need to track features to develop his/her apps, I was wondering if there is a way to get those features through the APIs. Although it is possible to extract some points and retrieve their flow using the estimated 6DOF pose returned by the APIs, it adds extra overhead. Another issue with this approach is that the pure
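The internal feature tracks are not exposed by the public APIs (as the question suspects); the usual workaround is exactly what the question sketches: extract your own points (e.g. with OpenCV) and query the fused 6DOF pose for each frame's timestamp. A minimal sketch of that pose query with the Tango C API, assuming the service is already connected:

#include <tango_client_api.h>

// Fetch the start-of-service -> device pose for a given camera-frame timestamp,
// to be combined with a caller-supplied feature extractor.
bool GetDevicePoseAtTime(double image_timestamp, TangoPoseData* pose) {
  TangoCoordinateFramePair frame_pair;
  frame_pair.base = TANGO_COORDINATE_FRAME_START_OF_SERVICE;
  frame_pair.target = TANGO_COORDINATE_FRAME_DEVICE;
  if (TangoService_getPoseAtTime(image_timestamp, frame_pair, pose) != TANGO_SUCCESS) {
    return false;
  }
  return pose->status_code == TANGO_POSE_VALID;
}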

Relocation of an ADF in Learning mode not working?

烈酒焚心 submitted on 2020-01-03 02:27:49
Question: I see strange behaviour when trying to append to an existing ADF: I load an ADF which was just recorded and the device can easily relocalize against it. Once I load the same ADF with learning mode on (in order to extend the existing ADF), the device cannot relocalize against it. It's easy to reproduce (see the link to the video): record an ADF; load it and make sure the device can relocalize; load it again with learning mode "on" and the device can no longer relocalize against it. I tried the explorer-app the java
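For reference, the configuration that reproduces this is straightforward. A minimal sketch with the Tango C API (the Java API has the matching TangoConfig.KEY_BOOLEAN_LEARNINGMODE and KEY_STRING_AREADESCRIPTION options), assuming a valid ADF UUID; error handling is trimmed:

#include <tango_client_api.h>

// Build a config that loads an existing ADF while keeping learning mode on,
// so the loaded area description can be extended.
TangoConfig MakeAppendConfig(const char* adf_uuid) {
  TangoConfig config = TangoService_getConfig(TANGO_CONFIG_DEFAULT);
  TangoConfig_setBool(config, "config_enable_learning_mode", true);
  TangoConfig_setString(config, "config_load_area_description_UUID", adf_uuid);
  return config;  // pass to TangoService_connect()
}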

Unity plugin using OpenGL for Project Tango

主宰稳场 submitted on 2020-01-02 10:06:38
Question: I am developing an AR app for Project Tango using Unity. One of the things I am trying to accomplish is getting the frame image from the device while using the AR example provided with the SDK: https://github.com/googlesamples/tango-examples-unity The problem is that it uses IExperimentalTangoVideoOverlay, which doesn't return the frame buffer (the image is converted from YUV to RGB in the shader). I've registered to the OnExperimentalTangoImageAvailable event and called an
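For context, the conversion the overlay shader performs is the standard YV12 (YUV 4:2:0) to RGB mapping; if the raw buffer can be obtained, the same math can be applied on the CPU. A minimal sketch in C++ (in Unity the equivalent would be written in C# or a compute shader), assuming a full-resolution Y plane followed by quarter-resolution V then U planes with a chroma stride of stride/2:

#include <algorithm>
#include <cstdint>

inline uint8_t Clamp8(int v) { return static_cast<uint8_t>(std::min(255, std::max(0, v))); }

// Convert one YV12 frame to packed 24-bit RGB.
void Yv12ToRgb(const uint8_t* yv12, int width, int height, int stride, uint8_t* rgb) {
  const uint8_t* y_plane = yv12;
  const uint8_t* v_plane = y_plane + stride * height;              // Cr
  const uint8_t* u_plane = v_plane + (stride / 2) * (height / 2);  // Cb
  for (int r = 0; r < height; ++r) {
    for (int c = 0; c < width; ++c) {
      const int y = y_plane[r * stride + c];
      const int v = v_plane[(r / 2) * (stride / 2) + (c / 2)] - 128;
      const int u = u_plane[(r / 2) * (stride / 2) + (c / 2)] - 128;
      uint8_t* out = rgb + (r * width + c) * 3;
      out[0] = Clamp8(y + static_cast<int>(1.402f * v));                  // R
      out[1] = Clamp8(y - static_cast<int>(0.344f * u + 0.714f * v));     // G
      out[2] = Clamp8(y + static_cast<int>(1.772f * u));                  // B
    }
  }
}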

Occlusion in AR

倾然丶 夕夏残阳落幕 submitted on 2020-01-02 08:33:02
Question: I'm trying to make virtual objects hidden when a real-world object is positioned in front of them, but I'm not having any luck with it. I've been playing with the occlusion settings in Unity, but the virtual objects do not become hidden. Answer 1: You could fix this problem by building your augmented reality scene with the experimental meshing enabled. Here is an example of the concept: https://www.youtube.com/watch?v=sn3bhnPlfcw You could then ray cast from the camera to the virtual object and turn off the
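A language-neutral sketch of the test the answer describes (in Unity this would typically be a Physics.Raycast against the collider generated from the mesh, toggling the object's Renderer): the object is treated as occluded whenever the reconstructed real-world geometry is hit closer to the camera than the object along the camera-to-object ray. The raycast hook here is hypothetical and stands in for whatever collider the meshing produces:

#include <cmath>
#include <functional>

struct Vec3 { float x, y, z; };

float Distance(const Vec3& a, const Vec3& b) {
  const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
  return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// raycast_real_world(origin, target, &hit_distance) should return true when the
// reconstructed mesh is hit between origin and target.
bool IsOccluded(const Vec3& camera_pos, const Vec3& object_pos,
                const std::function<bool(const Vec3&, const Vec3&, float*)>& raycast_real_world) {
  float hit_distance = 0.0f;
  if (!raycast_real_world(camera_pos, object_pos, &hit_distance)) return false;
  return hit_distance < Distance(camera_pos, object_pos);  // a real surface is in front
}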

How do I begin working on the Project Tango?

▼魔方 西西 submitted on 2020-01-01 05:54:39
Question: After a couple of weeks I have been unable to get the Android set of tools to a functioning level with C++, and I have now been given the opportunity of using a Project Tango. Though that sounds awesome and wondrous and would open a world of opportunity for working with VR... I feel like I am stuck at step -4. My understanding is limited, so bear with me. I stumbled upon PCL, built for running algorithms on point cloud data; it was open source and appeared like a wonderful solution, it

Save frame from TangoService_connectOnFrameAvailable

大憨熊 submitted on 2019-12-31 05:52:30
Question: How can I save a frame via TangoService_connectOnFrameAvailable() and display it correctly on my computer? As this reference page mentions, the pixels are stored in the HAL_PIXEL_FORMAT_YV12 format. In my callback function for TangoService_connectOnFrameAvailable, I save the frame like this: static void onColorFrameAvailable(void* context, TangoCameraId id, const TangoImageBuffer* buffer) { ... std::ofstream fp; fp.open(imagefile, std::ios::out | std::ios::binary); int offset = 0; for(int i
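A minimal sketch of the plane layout involved, assuming the chroma stride is stride/2 (which holds for the 1280-wide Tango color frames): writing the planes row by row without the stride padding produces a file of exactly width * height * 3/2 bytes in Y, then V (Cr), then U (Cb) order, which is straightforward to reinterpret on a computer:

#include <tango_client_api.h>
#include <cstdint>
#include <fstream>

// Write one YV12 frame to disk with the stride padding stripped.
void SaveYv12(const TangoImageBuffer* buffer, const char* path) {
  std::ofstream fp(path, std::ios::out | std::ios::binary);
  const uint8_t* y_plane = buffer->data;
  const uint8_t* v_plane = y_plane + buffer->stride * buffer->height;
  const uint8_t* u_plane = v_plane + (buffer->stride / 2) * (buffer->height / 2);
  for (uint32_t r = 0; r < buffer->height; ++r)       // Y plane
    fp.write(reinterpret_cast<const char*>(y_plane + r * buffer->stride), buffer->width);
  for (uint32_t r = 0; r < buffer->height / 2; ++r)   // V (Cr) plane
    fp.write(reinterpret_cast<const char*>(v_plane + r * (buffer->stride / 2)), buffer->width / 2);
  for (uint32_t r = 0; r < buffer->height / 2; ++r)   // U (Cb) plane
    fp.write(reinterpret_cast<const char*>(u_plane + r * (buffer->stride / 2)), buffer->width / 2);
}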

(Project Tango) Rotation and translation of point clouds with area learning

时光毁灭记忆、已成空白 submitted on 2019-12-31 05:31:24
Question: I have a Java application that, when I press a button, records point cloud XYZ coordinates together with the corresponding pose. What I want is to pick an object, record a point cloud from the front and one from the back, then merge the two clouds. Obviously, to get a reasonable result I need to translate and rotate one or both of the clouds I recorded. But I'm new to Project Tango and there are some things I must be missing. I have read about this in this post. There, @Jason Guo talks about those matrices:
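The core step is to bring each cloud from the frame it was captured in into a common frame (e.g. start of service) using the pose recorded with it; once both clouds are in the same frame they can simply be concatenated. A minimal sketch of that rotation-plus-translation in C++ (the same math carries over directly to Java), assuming the quaternion is given as [x, y, z, w]; the fixed device-to-depth-camera extrinsics, which the matrices mentioned in the referenced post cover, still have to be applied on top of this:

#include <array>
#include <vector>

using Point = std::array<double, 3>;

// Apply p_world = R(q) * p + t to every point of a cloud.
std::vector<Point> TransformCloud(const std::vector<Point>& cloud,
                                  const std::array<double, 4>& q,   // x, y, z, w
                                  const std::array<double, 3>& t) {
  const double x = q[0], y = q[1], z = q[2], w = q[3];
  // Rotation matrix from the unit quaternion.
  const double R[3][3] = {
      {1 - 2 * (y * y + z * z), 2 * (x * y - z * w),     2 * (x * z + y * w)},
      {2 * (x * y + z * w),     1 - 2 * (x * x + z * z), 2 * (y * z - x * w)},
      {2 * (x * z - y * w),     2 * (y * z + x * w),     1 - 2 * (x * x + y * y)}};
  std::vector<Point> out;
  out.reserve(cloud.size());
  for (const Point& p : cloud) {
    out.push_back({R[0][0] * p[0] + R[0][1] * p[1] + R[0][2] * p[2] + t[0],
                   R[1][0] * p[0] + R[1][1] * p[1] + R[1][2] * p[2] + t[1],
                   R[2][0] * p[0] + R[2][1] * p[1] + R[2][2] * p[2] + t[2]});
  }
  return out;
}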

ROS Rviz visualization of Tango Pose data

杀马特。学长 韩版系。学妹 submitted on 2019-12-25 17:07:39
Question: We have modified sample code for the C API so that Tango pose data (position (x, y, z) and quaternion (x, y, z, w)) is published as PoseStamped ROS messages. We are attempting to visualize the pose using Rviz. The pose data appears to need some transformation, as the rotation of the Rviz arrow does not match the behavior of the Tango when we move it around. We realize that in the sample code, before visualization on the Tango screen, the pose data is transformed into a 4x4 pose matrix (function
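For reference, a minimal sketch of the publishing side, which copies the raw start-of-service -> device pose into a PoseStamped message; the frame_id is a hypothetical name, and the convention mismatch the question describes still has to be corrected with an additional fixed rotation between Tango's axes and ROS's x-forward, z-up convention (e.g. via a static transform), which is intentionally left out here:

#include <tango_client_api.h>
#include <ros/ros.h>
#include <geometry_msgs/PoseStamped.h>

// Publish a Tango pose as a PoseStamped message without any frame correction.
void PublishTangoPose(const TangoPoseData* pose, ros::Publisher* pub) {
  geometry_msgs::PoseStamped msg;
  msg.header.stamp = ros::Time::now();
  msg.header.frame_id = "start_of_service";  // hypothetical frame name
  msg.pose.position.x = pose->translation[0];
  msg.pose.position.y = pose->translation[1];
  msg.pose.position.z = pose->translation[2];
  msg.pose.orientation.x = pose->orientation[0];
  msg.pose.orientation.y = pose->orientation[1];
  msg.pose.orientation.z = pose->orientation[2];
  msg.pose.orientation.w = pose->orientation[3];
  pub->publish(msg);
}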

ISSUE: Running PCL Library with Android Project

丶灬走出姿态 submitted on 2019-12-25 07:16:16
Question: I have compiled the PCL-SuperBuild folder as described in these links: Link 1: https://hcteq.wordpress.com/2014/07/14/compiling-pcl-for-android-in-windows-cmake-gui/# Link 2: http://www.hirotakaster.com/weblog/how-to-build-pcl-for-android-memo/ It completed successfully. However, I don't know how to use the library in my project; can anyone elaborate on this? I'm trying to run the code in https://github.com/roomplan/tango-examples-java.git, the Point Cloud with PCL one, and I have tried to write
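Once the SuperBuild's headers and static libraries (e.g. libpcl_common.a) are on the NDK include and link paths of the project's native part, a tiny smoke test is a quick way to confirm PCL is usable from the JNI code of a Java project like the Point Cloud example. A minimal sketch, assuming only pcl_common is linked: build a small cloud and compute its centroid:

#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/common/centroid.h>

// Returns true if PCL is linked correctly and computes the expected centroid.
bool PclSmokeTest() {
  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>());
  cloud->push_back(pcl::PointXYZ(0.0f, 0.0f, 1.0f));
  cloud->push_back(pcl::PointXYZ(1.0f, 0.0f, 1.0f));
  cloud->push_back(pcl::PointXYZ(0.0f, 1.0f, 1.0f));
  Eigen::Vector4f centroid;
  pcl::compute3DCentroid(*cloud, centroid);
  return centroid[2] > 0.99f && centroid[2] < 1.01f;  // expect z == 1.0
}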