augmented-reality

ARCore: Model Showing Above Face

旧时模样 submitted on 2019-12-12 09:55:34
Question: I am trying to follow a guide on how to create a custom 3D model for Augmented Faces, but for some reason my model is showing above my head, literally. My model is at 0 on the x, y, and z axes as well. For code I'm using this example project provided by Google:

Answer 1: Check the position of your model's pivot point. The 3D model's pivot must be near the pivot of Google's face mesh used in Augmented Faces. There are two ways to correct it: fix the pivot's position in a 3D application
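
The second correction is cut off above. Purely as an illustration (not the answer author's code, and assuming the ARCore Sceneform Augmented Faces API, with scene, face, and faceModel supplied by the caller), a pivot mismatch can also be compensated in code by parenting the renderable to an offset node:

    // Sketch only: compensates a misplaced model pivot in code instead of re-exporting
    // the asset. Assumes the Sceneform UX Augmented Faces API; 'scene', 'face' and
    // 'faceModel' are placeholders supplied by the caller.
    import com.google.ar.core.AugmentedFace;
    import com.google.ar.sceneform.Node;
    import com.google.ar.sceneform.Scene;
    import com.google.ar.sceneform.math.Vector3;
    import com.google.ar.sceneform.rendering.ModelRenderable;
    import com.google.ar.sceneform.ux.AugmentedFaceNode;

    final class FacePivotFix {
        /** Attach the custom model through an offset node so the pivot mismatch can be corrected. */
        static Node attachWithOffset(Scene scene, AugmentedFace face, ModelRenderable faceModel) {
            AugmentedFaceNode faceNode = new AugmentedFaceNode(face);
            faceNode.setParent(scene);

            Node offsetNode = new Node();
            offsetNode.setParent(faceNode);
            // Pull the model back toward the face mesh center; the exact value depends on how
            // far the model's pivot sits from its geometry (placeholder value, in meters).
            offsetNode.setLocalPosition(new Vector3(0f, -0.25f, 0f));
            offsetNode.setRenderable(faceModel);
            return offsetNode;
        }
    }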

AR reference image plane is not positioned properly in iOS Swift?

烈酒焚心 submitted on 2019-12-12 09:54:36
Question: I am showing business card profile information using an AR reference image. When the reference image is detected, it shows the company CEO's details, address, photo, team member information, and so on. Once the image is detected, the plane moves to the right using a runAction animation. My problem is that after the AR reference image is detected, the plane's position is not stable and it drifts around. How do I fix the plane's position to the AR reference image? Here is a screenshot of my result: [screenshot] Here is the code

Marker-based initial positioning with ARCore/ARKit?

最后都变了- submitted on 2019-12-12 08:10:34
Question: Problem situation: creating AR visualizations always at the same place (on a table) in a convenient way. We don't want the customer to place the objects themselves, as in countless ARCore/ARKit examples. I'm wondering if there is a way to implement these steps: detect a marker on the table, use the position of the marker as the initial position of the AR visualization, and then continue with SLAM tracking. I know there is something like a marker-detection API included in the latest build of the
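
The marker-detection API mentioned (cut off above) is presumably ARCore's Augmented Images. As a rough Java sketch of the idea (not from the question; session, markerBitmap, and the image name are assumptions): register the printed table marker, anchor the visualization at its pose once it is tracked, and let normal SLAM tracking keep that anchor stable afterwards.

    // Sketch assuming ARCore's Augmented Images API.
    import android.graphics.Bitmap;
    import com.google.ar.core.Anchor;
    import com.google.ar.core.AugmentedImage;
    import com.google.ar.core.AugmentedImageDatabase;
    import com.google.ar.core.Config;
    import com.google.ar.core.Frame;
    import com.google.ar.core.Session;
    import com.google.ar.core.TrackingState;

    final class MarkerAnchoring {

        /** Call before Session.resume(); 'markerBitmap' is the printed marker image. */
        static void configure(Session session, Bitmap markerBitmap) {
            AugmentedImageDatabase db = new AugmentedImageDatabase(session);
            db.addImage("table_marker", markerBitmap);          // name is arbitrary
            Config config = new Config(session);
            config.setAugmentedImageDatabase(db);
            session.configure(config);
        }

        /** Call once per frame; returns an anchor at the marker once it is tracked, else null. */
        static Anchor findMarkerAnchor(Frame frame) {
            for (AugmentedImage image : frame.getUpdatedTrackables(AugmentedImage.class)) {
                if (image.getTrackingState() == TrackingState.TRACKING
                        && "table_marker".equals(image.getName())) {
                    return image.createAnchor(image.getCenterPose());
                }
            }
            return null;
        }
    }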

Local Area Description: Learning

不羁的心 submitted on 2019-12-12 04:40:39
Question: I've just started learning about Google Tango and I am having some trouble understanding how to implement area description learning. I have followed one of the how-to guides from the documentation, the one on placing virtual objects in AR, and I wanted the app to remember the places where those kittens were placed. I will attach the scene from Unity and a script where I've tried to call the SaveCurrent method of AreaDescription. The scene from Unity and the following code
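
The question targets the Tango Unity SDK (AreaDescription.SaveCurrent). Purely as an illustration of the overall flow, here is a minimal sketch using the (now retired) Tango Java client instead, assuming a connected Tango instance: enable learning mode, let the user walk the space, then save the Area Description File (ADF) and keep its UUID for later localization.

    // Illustration only; not the Unity API used in the question.
    import com.google.atap.tangoservice.Tango;
    import com.google.atap.tangoservice.TangoConfig;

    final class AreaLearningSketch {

        /** Build a config that records a new ADF while tracking. */
        static TangoConfig learningConfig(Tango tango) {
            TangoConfig config = tango.getConfig(TangoConfig.CONFIG_TYPE_DEFAULT);
            config.putBoolean(TangoConfig.KEY_BOOLEAN_LEARNINGMODE, true);
            return config;
        }

        /** Call after the area has been walked; returns the UUID of the saved ADF. */
        static String saveAdf(Tango tango) {
            return tango.saveAreaDescription();
        }
    }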

How do orthographic and perspective camera models in structure from motion differ from each other?

时间秒杀一切 submitted on 2019-12-12 02:45:44
Question: Under the assumption that the camera model is orthographic, how do orthographic and perspective camera models work in structure from motion? Also, how do these techniques differ from each other?

Answer 1: Say you have a static scene and a moving camera (or, equivalently, a rigidly moving scene and a static camera) and you want to reconstruct the scene geometry and the camera motion from two or more images. The reconstruction is usually based on obtaining point correspondences; that is, you have some equations which
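
For orientation, the two projection models being contrasted can be written as follows (standard textbook formulation, not quoted from the original answer):

    Perspective:
        \lambda \begin{pmatrix} u \\ v \\ 1 \end{pmatrix}
            = K\,[\,R \mid t\,] \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix}

    Orthographic (affine):
        \begin{pmatrix} u \\ v \end{pmatrix}
            = A \begin{pmatrix} X \\ Y \\ Z \end{pmatrix} + b,
        \qquad A \in \mathbb{R}^{2 \times 3},\ b \in \mathbb{R}^{2}

In the perspective model the unknown projective depths \lambda couple the equations nonlinearly; the orthographic model has no division by depth, so stacked image measurements factor linearly into camera and structure matrices, which is what factorization methods such as Tomasi-Kanade rely on.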

How to set a background image texture after the target is found in Vuforia ImageTarget?

僤鯓⒐⒋嵵緔 submitted on 2019-12-12 00:43:35
Question: I am developing an AR application with Unity3D and Vuforia, using Vuforia ImageTarget tracking. I want to set a background behind the ImageTarget and its object after the target is found. How do I set a background image texture after the target is found on a Vuforia ImageTarget? Example: I want a background like in this link.

Answer 1: Actually, this is possible using a new feature of Vuforia called Extended Tracking. Select your ImageTarget and, in the Inspector, check the Extended Tracking box in the Image Target Behavior

How can I generate 3D markers using HTML5 and render them in AR on a video/image in real time?

霸气de小男生 submitted on 2019-12-11 20:36:18
Question: I am still wondering how I can generate 3D markers using HTML5 and render them in AR on a video/image in real time. My requirement is: I want to develop AR where, for example, I get an image or a live video stream on a desktop, mark some points or annotate something on it, and mirror it onto an iPad remotely. How can I achieve this in an HTML5 app? I came across a wonderful tutorial on JSARToolKit - http://www.html5rocks.com/en/tutorials/webgl/jsartoolkit_webrtc/ - which I felt would be the most suitable one

Smooth translation combined with compass

心已入冬 submitted on 2019-12-11 20:25:33
Question: I'm building an augmented-reality app for Android and I'm using jMonkey as my 3D engine. I want to do a simple thing: move an object from the left side of the screen to the right (along the X axis) by changing the azimuth of the view (which I get from the compass). I can calculate where the object is (the rendered object has a GPS location), so I can tell whether I am looking directly at it or whether it is to my left or right. My problem now is moving smoothly and calculating the change for the local translation. My questions are: 1. How can I calculate the position
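
A minimal sketch of one way to do the smoothing, in plain Java around jMonkeyEngine (class and field names are made up, not from the question): wrap the signed azimuth difference to [-180, 180], map degrees to scene units, and low-pass filter the result before feeding it to setLocalTranslation.

    // Sketch: converts the compass/bearing difference into a smoothed X offset.
    public final class AzimuthPanner {
        private float smoothedX = 0f;        // filtered X translation, in scene units
        private final float unitsPerDegree;  // how many scene units one degree maps to
        private final float smoothing;       // responsiveness factor, higher = snappier

        public AzimuthPanner(float unitsPerDegree, float smoothing) {
            this.unitsPerDegree = unitsPerDegree;
            this.smoothing = smoothing;
        }

        /** Call once per frame; tpf is the time per frame in seconds (as in simpleUpdate). */
        public float update(float azimuthDeg, float objectBearingDeg, float tpf) {
            // Wrap the signed difference to [-180, 180] so 359 degrees -> 1 degree is a small move.
            float delta = ((objectBearingDeg - azimuthDeg + 540f) % 360f) - 180f;
            float targetX = delta * unitsPerDegree;
            // Exponential smoothing toward the target, scaled by frame time.
            float alpha = Math.min(1f, smoothing * tpf);
            smoothedX += (targetX - smoothedX) * alpha;
            return smoothedX;
        }
    }

    // Usage inside simpleUpdate(float tpf), assuming 'spatial' is the rendered object:
    //   float x = panner.update(compassAzimuthDeg, bearingToObjectDeg, tpf);
    //   spatial.setLocalTranslation(x, 0f, spatial.getLocalTranslation().z);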

Need web-based AR solutions for plane detection

大城市里の小女人 submitted on 2019-12-11 18:01:32
Question: I am searching for web-based AR solutions using markerless detection, i.e., plane detection or object detection. I tried the A-Frame framework and three.js, but they only offer marker-based detection techniques to render 3D objects.

Answer 1: I've been searching for the same and found 8th Wall: https://8thwall.com It works well and has a few different pricing tiers, including a free one.

Source: https://stackoverflow.com/questions/54306700/need-web-based-ar-solutions-for-plane

Render a 3D object off the center of the screen: ARToolKit Android

那年仲夏 submitted on 2019-12-11 17:20:09
Question: Hi, I am working on an AR Android app using ARToolKit6. In this app I want to render my 3D object (a cube) on the left half of the screen. Eventually I want to display 3 cubes on the screen, each occupying 1/3 of the screen area. I was able to scale the 3D object by tweaking the model-view matrix. From what I have read so far, I think I need to tweak the projection matrix to achieve my goal. I tried looking for solutions online but couldn't get it to work. Can anyone point me in the right direction? for (int trackableUID
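
Two common ways to push the projected cube into one third of the screen, sketched with the standard Android GL helpers (this is an assumption about the rendering setup, not code from the question or from ARToolKit6): either restrict the viewport per draw call, or multiply a post-projection translation onto the projection matrix so the content shifts in normalized device coordinates while the tracking pose stays untouched.

    // Sketch using android.opengl; not taken from the question's renderer.
    import android.opengl.GLES20;
    import android.opengl.Matrix;

    public final class OffCenterHelper {

        /** Option 1: rasterize only into the i-th vertical third of the surface (i = 0, 1, 2). */
        public static void viewportForThird(int i, int surfaceWidth, int surfaceHeight) {
            int w = surfaceWidth / 3;
            GLES20.glViewport(i * w, 0, w, surfaceHeight);
        }

        /**
         * Option 2: keep the full viewport, but left-multiply a translation onto the projection
         * matrix so the content shifts in NDC (e.g. ndcShiftX = -0.66f moves it toward the left
         * third). The model-view matrix from tracking is not modified.
         */
        public static float[] shiftProjection(float[] projectionMatrix, float ndcShiftX) {
            float[] shift = new float[16];
            Matrix.setIdentityM(shift, 0);
            Matrix.translateM(shift, 0, ndcShiftX, 0f, 0f);
            float[] shifted = new float[16];
            Matrix.multiplyMM(shifted, 0, shift, 0, projectionMatrix, 0);
            return shifted;
        }
    }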