augmented-reality

How can I integrate a Unity-built app with a native iOS app?

无人久伴 submitted on 2020-01-16 01:17:10
Question: Is there a way to include a Unity build in a view controller's view? I was following this tutorial: https://medium.com/@IronEqual/how-to-embed-a-unity-game-into-an-ios-native-swift-app-772a0b65c82. It worked fine with Unity apps that don't include the ARKit plugin, but after adding ARKit to the app, I get this build error:

    Desktop/SwiftTut/SwiftTut/Unity/Classes/Native/Bulk_mscorlib_10.cpp:15950:81: No member named 'Contexts' in namespace 'il2cpp::icalls::mscorlib::System:

awe.js Augmented Reality adding text

谁说我不能喝 submitted on 2020-01-15 08:28:08
Question: I'm trying to add text to an awe.js project. Using this tutorial, https://www.sitepoint.com/augmented-reality-in-the-browser-with-awe-js/, I have come up with an attempt:

    awe.projections.add({
      id: 'text',
      geometry: {shape: 'text', text: 'Hello World', font: 'times new roman', weight: 'normal', style: 'normal'},
      rotation: {y: 45},
      position: {x: -5, y: -31, z: -5},
      material: {type: 'phong', color: 0xFF0000}
    }, {poi_id: 'marker'});

Then I did some more research on the subject and tried in

ARCore – Object does not show at the correct depth in Face Augmentation

ぐ巨炮叔叔 submitted on 2020-01-15 06:27:26
Question: I tried to place an object on a face, but I don't understand how to set the object's depth. For example, when I add a 3D object such as spectacle frames to the face, it does not appear at the correct depth. Answer 1: When you use the Augmented Faces feature, it's worth noting that if any face is detected, ARCore first places a Face Anchor (which must be located behind the nose or, more precisely, inside the skull), and then ARCore places a canonical face mask whose pivot point resides at the same location as the anchor. Hence, if
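The geometry behind this answer (anchor inside the skull, model pushed outward along the anchor's local axes) can be sketched in NumPy. This is plain transform math, not the ARCore API; the pose matrix and offset values below are hypothetical:

```python
import numpy as np

def offset_from_anchor(anchor_pose, local_offset):
    """Transform a point given in the anchor's local frame into world space.

    anchor_pose  : 4x4 pose matrix (rotation + translation) of the face anchor
    local_offset : 3-vector, e.g. an offset along +Z to push a model
                   from the anchor (inside the skull) out to the face surface
    """
    p = np.append(local_offset, 1.0)   # homogeneous coordinates
    return (anchor_pose @ p)[:3]

# Identity rotation, anchor 2 m in front of the origin (made-up numbers)
pose = np.eye(4)
pose[:3, 3] = [0.0, 0.0, -2.0]

# Push the model 8 cm along the anchor's +Z so it sits on the face, not inside it
print(offset_from_anchor(pose, [0.0, 0.0, 0.08]))
```

The point is that depth is not a property you "set" on the object; it is the offset you give the object relative to the face anchor's pivot.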

Programmatically rotate a 3D model using the Sceneform ecosystem

自古美人都是妖i submitted on 2020-01-14 07:41:31
Question: I'm using the Sceneform SDK in an Android project. I have the sfb and sfa objects in my project, and I want the initial rotation of my object to be 90 degrees. How can I achieve this? I found the following code in these files, and I changed the scale, but I didn't find a way to set the rotation:

    model: {
      attributes: [
        "Position",
        "TexCoord",
        "Orientation",
      ],
      collision: {},
      file: "sampledata/models/redmixer.obj",
      name: "redmixer",
      scale: 0.010015,
    },

Answer 1: I have used setLocalRotation to
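The truncated answer points at setLocalRotation, which takes a quaternion. The underlying axis-angle quaternion math for a 90-degree rotation about Y can be sketched in NumPy; this mirrors the (x, y, z, w) convention of Sceneform's Quaternion.axisAngle but is plain math, not the Sceneform API:

```python
import numpy as np

def axis_angle_quaternion(axis, degrees):
    """Unit quaternion (x, y, z, w) for a rotation of `degrees` about `axis`."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)       # axis must be a unit vector
    half = np.radians(degrees) / 2.0
    return np.append(axis * np.sin(half), np.cos(half))

# 90 degrees about the Y axis -- the rotation the question asks for
q = axis_angle_quaternion([0, 1, 0], 90)
```

In Sceneform itself this would correspond to something like `node.setLocalRotation(Quaternion.axisAngle(new Vector3(0f, 1f, 0f), 90f))`.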

Updating a Microsoft AzureSpatialAnchor does not seem possible

女生的网名这么多〃 submitted on 2020-01-14 05:40:25
Question: I'm developing a Unity application and I am working with Azure Spatial Anchors. I found no way to update an already existing cloud anchor. I tried this:

    protected virtual async Task SaveCurrentObjectAnchorToCloudAsync()
    {
        // Get the cloud-native anchor behavior
        CloudNativeAnchor cna = cubeInstance.GetComponent<CloudNativeAnchor>();

        // If the cloud portion of the anchor hasn't been created yet, create it
        if (cna.CloudAnchor == null)
        {
            cna.NativeToCloud();
        }

        // Get the cloud portion of the

How to augment a cube onto a specific position using a 3x3 homography

不羁岁月 submitted on 2020-01-12 20:57:13
Question: I am able to track 4 coordinates across different images of the same scene by calculating a 3x3 homography between them. Doing this, I can overlay other 2D images onto those coordinates. I am wondering if I could instead use this homography to augment a cube onto this position using OpenGL? I think the 3x3 matrix doesn't give enough information on its own, but if I know the camera calibration matrix, can I get enough to create a model-view matrix to do this? Thank you for any help you can give. Answer 1: If you
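With the intrinsic matrix K known, there is a standard decomposition of a plane-induced homography into a rotation and translation. A minimal NumPy sketch, assuming the tracked points lie on the plane z = 0 and H is reasonably noise-free (this is the textbook method, not necessarily what the truncated answer went on to describe; real code should re-orthonormalize R, e.g. via SVD):

```python
import numpy as np

def pose_from_homography(H, K):
    """Recover a 4x4 [R|t] matrix from a homography H = s * K * [r1 r2 t].

    Assumes the tracked world points lie on the plane z = 0.
    """
    A = np.linalg.inv(K) @ H
    lam = 1.0 / np.linalg.norm(A[:, 0])   # recover the unknown scale s
    r1 = lam * A[:, 0]
    r2 = lam * A[:, 1]
    r3 = np.cross(r1, r2)                 # complete the rotation basis
    t = lam * A[:, 2]
    M = np.eye(4)
    M[:3, :3] = np.column_stack([r1, r2, r3])
    M[:3, 3] = t
    return M

# Round-trip check with a made-up camera and pose
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
c, s = np.cos(0.3), np.sin(0.3)
R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])   # rotation about Z
t = np.array([0.1, -0.2, 5.0])
H = K @ np.column_stack([R[:, 0], R[:, 1], t])
M = pose_from_homography(H, K)
```

Note that feeding this into OpenGL additionally requires converting between the computer-vision camera convention and OpenGL's (camera looking down -Z, Y up), typically by negating the Y and Z rows.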

How are the ARKit People Occlusion samples being done?

好久不见. submitted on 2020-01-12 10:46:19
Question: This may be an obscure question, but I see lots of very cool samples online of how people are using the new ARKit 3 people-occlusion technology to effectively "separate" people from the background and apply some sort of filtering to the "people" (see here). Looking at Apple's provided source code and documentation, I see that I can retrieve the segmentationBuffer from an ARFrame, which I've done, like so:

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        let
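Whatever the ARKit plumbing looks like, the filtering effect itself boils down to per-pixel compositing with the segmentation mask. Setting the ARKit specifics aside, the idea can be sketched in NumPy; `grayscale` here is a hypothetical stand-in for whatever effect you apply to the background, and `mask` stands in for a thresholded segmentationBuffer:

```python
import numpy as np

def composite(frame, mask, background_effect):
    """Keep "people" pixels from the frame where mask == 1,
    and apply an effect everywhere else.

    frame : HxWx3 float image
    mask  : HxW array of 0/1 values (stand-in for a segmentation buffer)
    """
    effected = background_effect(frame)
    m = mask[..., None]                 # broadcast the mask over RGB channels
    return m * frame + (1 - m) * effected

def grayscale(img):
    """Hypothetical background effect: average the channels."""
    return np.repeat(img.mean(axis=2, keepdims=True), 3, axis=2)

frame = np.random.rand(4, 4, 3)
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1                      # fake "person" region
out = composite(frame, mask, grayscale)
```

In a real app the same blend would typically run on the GPU, e.g. in a Metal shader or a CIFilter, with the segmentation buffer bound as a texture.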

Does PhoneGap support WebRTC?

…衆ロ難τιáo~ submitted on 2020-01-11 05:48:09
Question: I want to build an augmented reality app. I was thinking of using something like the Wikitude SDK (http://www.wikitude.com/developer) or this JavaScript library, js-objectdetect (https://github.com/mtschirs/js-objectdetect), which I would prefer. However, it relies on WebRTC support, which is of course fine in a modern browser, but I'm not quite sure whether PhoneGap also supports it. In addition, if anyone knows how I can superimpose my 3D models over an object, that'd be great. I don't know