augmented-reality

Image Tracking using AR.js - Problem with Custom Image Descriptors

自闭症网瘾萝莉.ら submitted on 2020-06-17 16:22:25

Question: I am trying to generate an AR scene using image tracking, based on the tutorial in the AR.js documentation. It worked when I used the sample URL provided in the Codepen demo, but when I pointed the URL at my own generated image descriptors on my local machine, I got this error: "Error in loading marker on Worker 404". Since it worked fine with the image descriptors provided in the demo, I assume the problem lies with the image descriptors that I generated…
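
No answer is excerpted for this entry, but a "Worker 404" means the descriptor files were simply not found at the given URL. As a hedged sketch (file and path names are assumptions): the NFT Marker Creator emits three files per image (.fset, .fset3, .iset), and the a-nft url must point at their common basename, with no extension, on a server that actually serves those files:

```html
<!-- Minimal A-Frame + AR.js image-tracking page, after the official example.
     "my-image" is a hypothetical basename: ./descriptors/ must contain
     my-image.fset, my-image.fset3 and my-image.iset, served over HTTP(S). -->
<script src="https://aframe.io/releases/1.0.4/aframe.min.js"></script>
<script src="https://raw.githack.com/AR-js-org/AR.js/master/aframe/build/aframe-ar-nft.js"></script>

<a-scene embedded arjs="trackingMethod: best; sourceType: webcam;">
  <a-nft type="nft" url="./descriptors/my-image" smooth="true" smoothCount="10">
    <a-box position="0 0.5 0" material="color: red;"></a-box>
  </a-nft>
  <a-entity camera></a-entity>
</a-scene>
```

Opening the page via file:// (or pointing url at a path the web server does not expose) produces exactly this 404 from the tracking worker, so serving the descriptor folder from a local HTTP server is usually the fix.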

How do I spin and add a linear force to an Entity loaded from Reality Composer?

倖福魔咒の submitted on 2020-06-13 06:04:26

Question: I've constructed a scene in Reality Composer that has a ball which starts the scene floating in the air. I'm attempting to programmatically throw the ball while simultaneously spinning it. I tried to do this through behaviors in Reality Composer, but I can't get both behaviors to work simultaneously; also, the ball immediately falls to the ground once I start the animation. My second attempt was to forgo the behavior route and do it programmatically, but I cannot add a force…
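
No answer is excerpted for this entry, but RealityKit's physics API can apply both motions in the same frame once the loaded entity is cast to a physics-capable type. A minimal sketch, assuming the scene was loaded from the auto-generated Experience code and the ball entity was named "ball" in Reality Composer (both names are assumptions):

```swift
import RealityKit

// Find the ball inside the loaded Reality Composer scene and launch it.
// Reality Composer wraps models in container entities, so search by name and
// require HasPhysicsBody to reach the impulse APIs.
func throwAndSpinBall(in scene: Entity) {
    guard let ball = scene.findEntity(named: "ball") as? Entity & HasPhysicsBody else {
        return
    }
    // A .kinematic body ignores gravity while the ball floats; switching to
    // .dynamic here lets the impulses (and gravity) take effect at launch.
    ball.physicsBody?.mode = .dynamic

    // Apply both impulses together: a throw (linear) and a spin (angular).
    ball.applyLinearImpulse(SIMD3<Float>(0, 2, -5), relativeTo: nil)  // up and away
    ball.applyAngularImpulse(SIMD3<Float>(4, 0, 0), relativeTo: nil)  // spin about x
}
```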

FaceTracking in ARKit – How to display the “lookAtPoint” on the screen

你离开我真会死。 submitted on 2020-06-10 19:21:08

Question: The ARFaceTrackingConfiguration of ARKit places an ARFaceAnchor, carrying information about the position and orientation of the face, onto the scene. Among others, this anchor has the lookAtPoint property that I'm interested in. I know that this vector is relative to the face. How can I draw a point on the screen for this position, i.e. how can I translate this point's coordinates?

Answer 1: The .lookAtPoint instance property is for estimating direction only. Apple's documentation says: .lookAtPoint is a…
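
The excerpt cuts off mid-answer, but the projection itself is mechanical: promote lookAtPoint from face-anchor space to world space, then project it with ARCamera.projectPoint. A hedged sketch (the class, sceneView and dot are all assumed names, and view setup is omitted):

```swift
import ARKit
import SceneKit
import UIKit

// Assumed context: an ARSCNView running ARFaceTrackingConfiguration, plus a
// small UIView used as the on-screen marker for the gaze point.
final class FaceDotViewController: UIViewController, ARSCNViewDelegate {
    let sceneView = ARSCNView(frame: .zero)
    let dot = UIView(frame: CGRect(x: 0, y: 0, width: 12, height: 12))

    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let faceAnchor = anchor as? ARFaceAnchor,
              let frame = sceneView.session.currentFrame else { return }

        // lookAtPoint is in face-anchor space; promote it to world space first.
        let world = faceAnchor.transform * SIMD4<Float>(faceAnchor.lookAtPoint, 1)

        // Project the world-space point into 2D view (point) coordinates.
        let screenPoint = frame.camera.projectPoint(
            SIMD3<Float>(world.x, world.y, world.z),
            orientation: .portrait,
            viewportSize: sceneView.bounds.size)

        DispatchQueue.main.async { self.dot.center = screenPoint }
    }
}
```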

ARKit 3.5 – How to export OBJ from new iPad Pro with LiDAR?

人走茶凉 submitted on 2020-06-10 01:42:29

Question: How can I export the ARMeshGeometry generated by the new SceneReconstruction API on the latest iPad Pro to an .obj file? Here's the SceneReconstruction documentation.

Answer 1: Starting with Apple's Visualising Scene Semantics sample app, you can retrieve the ARMeshGeometry object from the first anchor in the frame. The easiest approach to exporting the data is to first convert it to an MDLMesh; the answer's code cuts off at extension ARMeshGeometry { func toMDLMesh(device: MTLDevice) -> MDLMesh { let allocator = …
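
A completed version of that conversion, reconstructed here as a hedged sketch (it assumes float3 vertex positions at offset 0 and uInt32 triangle indices, matching how ARGeometrySource and ARGeometryElement are laid out in practice):

```swift
import ARKit
import MetalKit
import ModelIO

extension ARMeshGeometry {
    // Convert ARKit's mesh buffers into a Model I/O mesh that MDLAsset can export.
    func toMDLMesh(device: MTLDevice) -> MDLMesh {
        let allocator = MTKMeshBufferAllocator(device: device)

        // Copy the vertex positions into a Model I/O vertex buffer.
        let vertexData = Data(bytes: vertices.buffer.contents(),
                              count: vertices.stride * vertices.count)
        let vertexBuffer = allocator.newBuffer(with: vertexData, type: .vertex)

        // Copy the triangle indices into an index buffer.
        let indexCount = faces.count * faces.indexCountPerPrimitive
        let indexData = Data(bytes: faces.buffer.contents(),
                             count: faces.bytesPerIndex * indexCount)
        let indexBuffer = allocator.newBuffer(with: indexData, type: .index)

        let submesh = MDLSubmesh(indexBuffer: indexBuffer,
                                 indexCount: indexCount,
                                 indexType: .uInt32,
                                 geometryType: .triangles,
                                 material: nil)

        // Describe the layout: one float3 position attribute per vertex.
        let descriptor = MDLVertexDescriptor()
        descriptor.attributes[0] = MDLVertexAttribute(name: MDLVertexAttributePosition,
                                                      format: .float3,
                                                      offset: 0,
                                                      bufferIndex: 0)
        descriptor.layouts[0] = MDLVertexBufferLayout(stride: vertices.stride)

        return MDLMesh(vertexBuffer: vertexBuffer,
                       vertexCount: vertices.count,
                       descriptor: descriptor,
                       submeshes: [submesh])
    }
}
```

Exporting then reduces to adding each anchor's converted mesh to an MDLAsset created with the same allocator and calling try asset.export(to: url) with a URL whose path ends in .obj; Model I/O picks the format from the extension.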

Implementing Codable for ARAnchor: “cannot be automatically synthesized in an extension…”

喜欢而已 submitted on 2020-06-08 13:17:04

Question: The code extension ARAnchor: Codable {} produces the error: "Implementation of 'Decodable' cannot be automatically synthesized in an extension in a different file to the type". What does this mean? I was able to implement Codable for another native type in a similar fashion without any errors.

Answer 1: You could create a container object that implements Codable and then use that to encode and decode the anchor. I tried this code in a playground and it worked for me. You'll want to adapt it for…
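
The error itself means the compiler can only synthesize Codable conformances in the file that defines the type; ARAnchor lives inside ARKit, so synthesis is impossible and the conformance must either be written by hand or the anchor wrapped. A sketch of the wrapper approach, leaning on ARAnchor's existing NSSecureCoding conformance (the name AnchorContainer is an assumption):

```swift
import ARKit
import Foundation

// Codable wrapper that archives the anchor with NSKeyedArchiver under the hood.
struct AnchorContainer: Codable {
    let anchor: ARAnchor

    enum CodingKeys: String, CodingKey { case anchor }

    init(anchor: ARAnchor) { self.anchor = anchor }

    init(from decoder: Decoder) throws {
        let container = try decoder.container(keyedBy: CodingKeys.self)
        let data = try container.decode(Data.self, forKey: .anchor)
        guard let anchor = try NSKeyedUnarchiver.unarchivedObject(ofClass: ARAnchor.self,
                                                                  from: data) else {
            throw DecodingError.dataCorruptedError(forKey: .anchor, in: container,
                                                   debugDescription: "ARAnchor unarchiving failed")
        }
        self.anchor = anchor
    }

    func encode(to encoder: Encoder) throws {
        var container = encoder.container(keyedBy: CodingKeys.self)
        let data = try NSKeyedArchiver.archivedData(withRootObject: anchor,
                                                    requiringSecureCoding: true)
        try container.encode(data, forKey: .anchor)
    }
}
```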

Track camera position with RealityKit

久未见 submitted on 2020-05-30 09:39:44

Question: How can you track the position of the camera using RealityKit? Several examples use SceneKit, but I found none using RealityKit. I need a function such as:

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // Do something with the new transform
        let currentTransform = frame.camera.transform
        doSomething(with: currentTransform)
    }

Answer 1: Using the ARView camera transform: you can access it through the following instance property: var cameraTransform: Transform. The…
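
A sketch combining both routes: the per-frame session-delegate callback the question asks for, and RealityKit's own cameraTransform property (the CameraTracker class and its stored arView are assumed names):

```swift
import ARKit
import RealityKit

// Tracks the camera either via ARSessionDelegate callbacks or by polling ARView.
final class CameraTracker: NSObject, ARSessionDelegate {
    let arView: ARView

    init(arView: ARView) {
        self.arView = arView
        super.init()
        arView.session.delegate = self  // opt in to per-frame updates
    }

    // Fires every frame, exactly like the SceneKit-era pattern.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        let currentTransform = frame.camera.transform  // simd_float4x4, world space
        print("camera position:", currentTransform.columns.3)
    }

    // Or read RealityKit's own property whenever it is needed:
    var cameraPosition: SIMD3<Float> {
        arView.cameraTransform.translation
    }
}
```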

ARKit – Viewport Size vs Real Screen Resolution

℡╲_俬逩灬. submitted on 2020-05-28 07:45:09

Question: I am writing an ARKit app that uses the ARSCNView hitTest function. The app also sends captured images to a server for some analysis. I noticed that when I do:

    let viewportSize = sceneView.snapshot().size
    let viewSize = sceneView.bounds.size

the first one is twice as large as the second one. The questions are: 1. Why is there a difference? 2. What "size" (i.e. which coordinates) is used in hitTest?

Answer 1: Why is there a difference? Let's explore some important display characteristics of your iPhone 7: a…
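
The excerpt stops mid-answer, but the short version: bounds.size is measured in points, while the snapshot reports pixel dimensions, which on a 2x device like the iPhone 7 are points × UIScreen.main.scale; hitTest expects a CGPoint in the view's point coordinate space. A small sketch of the relationship (the function name is an assumption, and hitTest(_:types:) is the pre-iOS-14 API the question uses):

```swift
import ARKit
import UIKit

// Compare the sizes in play, then hit-test using point coordinates.
func inspectSizes(of sceneView: ARSCNView) {
    let viewSize = sceneView.bounds.size          // points, e.g. 375 × 667 on iPhone 7
    let scale = UIScreen.main.scale               // 2.0 on an iPhone 7
    let snapshotSize = sceneView.snapshot().size  // typically viewSize × scale (pixels)
    print(viewSize, scale, snapshotSize)

    // hitTest takes a CGPoint in the view's (point) coordinate space,
    // so derive it from bounds, not from snapshot pixels.
    let center = CGPoint(x: viewSize.width / 2, y: viewSize.height / 2)
    let results = sceneView.hitTest(center, types: .featurePoint)
    print("feature-point hits:", results.count)
}
```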