ARKit

Swift: Accessing the 1220 face vertices and saving the vertices from an AR face

Submitted 2020-06-16 18:11:29
Question: I am trying to get the 1220 vertices of data and save them to a file on the iPhone. The problem I am having is that I cannot get the captured data and write it into the JSON structure correctly. I have: struct CaptureData { var vertices: [SIMD3<Float>] var verticesFormatted : String { let v = "<" + vertices.map{ "\($0.x):\($0.y):\($0.z)" }.joined(separator: "~") + "~t:\(String(Double(Date().timeIntervalSince1970)))>" return "\(v)" } var jsonDict:Dictionary<String, Any> = [ "facetracking_data" : "1",
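
A minimal sketch of one way to capture the face vertices per frame and write them to a file, assuming a session running ARFaceTrackingConfiguration; the CaptureData shape, the delegate class, and the file name are illustrative, not part of any Apple API:

```swift
import ARKit

// Sketch: collect the face mesh vertices from each updated ARFaceAnchor and
// write the formatted string to a file in the app's Documents directory.
struct CaptureData {
    var vertices: [SIMD3<Float>]

    var verticesFormatted: String {
        let v = vertices.map { "\($0.x):\($0.y):\($0.z)" }.joined(separator: "~")
        return "<\(v)~t:\(Date().timeIntervalSince1970)>"
    }
}

final class FaceCaptureDelegate: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        guard let faceAnchor = anchors.compactMap({ $0 as? ARFaceAnchor }).first else { return }

        // ARFaceGeometry.vertices is the array of (currently 1220) SIMD3<Float> points.
        let capture = CaptureData(vertices: faceAnchor.geometry.vertices)

        let url = FileManager.default
            .urls(for: .documentDirectory, in: .userDomainMask)[0]
            .appendingPathComponent("face_vertices.txt")
        // Overwrites with the latest frame; append or batch the writes as needed.
        try? capture.verticesFormatted.data(using: .utf8)?.write(to: url)
    }
}
```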

iPad Pro Lidar - Export Geometry & Texture

Submitted 2020-06-16 17:31:33
Question: I would like to be able to export a mesh and texture from the iPad Pro LiDAR. There are examples here of how to export a mesh, but I'd like to be able to export the environment texture too: ARKit 3.5 – How to export OBJ from new iPad Pro with LiDAR? ARMeshGeometry stores the vertices for the mesh; would one have to 'record' the textures while scanning the environment and manually apply them? This post seems to show a way to get texture co-ordinates, but I can't see a way
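
A minimal sketch of the raw pieces involved, assuming a running ARWorldTrackingConfiguration with sceneReconstruction = .mesh on a LiDAR-equipped iPad. It only collects the mesh anchors and the current camera image; projecting vertices into the image to build a texture is left open, as in the question:

```swift
import ARKit

// Sketch: gather the reconstructed mesh anchors and the camera picture from a frame.
// ARMeshGeometry (anchor.geometry) holds vertices/normals/faces; frame.capturedImage
// is the camera image one could sample colors from when texturing manually.
func collectMeshData(from frame: ARFrame) -> (meshes: [ARMeshAnchor], cameraImage: CVPixelBuffer) {
    let meshAnchors = frame.anchors.compactMap { $0 as? ARMeshAnchor }
    return (meshAnchors, frame.capturedImage)
}
```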

How do I spin and add a linear force to an Entity loaded from Reality Composer?

Submitted 2020-06-13 06:04:26
Question: I've constructed a scene in Reality Composer that has a ball that starts the scene floating in the air. I'm attempting to programmatically throw the ball while simultaneously spinning it. I tried to do this through behaviors in Reality Composer, but I can't get both behaviors to work simultaneously; also, the ball immediately falls to the ground once I start the animation. My second attempt was to forgo the behavior route and do this programmatically, but I cannot add a force,
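
A minimal sketch of applying a force and a torque in code, assuming "ballEntity" was loaded from the Reality Composer scene; the entity name and force values are illustrative. Reality Composer content comes back typed as Entity, so the cast to HasPhysicsBody is what makes addForce/addTorque available:

```swift
import RealityKit

// Sketch: throw and spin an entity loaded from Reality Composer.
func throwAndSpin(_ ballEntity: Entity) {
    guard let ball = ballEntity as? Entity & HasPhysicsBody else { return }

    // Forces only take effect on a dynamic physics body.
    ball.physicsBody?.mode = .dynamic

    // Linear force to "throw" the ball and a torque to spin it, both in world space.
    ball.addForce([0, 2, -5], relativeTo: nil)
    ball.addTorque([0, 4, 0], relativeTo: nil)
}
```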

FaceTracking in ARKit – How to display the “lookAtPoint” on the screen

Submitted 2020-06-10 19:21:08
Question: ARKit's ARFaceTrackingConfiguration places an ARFaceAnchor with information about the position and orientation of the face into the scene. Among other things, this anchor has the lookAtPoint property that I'm interested in. I know that this vector is relative to the face. How can I draw a point on the screen for this position, i.e. how can I translate this point's coordinates? Answer 1: The .lookAtPoint instance property is for direction estimation only. Apple's documentation says: .lookAtPoint is a
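
A minimal sketch of the coordinate conversion the question asks about (face space to world space to screen space), assuming an ARSCNView running ARFaceTrackingConfiguration; whether lookAtPoint is meaningful as an on-screen position is a separate matter, as the answer notes. The "sceneView" parameter name is illustrative:

```swift
import ARKit
import SceneKit

// Sketch: project the anchor's lookAtPoint into screen coordinates.
func screenPoint(for faceAnchor: ARFaceAnchor, in sceneView: ARSCNView) -> CGPoint {
    // Face-space point -> world-space point (homogeneous multiply with the anchor transform).
    let local = SIMD4<Float>(faceAnchor.lookAtPoint, 1)
    let world = faceAnchor.transform * local

    // World-space point -> screen-space point via the view.
    let projected = sceneView.projectPoint(SCNVector3(world.x, world.y, world.z))
    return CGPoint(x: CGFloat(projected.x), y: CGFloat(projected.y))
}
```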

ARKit 3.5 – How to export OBJ from new iPad Pro with LiDAR?

Submitted 2020-06-10 01:42:29
Question: How can I export the ARMeshGeometry generated by the new SceneReconstruction API on the latest iPad Pro to an .obj file? Here's the SceneReconstruction documentation. Answer 1: Starting with Apple's Visualising Scene Semantics sample app, you can retrieve the ARMeshGeometry object from the first anchor in the frame. The easiest approach to exporting the data is to first convert it to an MDLMesh: extension ARMeshGeometry { func toMDLMesh(device: MTLDevice) -> MDLMesh { let allocator =
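
A minimal sketch of the export step that follows, assuming the toMDLMesh(device:) extension from the answer has been completed; the file name and location are illustrative. MDLAsset needs the .obj extension on the URL to write a Wavefront OBJ file:

```swift
import ARKit
import MetalKit
import ModelIO

// Sketch: wrap the converted MDLMesh objects in an MDLAsset and export to .obj.
func exportOBJ(from meshAnchors: [ARMeshAnchor], device: MTLDevice) throws -> URL {
    let asset = MDLAsset(bufferAllocator: MTKMeshBufferAllocator(device: device))
    for anchor in meshAnchors {
        // toMDLMesh(device:) is the (completed) extension from the answer above.
        asset.add(anchor.geometry.toMDLMesh(device: device))
    }
    let url = FileManager.default
        .urls(for: .documentDirectory, in: .userDomainMask)[0]
        .appendingPathComponent("scan.obj")
    try asset.export(to: url)
    return url
}
```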

Implementing Codable for ARAnchor: “cannot be automatically synthesized in an extension…”

Submitted 2020-06-08 13:17:04
Question: The code extension ARAnchor: Codable {} produces the error: "Implementation of 'Decodable' cannot be automatically synthesized in an extension in a different file to the type". What does this mean? I was able to implement Codable for another native type in a similar fashion without any errors. Answer 1: You could create a container object that implements Codable and then use that to encode and decode the anchor. I tried this code in a playground and it works for me. You'll want to adapt it for
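
A minimal sketch of the container approach the answer describes: a Codable struct that stores the parts of the anchor you need (here just name and transform), rather than trying to synthesize Codable on ARAnchor itself. The type name and stored fields are illustrative, and decoding produces a new anchor with a new identifier:

```swift
import ARKit

// Sketch: Codable container that round-trips an ARAnchor's name and transform.
struct AnchorContainer: Codable {
    let name: String?
    let transform: [Float]          // 16 values, column-major

    init(anchor: ARAnchor) {
        name = anchor.name
        let c = anchor.transform.columns
        transform = [c.0, c.1, c.2, c.3].flatMap { [$0.x, $0.y, $0.z, $0.w] }
    }

    // Rebuild an ARAnchor from the stored values (new identifier).
    var anchor: ARAnchor {
        let t = transform
        let matrix = simd_float4x4([
            SIMD4<Float>(t[0],  t[1],  t[2],  t[3]),
            SIMD4<Float>(t[4],  t[5],  t[6],  t[7]),
            SIMD4<Float>(t[8],  t[9],  t[10], t[11]),
            SIMD4<Float>(t[12], t[13], t[14], t[15])
        ])
        return ARAnchor(name: name ?? "anchor", transform: matrix)
    }
}
```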

Track camera position with RealityKit

Submitted 2020-05-30 09:39:44
Question: How can you track the position of the camera using RealityKit? Several examples use SceneKit, but I found none using RealityKit. I need a function such as: func session(_ session: ARSession, didUpdate frame: ARFrame) { // Do something with the new transform let currentTransform = frame.camera.transform doSomething(with: currentTransform) } Answer 1: Using the ARView camera transform: you can access the ARView camera transform through the following property: var cameraTransform: Transform The
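
A minimal sketch of both routes mentioned in the answer, assuming an existing ARView; the type and function names are illustrative:

```swift
import ARKit
import RealityKit

// Route 1: per-frame session delegate callback, same shape as the SceneKit version.
// Assign an instance of this class to arView.session.delegate.
final class CameraTracker: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        let currentTransform = frame.camera.transform   // simd_float4x4
        print("camera position:", currentTransform.columns.3)
    }
}

// Route 2: poll the view's camera transform directly whenever needed.
func readCamera(from arView: ARView) {
    let transform: Transform = arView.cameraTransform
    print("camera position:", transform.translation)
}
```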

RealityKit Multipeer Session - Object Sync Issue

Submitted 2020-05-30 07:59:39
Question: I can't figure out how to share 3D model objects across a multipeer session (and keep them synchronized). Design: users of the application can enter a multipeer session and place objects in the scene, while other peers can see these models and their transformations, interact with these objects themselves, and place their own, while their own objects remain open to interaction by the other peers in the session. Questions I have encountered: How to publish one instance of a model for one
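
A minimal sketch of wiring RealityKit's built-in entity synchronization to a MultipeerConnectivity session, which is the usual starting point for keeping placed models in sync across peers; the display name is illustrative, and peer discovery, ownership transfer, and anchoring strategy are left open, as in the question:

```swift
import ARKit
import MultipeerConnectivity
import RealityKit
import UIKit

// Sketch: enable collaborative anchors and RealityKit entity sync over a multipeer session.
func enableSynchronization(for arView: ARView) throws {
    let peerID = MCPeerID(displayName: UIDevice.current.name)
    let mcSession = MCSession(peer: peerID, securityIdentity: nil, encryptionPreference: .required)

    // RealityKit syncs entity state (transforms, components, ownership) over this session.
    arView.scene.synchronizationService = try MultipeerConnectivityService(session: mcSession)

    // Collaboration must also be enabled on the AR session so anchors are shared.
    let config = ARWorldTrackingConfiguration()
    config.isCollaborationEnabled = true
    arView.session.run(config)
}
```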