RealityKit

Dynamically change text of RealityKit entity

Submitted by 时间秒杀一切 on 2020-06-25 06:45:33
Question: I have created a very simple scene ("SpeechScene") using Reality Composer, with a single speech callout object ("Speech Bubble") anchored to a Face anchor. I have loaded this scene into code via the following: let speechAnchor = try! Experience.loadSpeechScene(); arView.scene.anchors.append(speechAnchor); let bubble = (arView.scene as? Experience.SpeechScene)?.speechBubble. It renders as expected. However, I would like to dynamically change the text of this existing entity. I found a similar …
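A minimal sketch of one common approach, assuming the bubble is (or contains) a ModelEntity: regenerate the text mesh with MeshResource.generateText and swap it into the model component. The setText helper name, font size, and color are illustrative, not from the original post:

```swift
import RealityKit
import UIKit

// A sketch only: regenerate the text mesh and swap it into the entity's
// model component. Font sizes for RealityKit text are in meters.
func setText(_ string: String, on bubble: Entity) {
    let mesh = MeshResource.generateText(
        string,
        extrusionDepth: 0.001,
        font: .systemFont(ofSize: 0.05),
        containerFrame: .zero,
        alignment: .center,
        lineBreakMode: .byWordWrapping
    )
    let material = SimpleMaterial(color: .black, isMetallic: false)
    if let model = bubble as? ModelEntity {
        model.model = ModelComponent(mesh: mesh, materials: [material])
    } else {
        // If the Reality Composer entity itself isn't a ModelEntity,
        // attach the text as a new child instead.
        bubble.addChild(ModelEntity(mesh: mesh, materials: [material]))
    }
}
```

Reassigning the model component avoids re-anchoring: the bubble keeps its Reality Composer transform and face anchoring while only the mesh changes.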

Multi-face detection in RealityKit

Submitted by 允我心安 on 2020-06-25 05:44:28
Question: I have added content to the face anchor in Reality Composer. Later on, after loading the Experience that I created in Reality Composer, I create a face-tracking session like this: guard ARFaceTrackingConfiguration.isSupported else { return }; let configuration = ARFaceTrackingConfiguration(); configuration.maximumNumberOfTrackedFaces = ARFaceTrackingConfiguration.supportedNumberOfTrackedFaces; configuration.isLightEstimationEnabled = true; arView.session.delegate = self; arView.session.run…
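The excerpt cuts off before the session is run. Below is a hedged sketch of the delegate side, assuming arView and a faceScene entity loaded from the Experience (both hypothetical names): clone the authored content onto each newly detected face so every tracked face gets its own copy.

```swift
import ARKit
import RealityKit

// A sketch only: `arView` and `faceScene` are assumed to be set elsewhere;
// `faceScene` is the content entity loaded from the Reality Composer Experience.
final class FaceSessionDelegate: NSObject, ARSessionDelegate {
    weak var arView: ARView?
    var faceScene: Entity?

    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        guard let arView = arView, let template = faceScene else { return }
        for anchor in anchors where anchor is ARFaceAnchor {
            // Clone the authored content so every tracked face gets its own copy.
            let anchorEntity = AnchorEntity(anchor: anchor)
            anchorEntity.addChild(template.clone(recursive: true))
            arView.scene.addAnchor(anchorEntity)
        }
    }
}
```

Setting maximumNumberOfTrackedFaces already lets ARKit report several ARFaceAnchors; the remaining work is attaching content to each anchor as it arrives.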

How do I spin and add a linear force to an Entity loaded from Reality Composer?

Submitted by 倖福魔咒の on 2020-06-13 06:04:26
Question: I've constructed a scene in Reality Composer that has a ball that starts the scene floating in the air. I'm attempting to programmatically throw the ball while simultaneously spinning it. I tried to do this through behaviors in Reality Composer, but I can't get both behaviors to work simultaneously; also, the ball immediately falls to the ground once I start the animation. My second attempt was to forgo the behavior route and do it programmatically, but I cannot add a force…
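A hedged sketch of the programmatic route: RealityKit ignores addForce/addTorque unless the entity has a dynamic physics body, which is likely why the force could not be added. The mass, collision radius, and force values below are illustrative, and ball is assumed to be the ModelEntity from the Reality Composer scene:

```swift
import RealityKit

// A sketch only: give the ball a dynamic physics body, then apply a linear
// force and a torque in the same frame to throw and spin it together.
func throwBall(_ ball: ModelEntity) {
    // Forces are ignored unless the entity has a dynamic physics body.
    ball.physicsBody = PhysicsBodyComponent(
        massProperties: .init(mass: 0.5),
        material: .default,
        mode: .dynamic
    )
    ball.collision = CollisionComponent(shapes: [.generateSphere(radius: 0.1)])

    // Throw forward/up and spin around the vertical axis simultaneously.
    ball.addForce([0, 2, -5], relativeTo: nil)
    ball.addTorque([0, 1, 0], relativeTo: nil)
}
```

Applying both calls in the same update gives the simultaneous throw-and-spin that the Reality Composer behaviors could not.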

ARKit 3.5 – How to export OBJ from new iPad Pro with LiDAR?

Submitted by 人走茶凉 on 2020-06-10 01:42:29
Question: How can I export the ARMeshGeometry generated by the new SceneReconstruction API on the latest iPad Pro to an .obj file? Here's the SceneReconstruction documentation. Answer 1: Starting with Apple's Visualizing Scene Semantics sample app, you can retrieve the ARMeshGeometry object from the first anchor in the frame. The easiest approach to exporting the data is to first convert it to an MDLMesh: extension ARMeshGeometry { func toMDLMesh(device: MTLDevice) -> MDLMesh { let allocator = …
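The excerpt cuts off mid-conversion. Below is a hedged reconstruction of that approach plus an export step: build an MDLMesh from ARKit's vertex and face buffers, then write it out with MDLAsset. The exportOBJ function and its parameters are illustrative, and mesh vertices are left in anchor-local coordinates for brevity:

```swift
import ARKit
import MetalKit
import ModelIO

extension ARMeshGeometry {
    // Convert ARKit's mesh buffers into a Model I/O mesh.
    func toMDLMesh(device: MTLDevice) -> MDLMesh {
        let allocator = MTKMeshBufferAllocator(device: device)

        let vertexData = Data(bytes: vertices.buffer.contents(),
                              count: vertices.stride * vertices.count)
        let vertexBuffer = allocator.newBuffer(with: vertexData, type: .vertex)

        let indexCount = faces.count * faces.indexCountPerPrimitive
        let indexData = Data(bytes: faces.buffer.contents(),
                             count: faces.bytesPerIndex * indexCount)
        let indexBuffer = allocator.newBuffer(with: indexData, type: .index)

        let submesh = MDLSubmesh(indexBuffer: indexBuffer,
                                 indexCount: indexCount,
                                 indexType: .uInt32,
                                 geometryType: .triangles,
                                 material: nil)

        // Describe the vertex layout (float3 positions) so exporters can read it.
        let descriptor = MDLVertexDescriptor()
        descriptor.attributes[0] = MDLVertexAttribute(name: MDLVertexAttributePosition,
                                                      format: .float3,
                                                      offset: 0,
                                                      bufferIndex: 0)
        descriptor.layouts[0] = MDLVertexBufferLayout(stride: vertices.stride)

        return MDLMesh(vertexBuffer: vertexBuffer,
                       vertexCount: vertices.count,
                       descriptor: descriptor,
                       submeshes: [submesh])
    }
}

// Gather the meshes from the current frame's anchors and write a .obj file.
func exportOBJ(from frame: ARFrame, to url: URL, device: MTLDevice) throws {
    let asset = MDLAsset()
    for case let meshAnchor as ARMeshAnchor in frame.anchors {
        asset.add(meshAnchor.geometry.toMDLMesh(device: device))
    }
    try asset.export(to: url)   // url should end in .obj
}
```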

Track camera position with RealityKit

Submitted by 久未见 on 2020-05-30 09:39:44
Question: How can you track the position of the camera using RealityKit? Several examples use SceneKit, but I found none using RealityKit. I need a function such as: func session(_ session: ARSession, didUpdate frame: ARFrame) { // Do something with the new transform let currentTransform = frame.camera.transform doSomething(with: currentTransform) }. Answer 1: Using the ARView camera transform: you can access it through the following instance property: var cameraTransform: Transform. The …
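A minimal sketch showing both options side by side: polling ARView's cameraTransform property on demand, and receiving per-frame updates through an ARSessionDelegate. The CameraTracker class name is illustrative:

```swift
import ARKit
import RealityKit

// A sketch only. Option 1: read RealityKit's own camera transform on demand.
func currentCameraPosition(in arView: ARView) -> SIMD3<Float> {
    arView.cameraTransform.translation
}

// Option 2: receive per-frame updates via the ARSession delegate.
final class CameraTracker: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        let transform = frame.camera.transform     // 4x4 matrix in world space
        let position = transform.columns.3         // translation column
        print("camera at", position.x, position.y, position.z)
    }
}
```

Keep a strong reference to the tracker and assign it with arView.session.delegate = tracker, otherwise the delegate callbacks never fire.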

RealityKit Multipeer Session - Object Sync Issue

Submitted by 久未见 on 2020-05-30 07:59:39
Question: I can't figure out how to share 3D model objects in a multipeer session (and keep them synchronized). Design: users of the application can join a multipeer session and place objects in the scene, while other peers can see these models and their transformations, interact with these objects themselves, and place their own, with every peer in the session retaining the ability to interact with those objects. Questions I have encountered: how to publish one instance of a model for one…
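A hedged sketch of the usual starting point: enable ARKit collaboration so anchors are shared, and attach a MultipeerConnectivityService so RealityKit replicates entities and ownership between peers. The mcSession parameter is assumed to be an already-configured MCSession; peer discovery and invitation are omitted:

```swift
import ARKit
import MultipeerConnectivity
import RealityKit

// A sketch only: wire RealityKit's built-in entity synchronization to an
// existing MultipeerConnectivity session.
func enableSynchronization(arView: ARView, mcSession: MCSession) throws {
    // Collaboration shares ARKit anchors between peers.
    let config = ARWorldTrackingConfiguration()
    config.isCollaborationEnabled = true
    arView.session.run(config)

    // The synchronization service replicates entity state and ownership.
    arView.scene.synchronizationService =
        try MultipeerConnectivityService(session: mcSession)
}
```

With the service in place, entities added to the scene are synchronized automatically; a peer must request ownership of an entity before transforming one placed by someone else.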

ARKit – Viewport Size vs Real Screen Resolution

Submitted by ℡╲_俬逩灬. on 2020-05-28 07:45:09
Question: I am writing an ARKit app that uses the ARSCNView hitTest function. The app also sends captured images to a server for some analysis. I noticed that when I do: let viewportSize = sceneView.snapshot().size; let viewSize = sceneView.bounds.size, the first one is twice as large as the second one. The questions are: 1. Why is there a difference? 2. What "size" (i.e. which coordinates) is used in hitTest? Answer 1: Why is there a difference? Let's explore some important display characteristics of your iPhone 7: a…
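A minimal sketch illustrating the distinction, under the assumption that the factor of two comes from the screen scale: bounds.size is measured in points, the snapshot is rendered at the display's native resolution (2x on an iPhone 7), and hitTest expects a location in the view's point space. The compareSizes name is illustrative:

```swift
import ARKit
import UIKit

// A sketch only: points vs. native-resolution pixels, and which space
// hitTest expects its input in.
func compareSizes(in sceneView: ARSCNView) {
    let viewSize = sceneView.bounds.size          // points
    let snapshotSize = sceneView.snapshot().size  // native-resolution render
    let scale = UIScreen.main.scale               // 2.0 on iPhone 7
    print("view:", viewSize, "snapshot:", snapshotSize, "scale:", scale)

    // hitTest takes view (point) coordinates, not snapshot pixels:
    let center = CGPoint(x: viewSize.width / 2, y: viewSize.height / 2)
    let hits = sceneView.hitTest(center, options: nil)
    print("hits at view center:", hits.count)
}
```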

Where is the .camera AnchorEntity located?

Submitted by 佐手、 on 2020-05-28 04:27:45
Question: When adding a child to my AnchorEntity(.camera), it appears as if the child is spawning behind my camera (meaning I can only see the child when I turn around). I have also tried to add a mesh to my anchor directly, but unfortunately ARKit / RealityKit does not render the mesh when you are inside of it (which, because it's centered around the camera, is theoretically always the case). However, it could also be the case that it's always located behind the screen [where the user is] and I'm never…
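A minimal sketch of the usual fix: content parented to a camera anchor spawns at the camera's origin, inside the near clipping plane, so it must be offset along -Z (the camera's forward axis) to land in front of the lens. The attachInFrontOfCamera name and the 0.5 m offset are illustrative:

```swift
import RealityKit

// A sketch only: children of a camera anchor spawn at the camera's origin,
// so push the entity along -Z to place it in front of the lens.
func attachInFrontOfCamera(_ entity: Entity, in arView: ARView) {
    let cameraAnchor = AnchorEntity(.camera)
    entity.position = [0, 0, -0.5]   // half a meter in front of the camera
    cameraAnchor.addChild(entity)
    arView.scene.addAnchor(cameraAnchor)
}
```

Because the anchor follows the camera, the entity stays pinned half a meter ahead of the user as they move.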