ARKit

ARSessionConfiguration unresolved in Xcode 9 GM

Submitted by 牧云@^-^@ on 2019-12-22 06:30:08
Question: I created an ARKit project with a beta version of Xcode 9, and it ran on my real device without issues. Yesterday I upgraded to Xcode 9 GM, and without touching anything, Xcode now shows multiple errors saying it does not recognize ARSessionConfiguration, i.e.: Use of undeclared type 'ARSessionConfiguration' and: Use of undeclared type 'ARWorldTrackingSessionConfiguration' ...for this code: let session = ARSession() var sessionConfig: ARSessionConfiguration = …
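For context, the configuration classes were renamed in the final iOS 11 SDK that ships with Xcode 9 GM. A minimal sketch using the renamed types (assuming an otherwise standard ARKit setup):

```swift
import ARKit

// In the final iOS 11 SDK the beta-era class names were replaced:
//   ARSessionConfiguration              -> ARConfiguration
//   ARWorldTrackingSessionConfiguration -> ARWorldTrackingConfiguration
let session = ARSession()
let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = .horizontal
session.run(configuration)
```

Updating the type names in place (and rebuilding) is usually all the migration this rename requires.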

How do I restrict compatible devices to ARKit-capable devices in Xcode?

Submitted by 核能气质少年 on 2019-12-22 04:07:38
Question: How can I make sure my app on the iOS App Store shows compatibility only for ARKit-enabled devices? Answer 1: The key is arkit in your Info.plist file, under Required device capabilities; see Apple's documentation on plist keys (UIRequiredDeviceCapabilities). Key: arkit. Description: Include this key if your app requires support for ARKit on the device (that is, an iOS device with an A9 or later processor). Minimum version: iOS 11.0. One important caveat for existing apps is that Apple does not allow …
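In source form, the Info.plist entry described above looks like this (a config fragment; the surrounding plist dict is omitted):

```xml
<key>UIRequiredDeviceCapabilities</key>
<array>
    <string>arkit</string>
</array>
```

With this key present, the App Store hides the app from devices that cannot run ARKit.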

Can I load 3D models from a web server in Swift?

Submitted by ◇◆丶佛笑我妖孽 on 2019-12-22 00:09:49
Question: I'm working on an ARKit application. My app contains many 3D models, and their total size is large. Can I fetch these models from another server (an external site)? I'm new to Swift and can't find anything about loading a 3D model from a web server. Is it enough to change the model path there? Thank you. func loadModel() { guard let virtualObjectScene = SCNScene(named: "\(modelName).\(fileExtension)", inDirectory: "Models.scnassets/\(modelName)") else { return } let wrapperNode = SCNNode() for …
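Changing the path alone is not enough, because SCNScene(named:) only reads from the app bundle. One common approach is to download the file first and then load it from disk. A hedged sketch (the function name, base URL parameter, and .scn file extension are assumptions; the server must serve a format SceneKit can read):

```swift
import SceneKit

// Hypothetical sketch: download a .scn file to the caches directory,
// then load it with SCNScene(url:).
func loadRemoteModel(named modelName: String,
                     from baseURL: URL,
                     completion: @escaping (SCNNode?) -> Void) {
    let remoteURL = baseURL.appendingPathComponent("\(modelName).scn")
    let task = URLSession.shared.downloadTask(with: remoteURL) { tempURL, _, error in
        guard let tempURL = tempURL, error == nil else {
            DispatchQueue.main.async { completion(nil) }; return
        }
        do {
            let cachesDir = FileManager.default.urls(for: .cachesDirectory,
                                                     in: .userDomainMask)[0]
            let localURL = cachesDir.appendingPathComponent("\(modelName).scn")
            try? FileManager.default.removeItem(at: localURL) // drop any stale copy
            try FileManager.default.moveItem(at: tempURL, to: localURL)
            let scene = try SCNScene(url: localURL, options: nil)
            // Wrap the scene's children in one node, as in the original snippet.
            let wrapperNode = SCNNode()
            for child in scene.rootNode.childNodes {
                wrapperNode.addChildNode(child)
            }
            DispatchQueue.main.async { completion(wrapperNode) }
        } catch {
            DispatchQueue.main.async { completion(nil) }
        }
    }
    task.resume()
}
```

Caching the file locally also avoids re-downloading the model on every launch.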

Can CATransform3D be used to get eye size dimensions in Face Mesh?

Submitted by 给你一囗甜甜゛ on 2019-12-21 16:57:57
Question: I am trying to get the width of each eye and the distance between the two eyes using ARKit's 3D face mesh. I have used the CATransform3D of the ARAnchor: struct CATransform3D { CGFloat m11, m12, m13, m14; CGFloat m21, m22, m23, m24; CGFloat m31, m32, m33, m34; CGFloat m41, m42, m43, m44; }; Below is my code: func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) { guard let faceAnchor = anchor as? ARFaceAnchor else { return } let leftcaTransform3DValue : CATransform3D = (faceAnchor …
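If iOS 12 or later is available, a simpler route than decoding the raw CATransform3D is the per-eye transforms that ARFaceAnchor exposes; this is a swapped-in technique relative to the question, sketched under that assumption:

```swift
import ARKit

// Sketch (assumes iOS 12+): leftEyeTransform / rightEyeTransform are
// simd_float4x4 matrices relative to the face anchor; column 3 holds
// each eye's translation, so the inter-eye distance is the distance
// between those translation vectors (in meters).
func interEyeDistance(for faceAnchor: ARFaceAnchor) -> Float {
    let left  = faceAnchor.leftEyeTransform.columns.3
    let right = faceAnchor.rightEyeTransform.columns.3
    return simd_distance(simd_float3(left.x, left.y, left.z),
                         simd_float3(right.x, right.y, right.z))
}
```

Eye width, by contrast, is not exposed directly and would have to be estimated from the relevant ARFaceGeometry vertices.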

ARKit - How to contain SCNText within another SCNNode (speech bubble)

Submitted by 落花浮王杯 on 2019-12-21 06:23:09
Question: I am trying to create a quote generator that shows simple text inside a speech bubble in ARKit. I can show the speech bubble with text, but the text always starts in the middle and overflows outside the bubble. Any help aligning it to the top left of the speech bubble and wrapping it within the bubble would be appreciated. Result Classes class SpeechBubbleNode: SCNNode { private let textNode = TextNode() var string: String? { didSet { textNode.string = string } } override init() …
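SCNText can do the wrapping and truncation itself once it is given a container frame. A minimal sketch (the frame size and scale are assumed example values that would need tuning to the actual bubble geometry):

```swift
import SceneKit

// Sketch: constrain SCNText to a box so it wraps and truncates instead
// of overflowing the bubble.
let text = SCNText(string: "A fairly long quote that should wrap", extrusionDepth: 0.5)
text.isWrapped = true
text.truncationMode = CATextLayerTruncationMode.end.rawValue
text.alignmentMode = CATextLayerAlignmentMode.left.rawValue
// containerFrame is in the text's local (unscaled) coordinate space;
// text is laid out from the frame's top-left corner.
text.containerFrame = CGRect(x: 0, y: 0, width: 10, height: 4)

let textNode = SCNNode(geometry: text)
// Scale down for AR, then position the node so the frame's top-left
// coincides with the bubble's top-left.
textNode.scale = SCNVector3(0.01, 0.01, 0.01)
```

With isWrapped and containerFrame set, the overflow problem described above goes away; only the node's position relative to the bubble remains to be adjusted.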

Get all ARAnchors of the focused camera in ARKit

Submitted by 那年仲夏 on 2019-12-21 06:15:48
Question: When the application launches, a vertical surface is first detected on one wall; then the camera turns to a second wall, where another surface is detected. The first wall is no longer visible to the ARCamera, but this code gives me the anchors of the first wall. I need the anchors of the second wall, which is currently visible/focused in the camera. if let anchor = sceneView.session.currentFrame?.anchors.first { let node = sceneView.node(for: anchor) addNode(position: SCNVector3Zero, …
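Rather than taking the first anchor in the session, one way to get only the anchors currently in view is to test each anchor's node against the camera's frustum. A hedged sketch (the function name is an assumption):

```swift
import ARKit

// Sketch: keep only anchors whose scene nodes are inside the frustum of
// the current point of view (the AR camera node).
func visibleAnchors(in sceneView: ARSCNView) -> [ARAnchor] {
    guard let frame = sceneView.session.currentFrame,
          let pointOfView = sceneView.pointOfView else { return [] }
    return frame.anchors.filter { anchor in
        guard let node = sceneView.node(for: anchor) else { return false }
        return sceneView.isNode(node, insideFrustumOf: pointOfView)
    }
}
```

isNode(_:insideFrustumOf:) tests bounding boxes, so an anchor just outside the visible image may still pass; for the wall-switching case described above that coarse test is usually sufficient.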

ARKit vs SceneKit coordinates

Submitted by 删除回忆录丶 on 2019-12-20 20:19:19
Question: I'm trying to understand the difference between the various elements introduced in ARKit and their near-equivalents in SceneKit: SCNNode.simdTransform vs SCNNode.transform. In ARKit, people seem to use SCNNode.simdTransform instead of SCNNode.transform. How do they differ? simdTransform seems to use column-major order, while transform (SCNMatrix4) is row-major. How do I convert one to the other? Just transpose? I have the impression that the tracking doesn't work as well if I use …
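On the conversion question: no manual transpose is needed, because SceneKit provides initializers that convert between the two representations while preserving the transform they describe. A small sketch:

```swift
import SceneKit

// Sketch: SCNMatrix4 and simd_float4x4 are two views of the same node
// transform; the initializers below convert between them directly.
let simdM: simd_float4x4 = matrix_identity_float4x4
let scnM  = SCNMatrix4(simdM)       // simd_float4x4 -> SCNMatrix4
let back  = simd_float4x4(scnM)     // SCNMatrix4   -> simd_float4x4
// node.simdTransform = simdM and node.transform = scnM
// put the node in the same pose.
```

The practical difference is mostly type and precision: simdTransform uses Float-based simd types (which ARKit itself uses throughout), while SCNMatrix4 uses CGFloat/SCNVector types; setting either property updates the same underlying node transform.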

SceneKit – Get direction of camera

Submitted by 拥有回忆 on 2019-12-20 10:55:54
Question: I need to find out which direction a camera is looking, e.g. whether it is looking towards Z+, Z-, X+, or X-. I've tried using eulerAngles, but the range for yaw goes 0 -> 90 -> 0 -> -90 -> 0, which means I can only detect whether the camera is looking along Z or X, not whether it is looking towards the positive or negative direction of those axes. Answer 1: You can take an SCNNode and read its worldFront property to get a vector with the x, y, and z direction. Another way you could do it is …
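The worldFront approach sidesteps the yaw ambiguity entirely, because the vector's signs carry the positive/negative information. A sketch (the classification function is hypothetical):

```swift
import SceneKit

// Sketch: classify which world axis the camera mostly faces, using the
// camera node's worldFront vector (its -Z axis expressed in world space).
func dominantDirection(of cameraNode: SCNNode) -> String {
    let f = cameraNode.worldFront
    if abs(f.z) >= abs(f.x) {
        return f.z < 0 ? "Z-" : "Z+"
    } else {
        return f.x < 0 ? "X-" : "X+"
    }
}
```

In an ARSCNView, the camera node to pass in is sceneView.pointOfView.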

Mapping image onto 3D face mesh

Submitted by 半腔热情 on 2019-12-20 10:25:52
Question: I am using the iPhone X and ARKit face tracking to capture the user's face. The goal is to texture the face mesh with the user's image. I'm only looking at a single frame (an ARFrame) from the AR session. From ARFaceGeometry, I have a set of vertices that describe the face. I make a JPEG representation of the current frame's capturedImage. I then want to find the texture coordinates that map the created JPEG onto the mesh vertices. I want to: 1. map the vertices from model space to world space; 2. …
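The first two steps of that pipeline can be sketched as follows: transform each vertex into world space with the anchor transform, project it into image coordinates with the frame camera, and normalize. This is a hedged sketch; the function name is an assumption, and viewportSize must match the dimensions of the image you sample from:

```swift
import ARKit

// Sketch of the vertex -> texture-coordinate mapping.
func textureCoordinates(for faceAnchor: ARFaceAnchor,
                        in frame: ARFrame,
                        viewportSize: CGSize) -> [CGPoint] {
    return faceAnchor.geometry.vertices.map { vertex in
        // 1. model space -> world space (homogeneous coordinates)
        let local = simd_float4(vertex.x, vertex.y, vertex.z, 1)
        let world = faceAnchor.transform * local
        // 2. world space -> 2D point in the given viewport
        let point = frame.camera.projectPoint(
            simd_float3(world.x, world.y, world.z),
            orientation: .portrait,
            viewportSize: viewportSize)
        // 3. pixel coordinates -> normalized UV in [0, 1]
        return CGPoint(x: point.x / viewportSize.width,
                       y: point.y / viewportSize.height)
    }
}
```

Note that capturedImage is delivered in sensor orientation, so the orientation argument (and any rotation applied when making the JPEG) must be kept consistent between projection and sampling.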