arkit

SceneKit – Get direction of camera

好久不见 · Submitted on 2019-12-03 00:44:53
I need to find out which direction a camera is looking at, e.g. if it is looking towards Z+, Z-, X+, or X-. I've tried using eulerAngles, but the range for yaw goes 0 -> 90 -> 0 -> -90 -> 0, which means I can only detect whether the camera is looking towards Z or X, not whether it's looking towards the positive or negative direction of those axes. You can read the camera node's worldFront property to get a vector with the x, y, and z components of the direction it is facing. Another way you could do it is how this project did it: // Credit to https://github.com/farice/ARShooter func getUserVector() ->
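A minimal sketch of the worldFront approach (the wiring here is an assumption, not code from the answer): read the camera node's worldFront vector and compare its horizontal components to find the dominant world axis.

```swift
import ARKit
import SceneKit

// Returns a rough description of the world axis the camera is facing.
// Assumes `sceneView` is an ARSCNView with a running session.
func cameraFacingAxis(in sceneView: ARSCNView) -> String? {
    guard let cameraNode = sceneView.pointOfView else { return nil }

    // worldFront is the node's -Z axis expressed in world coordinates,
    // i.e. the direction the camera is looking.
    let front = cameraNode.worldFront

    // Compare the horizontal components to find the dominant axis.
    if abs(front.x) > abs(front.z) {
        return front.x > 0 ? "X+" : "X-"
    } else {
        return front.z > 0 ? "Z+" : "Z-"
    }
}
```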

How to record video from ARKit?

风流意气都作罢 · Submitted on 2019-12-02 22:25:24
Now I'm testing an ARKit/SceneKit implementation. The basic rendering to the screen is kind of working, so next I want to try recording what I see on the screen into a video. Just for recording SceneKit I found this Gist: // // ViewController.swift // SceneKitToVideo // // Created by Lacy Rhoades on 11/29/16. // Copyright © 2016 Lacy Rhoades. All rights reserved. // import SceneKit import GPUImage import Photos class ViewController: UIViewController { // Renders a scene (and shows it on the screen) var scnView: SCNView! // Another renderer var secondaryRenderer: SCNRenderer? // Abducts image data
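The Gist above renders into a second SCNRenderer via GPUImage. A different route, not shown in the Gist, is ReplayKit, which records the screen contents (including the ARSCNView) without a second renderer. A minimal sketch, assuming recording is toggled from the hosting view controller:

```swift
import ReplayKit
import UIKit

class RecordingViewController: UIViewController {
    let recorder = RPScreenRecorder.shared()

    // Start capturing everything drawn to the screen, including the ARSCNView.
    func startRecording() {
        recorder.startRecording { error in
            if let error = error {
                print("Could not start recording: \(error)")
            }
        }
    }

    // Stop and present the system preview so the user can save or share the clip.
    func stopRecording() {
        recorder.stopRecording { previewController, error in
            if let previewController = previewController {
                self.present(previewController, animated: true)
            }
        }
    }
}
```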

ARkit - Loading .scn file from Web-Server URL in SCNScene

两盒软妹~` · Submitted on 2019-12-02 21:04:38
I am using ARKit for my application and I am trying to dynamically load .scn files from my web server (URL). Here is part of my code: let urlString = "https://da5645f1.ngrok.io/mug.scn" let url = URL.init(string: urlString) let request = URLRequest(url: url!) let session = URLSession.shared let downloadTask = session.downloadTask(with: request, completionHandler: { (location:URL?, response:URLResponse?, error:Error?) -> Void in print("location:\(String(describing: location))") let locationPath = location!.path let documents:String = NSHomeDirectory() + "/Documents/mug.scn" ls = NSHomeDirectory() + "
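A minimal sketch of one way to finish this (the URL below is a hypothetical placeholder): download the file to the Documents directory first, then load it with SCNScene's throwing URL initializer, since SceneKit cannot stream a scene straight from a remote URL.

```swift
import SceneKit

// Hypothetical remote file; replace with your own server URL.
let remoteURL = URL(string: "https://example.com/mug.scn")!

let task = URLSession.shared.downloadTask(with: remoteURL) { location, response, error in
    guard let location = location, error == nil else {
        print("Download failed: \(String(describing: error))")
        return
    }
    do {
        // Move the temporary file somewhere permanent before the handler returns.
        let documents = FileManager.default.urls(for: .documentDirectory,
                                                 in: .userDomainMask)[0]
        let destination = documents.appendingPathComponent("mug.scn")
        try? FileManager.default.removeItem(at: destination)
        try FileManager.default.moveItem(at: location, to: destination)

        // Load the scene from the local copy and hand it off on the main thread.
        let scene = try SCNScene(url: destination, options: nil)
        DispatchQueue.main.async {
            // e.g. sceneView.scene = scene, or add its child nodes to your scene
            print("Loaded scene with \(scene.rootNode.childNodes.count) child nodes")
        }
    } catch {
        print("Could not load scene: \(error)")
    }
}
task.resume()
```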

How to transform vision framework coordinate system into ARKit?

耗尽温柔 · Submitted on 2019-12-02 19:29:47
I am using ARKit (with SceneKit) to add a virtual object (e.g. a ball). I am tracking a real-world object (e.g. a foot) using the Vision framework and receiving its updated position in the vision request completion handler. let request = VNTrackObjectRequest(detectedObjectObservation: lastObservation, completionHandler: self.handleVisionRequestUpdate) I want to replace the tracked real-world object with a virtual one (for example replace the foot with a cube), but I am not sure how to convert the boundingBox rect (which we receive in the vision request completion) into a SceneKit node position, as the coordinate systems are
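A minimal sketch of one common approach, assuming the observation comes from a handler like handleVisionRequestUpdate above: convert the normalized Vision rect into a screen point (flipping y, since Vision's origin is bottom-left) and hit-test that point against the AR scene to get a world position. Note this ignores the display transform between the captured image and the view, which you may need for a precise mapping.

```swift
import ARKit
import Vision

// Convert a Vision bounding box (normalized, origin at bottom-left) into a world
// position by hit-testing the box's center against the AR scene.
// Assumes `sceneView` is the ARSCNView rendering the session.
func worldPosition(for observation: VNDetectedObjectObservation,
                   in sceneView: ARSCNView) -> SCNVector3? {
    let box = observation.boundingBox

    // Flip the y axis: Vision's origin is bottom-left, UIKit's is top-left.
    let viewSize = sceneView.bounds.size
    let center = CGPoint(x: box.midX * viewSize.width,
                         y: (1 - box.midY) * viewSize.height)

    // Hit-test against detected feature points to find a depth for that screen point.
    guard let result = sceneView.hitTest(center, types: .featurePoint).first else {
        return nil
    }
    let t = result.worldTransform.columns.3
    return SCNVector3(t.x, t.y, t.z)
}
```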

How to track image anchors after initial detection in ARKit 1.5?

妖精的绣舞 · Submitted on 2019-12-02 18:55:52
I'm trying ARKit 1.5 with image recognition and, as we can read in the code of the sample project from Apple: "Image anchors are not tracked after initial detection, so create an animation that limits the duration for which the plane visualization appears." An ARImageAnchor doesn't have a center: vector_float3 like ARPlaneAnchor has, and I cannot figure out how to track the detected image anchors. I would like to achieve something like in this video, that is, to have a fixed image, button, label, whatever, staying on top of the detected image, and I don't understand how I can achieve this. Here is
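In ARKit 1.5 the anchor's pose is not updated after detection, but content parented to the anchor's node still stays where the image was found, which is often enough for the fixed-overlay effect described above. A minimal sketch, assuming your view controller (called ViewController here) is the ARSCNView's delegate:

```swift
import ARKit

extension ViewController: ARSCNViewDelegate {
    // Called when ARKit detects a reference image and adds an anchor for it.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let imageAnchor = anchor as? ARImageAnchor else { return }

        // Build a plane the same size as the physical reference image.
        let size = imageAnchor.referenceImage.physicalSize
        let plane = SCNPlane(width: size.width, height: size.height)
        plane.firstMaterial?.diffuse.contents = UIColor.blue.withAlphaComponent(0.4)

        // SCNPlane is vertical by default; rotate it to lie flat on the image.
        let planeNode = SCNNode(geometry: plane)
        planeNode.eulerAngles.x = -.pi / 2

        // Parent it to the anchor's node so it stays at the detected pose.
        node.addChildNode(planeNode)
    }
}
```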

Check whether the ARReferenceImage is no longer visible in the camera's view

耗尽温柔 · Submitted on 2019-12-02 17:15:23
I would like to check whether the ARReferenceImage is no longer visible in the camera's view. At the moment I can check if the image's node is in the camera's view, but this node is still visible in the camera's view when the ARReferenceImage is covered with another image or when the image is removed. func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) { guard let node = self.currentImageNode else { return } if let pointOfView = sceneView.pointOfView { let isVisible = sceneView.isNode(node, insideFrustumOf: pointOfView) print("Is node visible: \(isVisible)") } } So I
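One approach worth noting, although it requires ARKit 2 / iOS 12 rather than 1.5: enable continuous image tracking and read ARImageAnchor.isTracked, which flips to false when the image leaves the view or is covered. A minimal sketch, assuming the reference images live in an asset group named "AR Resources" (a hypothetical name):

```swift
import ARKit

// In the object that owns the ARSCNView (and is its delegate):
func runImageTracking(on sceneView: ARSCNView) {
    guard let referenceImages = ARReferenceImage.referenceImages(
        inGroupNamed: "AR Resources", bundle: nil) else { return }

    let configuration = ARWorldTrackingConfiguration()
    configuration.detectionImages = referenceImages
    // ARKit 2+: keep updating detected image anchors instead of freezing them.
    configuration.maximumNumberOfTrackedImages = 1
    sceneView.session.run(configuration)
}

// ARSCNViewDelegate callback, fired while the anchor is being tracked.
func renderer(_ renderer: SCNSceneRenderer,
              didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    // isTracked flips to false when the image is covered or leaves the view.
    print("Reference image currently visible: \(imageAnchor.isTracked)")
}
```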

Creating path using CGMutablePath creates line to wrong CGPoint

丶灬走出姿态 · Submitted on 2019-12-02 09:19:32
I was planning to display information about an AR object on screen, with an arrow, in 2D. So I used projectPoint to get the corresponding position of the object on screen. I have this function that converts the 3D position of a node into 2D and returns the CGPoint to display the info text at: func getPoint(sceneView: ARSCNView) -> (CGPoint, CGPoint){ let projectedPoint = sceneView.projectPoint(node.worldPosition) return (point, CGPoint(x: CGFloat(projectedPoint.x), y: CGFloat(projectedPoint.y)) ) } and this to draw the line using SpriteKit: let (f,s) = parts[3].getPoint(sceneView: sceneView) line.removeFromParent() let path =
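A likely source of the "wrong CGPoint": projectPoint reports view coordinates with the origin at the top-left, while a SpriteKit overlay scene puts its origin at the bottom-left, so the y value needs to be flipped before building the path. A minimal sketch, assuming the overlay SKScene has the same size in points as the ARSCNView:

```swift
import ARKit
import SpriteKit

// Convert a node's world position into a point in a SpriteKit overlay scene.
// projectPoint returns view coordinates (origin top-left, y grows downward),
// while SKScene coordinates have their origin at the bottom-left, so flip y.
func overlayPoint(for node: SCNNode,
                  in sceneView: ARSCNView,
                  overlayScene: SKScene) -> CGPoint {
    let projected = sceneView.projectPoint(node.worldPosition)
    return CGPoint(x: CGFloat(projected.x),
                   y: overlayScene.size.height - CGFloat(projected.y))
}
```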

How can I get Camera Calibration Data on iOS? aka AVCameraCalibrationData

做~自己de王妃 · Submitted on 2019-12-02 07:15:40
As I understand it, AVCameraCalibrationData is only available over AVCaptureDepthDataOutput. Is that correct? AVCaptureDepthDataOutput, on the other hand, is only accessible with the iPhone X front cam or iPhone Plus back cam, or am I mistaken? What I am trying to do is get the FOV of an AVCaptureVideoDataOutput sample buffer. In particular, it should match the selected preset (full HD, Photo, etc.). rickster You can get AVCameraCalibrationData only from depth data output or photo output. However, if all you need is FOV, you need only part of the info that class offers — the camera intrinsics matrix —
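A minimal sketch of the intrinsics route for a video data output, assuming the device and format support camera intrinsic matrix delivery (the function names here are illustrative): enable delivery on the connection, read the matrix attachment off each sample buffer, and derive the horizontal FOV from the focal length and the image width.

```swift
import AVFoundation
import simd

// Ask the capture connection to attach the camera intrinsic matrix to each
// sample buffer (iOS 11+, on devices/formats that provide it).
func enableIntrinsicDelivery(on output: AVCaptureVideoDataOutput) {
    if let connection = output.connection(with: .video),
       connection.isCameraIntrinsicMatrixDeliverySupported {
        connection.isCameraIntrinsicMatrixDeliveryEnabled = true
    }
}

// Read the intrinsics off a delivered sample buffer and derive the horizontal FOV.
func horizontalFOV(from sampleBuffer: CMSampleBuffer) -> Double? {
    guard let attachment = CMGetAttachment(
            sampleBuffer,
            key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix,
            attachmentModeOut: nil) as? Data,
          let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
        return nil
    }
    // The attachment is a 3x3 matrix_float3x3; fx sits at column 0, row 0.
    let intrinsics = attachment.withUnsafeBytes { $0.load(as: matrix_float3x3.self) }
    let fx = Double(intrinsics.columns.0.x)
    let imageWidth = Double(CVPixelBufferGetWidth(pixelBuffer))

    // FOV = 2 * atan(width / (2 * fx)), converted to degrees.
    return 2 * atan(imageWidth / (2 * fx)) * 180 / .pi
}
```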

Drag object on XZ plane

北城以北 · Submitted on 2019-12-02 07:13:48
I am working on an augmented reality app and I would like to be able to drag an object in space. The problem with the solutions I find here on SO, the ones that suggest using projectPoint / unprojectPoint, is that they produce movement along the XY plane. I was trying to use the finger's movement on the screen as an offset for the x and z coordinates of the node. The problem is that there is a lot to take into consideration (the camera's position, the node's position, the node's rotation, etc.). Is there a simpler way of doing this? First you need to create a floor or a very large plane a few meters (i
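A minimal sketch of the large-plane idea the answer starts to describe: hit-test the pan location against an invisible floor node, so every result lies at the same height and the dragged node moves only along XZ. The property names draggedNode, floorNode, and sceneView are assumed to exist on the hosting view controller.

```swift
import ARKit

// Assumed properties on the hosting view controller (hypothetical names):
//   sceneView:   the ARSCNView
//   floorNode:   a large, invisible plane node already added to the scene
//   draggedNode: the node currently being dragged
@objc func handlePan(_ gesture: UIPanGestureRecognizer) {
    let location = gesture.location(in: sceneView)

    // Restrict the SceneKit hit test to the floor so every result lies on one plane.
    let hits = sceneView.hitTest(location, options: [
        .rootNode: floorNode,
        .ignoreHiddenNodes: false
    ])
    guard let hit = hits.first else { return }

    // Only x and z follow the finger; the object's height stays unchanged.
    draggedNode.worldPosition = SCNVector3(hit.worldCoordinates.x,
                                           draggedNode.worldPosition.y,
                                           hit.worldCoordinates.z)
}
```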