augmented-reality

How do you attach an object to your camera position with ARKit in Swift?

北战南征 submitted on 2020-05-09 18:55:05
Question: I have moving objects which I want to be able to collide with me, the player. I can launch objects from my current position/direction at launch time, but I don't understand how to attach an object to me so that it follows my position at all times.
Answer 1: In SceneKit, everything that can have a position in the scene is (attached to) a node. That includes not just visible objects, but also light sources and cameras. When you use ARSCNView, there's still a
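A minimal sketch of that idea, assuming `sceneView` is your ARSCNView and the box geometry and offset are purely illustrative: parenting a node to the camera node makes it inherit the camera's transform, so it follows the player automatically.

```swift
import ARKit
import SceneKit

func attachToCamera(in sceneView: ARSCNView) {
    // The camera is itself a node; ARSCNView exposes it as pointOfView.
    guard let cameraNode = sceneView.pointOfView else { return }

    let box = SCNNode(geometry: SCNBox(width: 0.05, height: 0.05,
                                       length: 0.05, chamferRadius: 0))
    // Place the object half a metre in front of the camera, in the
    // camera's own coordinate space (-Z is "forward" in SceneKit).
    box.position = SCNVector3(0, 0, -0.5)

    // Because the box is a child of the camera node, it moves with the
    // device at all times.
    cameraNode.addChildNode(box)
}
```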

Device camera direction excluding device landscape/portrait orientation

对着背影说爱祢 submitted on 2020-04-13 06:23:30
Question: I need to get the direction of the front-facing camera excluding the device's orientation (landscape/portrait). I tried to represent this using Core Motion by accessing the device attitude. I tried to read the Euler angles and exclude the yaw, but this doesn't seem to work, since when rotating the device more than one Euler angle value changes. I am also considering using the orientation quaternion, but I don't have experience using them. I need this information in a serialisable manner as
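A minimal sketch, not an accepted answer: read the device attitude as a quaternion, which serialises cleanly as four doubles and avoids the coupled-Euler-angle problem described above. The reference frame chosen here is an assumption, and factoring out the interface-orientation (portrait/landscape) component is left out.

```swift
import CoreMotion

let motionManager = CMMotionManager()

func startReadingAttitude() {
    guard motionManager.isDeviceMotionAvailable else { return }
    motionManager.deviceMotionUpdateInterval = 1.0 / 60.0

    // Pick the reference frame that matches how "direction" should be
    // defined relative to the world; this one is only an example.
    motionManager.startDeviceMotionUpdates(using: .xArbitraryZVertical,
                                           to: .main) { motion, _ in
        guard let q = motion?.attitude.quaternion else { return }
        // Serialise however you like, e.g. [q.x, q.y, q.z, q.w].
        // Removing the portrait/landscape component would mean multiplying
        // by the inverse of that rotation, which this sketch omits.
        print(q.x, q.y, q.z, q.w)
    }
}
```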

How to monitor my current position in ARCore and Sceneform?

自古美人都是妖i submitted on 2020-03-26 03:17:09
Question: In an application with ARCore and Sceneform, I need to somehow monitor my (really, the device's) movement in ARCore space. As a result I want to draw a ray from a selected point ( Anchor / AnchorNode ) through my current position, or to calculate the distance from the selected point to here, and update them during movement. I have ideas for how to calculate or draw, but how do I get updates?
Answer 1: First set up an on-update listener: fragment.getArSceneView().getScene().addOnUpdateListener(frameTime -> { fragment.onUpdate

Object detection ARKit vs CoreML

随声附和 submitted on 2020-03-20 07:55:33
Question: I am building an ARKit application for iPhone. I need to detect a specific perfume bottle and display content depending on what is detected. I used the demo app from developer.apple.com to scan the real-world object and export an .arobject file which I can use in assets. It's working fine, although since the bottle is made of glass, detection is very poor. It detects the object only in the location where the scan was made, taking anywhere from 2 to 30 seconds, or doesn't detect it at all. Merging scans doesn't improve the situation, something making
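For reference, a minimal sketch of the object-detection setup being described, assuming the scanned .arobject files live in an AR resource group named "AR Objects" in the asset catalog (that name is an assumption):

```swift
import ARKit

func runObjectDetection(on sceneView: ARSCNView) {
    let configuration = ARWorldTrackingConfiguration()
    // Reference objects exported from the scanning demo app.
    configuration.detectionObjects = ARReferenceObject.referenceObjects(
        inGroupNamed: "AR Objects", bundle: nil) ?? []
    sceneView.session.run(configuration)
}

// Detected objects arrive as ARObjectAnchor instances in the
// ARSCNViewDelegate callback renderer(_:didAdd:for:), where the
// content for the recognised bottle can be placed.
```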

RealityKit - Animate opacity of a ModelEntity?

怎甘沉沦 submitted on 2020-02-28 08:48:47
Question: By setting the color of a material on the model property of a ModelEntity, I can alter the opacity/alpha of an object. But how do you animate this? My goal is to start objects at full opacity, then have them fade to a set opacity, such as 50%. With SCNAction.fadeOpacity on an SCNNode in SceneKit, this was particularly easy: let fade = SCNAction.fadeOpacity(by: 0.5, duration: 0.5) node.runAction(fade) An Entity conforms to HasTransform , but that will only allow you to animate scale,
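A hedged workaround sketch, not an official fadeOpacity equivalent: drive the fade manually from the per-frame scene update event and rebuild a SimpleMaterial whose tint alpha decreases over time. Whether tint alpha is rendered as translucency depends on the RealityKit version, so treat that as an assumption to verify.

```swift
import RealityKit
import UIKit
import Combine

final class FadeController {
    private var subscription: Cancellable?
    private var elapsed: Float = 0

    func fade(_ entity: ModelEntity, in arView: ARView,
              to targetAlpha: CGFloat = 0.5, duration: Float = 0.5) {
        // SceneEvents.Update fires once per frame and reports deltaTime.
        subscription = arView.scene.subscribe(to: SceneEvents.Update.self) { [weak self] event in
            guard let self = self else { return }
            self.elapsed += Float(event.deltaTime)
            let t = min(self.elapsed / duration, 1)
            let alpha = 1 - (1 - targetAlpha) * CGFloat(t)

            // Replace the material with one whose tint carries the new alpha.
            entity.model?.materials = [
                SimpleMaterial(color: UIColor.white.withAlphaComponent(alpha),
                               isMetallic: false)
            ]
            if t >= 1 { self.subscription?.cancel() }
        }
    }
}
```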

How to use Raycast methods in RealityKit?

廉价感情. submitted on 2020-02-23 05:23:53
Question: There are three methods for detecting intersections in the RealityKit framework, but I don't know how to use them in my project.
1. func raycast(origin: SIMD3<Float>, direction: SIMD3<Float>, length: Float, query: CollisionCastQueryType, mask: CollisionGroup, relativeTo: Entity?) -> [CollisionCastHit]
2. func raycast(from: SIMD3<Float>, to: SIMD3<Float>, query: CollisionCastQueryType, mask: CollisionGroup, relativeTo: Entity?) -> [CollisionCastHit]
3. func convexCast(convexShape: ShapeResource,
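A minimal sketch of the first variant: these methods live on the scene (arView.scene), and they only report entities that have a CollisionComponent (for example via generateCollisionShapes). The origin, direction and length used here are illustrative.

```swift
import RealityKit

func castRay(in arView: ARView) {
    // Cast a 5 m ray from the world origin along -Z.
    let hits = arView.scene.raycast(
        origin: SIMD3<Float>(0, 0, 0),
        direction: SIMD3<Float>(0, 0, -1),
        length: 5,
        query: .nearest,
        mask: .all,
        relativeTo: nil)

    for hit in hits {
        // CollisionCastHit exposes the entity hit, the hit position,
        // and the distance along the ray.
        print(hit.entity.name, hit.position, hit.distance)
    }
}
```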

How to load SCN or glTF model at runtime in ARKit app?

情到浓时终转凉″ submitted on 2020-02-23 04:00:43
Question: What is the best way to load a 3D model from a URL in an iOS app at runtime? I have tried this with an .scn and .glTF model importer. I am using this framework: https://github.com/prolificinteractive/SamMitiAR-iOS I load the model like this: let virtualObjectGLTFNode = SamMitiVirtualObject(gltfUrl: URL(string: "https://raw.githubusercontent.com/KhronosGroup/glTF-Sample-Models/master/2.0/Duck/glTF-Embedded/Duck.gltf")!, allowedAlignments: [.horizontal]) virtualObjectGLTFNode.name = "Duck"
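A hedged sketch of an alternative approach (not the SamMitiAR API): SceneKit does not load glTF natively, which is why a third-party importer is used above, but for SceneKit-readable formats (.scn, .usdz) you can download the file to disk first and then load it, since SCNScene(url:) expects a local file URL. Names and URLs here are illustrative.

```swift
import SceneKit

func loadRemoteModel(from remoteURL: URL,
                     completion: @escaping (SCNNode?) -> Void) {
    URLSession.shared.downloadTask(with: remoteURL) { tempURL, _, error in
        guard let tempURL = tempURL, error == nil else {
            completion(nil); return
        }
        // Give the temp file its original name/extension so SceneKit
        // recognises the format.
        let localURL = FileManager.default.temporaryDirectory
            .appendingPathComponent(remoteURL.lastPathComponent)
        try? FileManager.default.removeItem(at: localURL)
        try? FileManager.default.moveItem(at: tempURL, to: localURL)

        let scene = try? SCNScene(url: localURL, options: nil)
        DispatchQueue.main.async {
            completion(scene?.rootNode)
        }
    }.resume()
}
```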

Deferred shadow doesn't work when detecting multiple transparent planes

↘锁芯ラ submitted on 2020-02-22 07:46:44
Question: In my code, I detect a plane and show a shadow for the object above the plane. If there is one plane, it works fine, but if multiple planes are detected, a redundant shadow shows up. As the picture shows, on plane #1 the shadow is right, but when I add another plane #2, plane #2 gets the wrong shadow. Even if I remove the airplane, the shadow on plane #1 disappears but the shadow on plane #2 is still there. I don't want to remove plane #2, but how do I remove the wrong shadow on
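A hedged workaround sketch, an assumption rather than the asker's deferred-shadow setup: restrict which nodes the shadow-casting light affects by matching category bit masks, so that only one detected plane acts as the shadow catcher and later planes never receive the shadow.

```swift
import SceneKit

// Illustrative category for "things this light illuminates".
let shadowCatcherCategory: Int = 1 << 2

func configureShadowLight(_ lightNode: SCNNode) {
    let light = SCNLight()
    light.type = .directional
    light.castsShadow = true
    light.shadowMode = .deferred
    // Only nodes whose categoryBitMask overlaps this mask are lit by
    // this light, and therefore only they receive its shadows.
    light.categoryBitMask = shadowCatcherCategory
    lightNode.light = light
}

// Opt only the first detected plane (and the shadow-casting model itself)
// into the category; additional planes keep the default mask and stay
// free of the redundant shadow.
func markAsShadowCatcher(_ node: SCNNode) {
    node.categoryBitMask = shadowCatcherCategory
}
```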

Can I track more than 4 images at a time with ARKit?

℡╲_俬逩灬. submitted on 2020-02-14 13:04:03
Question: Out of the box it's pretty clear ARKit doesn't allow tracking more than 4 images at once. (You can "track" more markers than that, but only 4 will function at a time.) See this question for more details on that. However, I'm wondering if there is a possible work-around, something like adding and removing anchors on a timer, or getting the position information and then displaying the corresponding models without ARKit, etc. My knowledge of Swift is fairly limited, so I haven't had much
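A minimal sketch showing where the limit lives, assuming the reference images sit in an AR resource group named "AR Markers" (that name is an assumption): ARKit exposes maximumNumberOfTrackedImages, but simultaneous tracking is capped at 4, which is the constraint the question describes.

```swift
import ARKit

func runImageTracking(on sceneView: ARSCNView) {
    guard let referenceImages = ARReferenceImage.referenceImages(
        inGroupNamed: "AR Markers", bundle: nil) else { return }

    let configuration = ARImageTrackingConfiguration()
    configuration.trackingImages = referenceImages
    configuration.maximumNumberOfTrackedImages = 4   // effective ceiling

    sceneView.session.run(configuration)
    // A possible workaround, as the question suggests, is to rotate which
    // anchors are treated as "live" on a timer, or to reuse an image
    // anchor's last known transform once it stops being tracked.
}
```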