augmented-reality

Android simple augmented reality with GPS

非 Y 不嫁゛ submitted on 2019-12-11 00:49:18
Question: I want to develop a simple AR Android app. I was able to find code to get the azimuth, pitch, and roll, and I think I got that right. What I can't find is how to display an image on top of the camera preview according to a GPS location. I have my own (latitude, longitude) coordinates and a set of other (latitude, longitude) coordinates, and I want to display markers at those coordinates when the user points the camera at them. How do I combine the coordinates and azimuth…
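A minimal sketch of the usual approach (an editor's illustration, not taken from the question): compute the bearing from the device to each target with Location.bearingTo(), compare it with the device azimuth, and map the difference onto the screen width. The class name and the field-of-view constant below are assumptions.

import android.location.Location;

public final class MarkerProjector {

    // Horizontal camera field of view in degrees; device dependent (assumed value).
    private static final float FOV_DEG = 60f;

    // Returns the marker's horizontal position as a fraction of the screen width
    // (0 = left edge, 1 = right edge), or -1 if the target is outside the field of view.
    // azimuthDeg is the device heading in degrees (0 = north), already converted
    // from the sensor's radians and corrected for declination if needed.
    public static float screenX(Location device, Location target, float azimuthDeg) {
        float bearing = device.bearingTo(target);       // degrees east of true north
        float delta = normalize(bearing - azimuthDeg);  // how far off-center the target is
        if (Math.abs(delta) > FOV_DEG / 2f) {
            return -1f;                                 // not visible at this heading
        }
        return 0.5f + delta / FOV_DEG;                  // map [-FOV/2, +FOV/2] to [0, 1]
    }

    // Wraps an angle into the range [-180, 180).
    private static float normalize(float deg) {
        float a = deg % 360f;
        if (a >= 180f) a -= 360f;
        if (a < -180f) a += 360f;
        return a;
    }
}

Vertical placement works the same way with the pitch angle and the vertical field of view.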

Replace node renderable ( same rotation, position and scale ) with another node renderable in Sceneform sdk

不打扰是莪最后的温柔 submitted on 2019-12-10 23:33:47
Question: I am new to the Sceneform SDK for Android. I have added one TransformableNode, then applied some rotation and scaling to it and also changed its position. Now, on a button click, I need to place a second node with the same rotation, scaling, and position. What I did is:

Node nodeTwo = new Node(); // second node
nodeTwo.setLocalPosition(nodeOne);
nodeTwo.setLocalRotation(nodeOne);
nodeTwo.setLocalScale(nodeOne);
nodeTwo.setRenderable(renderable);

I have also tried with setWorldPosition()…
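A minimal corrected sketch (an editor's illustration, not the asker's code): the Sceneform setters take Vector3 / Quaternion values rather than a Node, so the first node's transform has to be copied component by component; the method name and its parameters are assumed.

import com.google.ar.sceneform.Node;
import com.google.ar.sceneform.rendering.Renderable;

// Places a new node at the same local position, rotation and scale as nodeOne
// and gives it the replacement renderable.
void replaceRenderable(Node nodeOne, Renderable newRenderable) {
    Node nodeTwo = new Node();
    nodeTwo.setParent(nodeOne.getParent());                // same parent, so local space matches
    nodeTwo.setLocalPosition(nodeOne.getLocalPosition());
    nodeTwo.setLocalRotation(nodeOne.getLocalRotation());
    nodeTwo.setLocalScale(nodeOne.getLocalScale());
    nodeTwo.setRenderable(newRenderable);
    nodeOne.setRenderable(null);                           // hide the old renderable
}

If only the renderable needs to change, calling nodeOne.setRenderable(newRenderable) directly avoids creating a second node at all.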

Using ARKit to capture high quality photos

六月ゝ 毕业季﹏ submitted on 2019-12-10 22:28:03
Question: I am interested in using ARKit's ability to track the phone's position to automatically take photos with the camera. My initial investigation led me to understand that while ARKit is using the camera, it is not possible to get high-quality images through the standard AVFoundation methods (because the camera is already in use). I understand I can use sceneView.snapshot(), but the best quality that provides is 1080p, which isn't high enough for my application. My question is: are…
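A hedged sketch of one commonly suggested workaround (not a confirmed answer): read the raw capturedImage pixel buffer from the current ARFrame instead of calling sceneView.snapshot(), after selecting the largest video format the device supports (available from iOS 11.3). The function names here are illustrative.

import ARKit
import CoreImage
import UIKit

// Runs the session with the highest-resolution video format the device offers,
// so that ARFrame.capturedImage is as large as ARKit allows.
func runWithLargestVideoFormat(session: ARSession) {
    let configuration = ARWorldTrackingConfiguration()
    if let best = ARWorldTrackingConfiguration.supportedVideoFormats
        .max(by: { $0.imageResolution.width < $1.imageResolution.width }) {
        configuration.videoFormat = best
    }
    session.run(configuration)
}

// Converts the current frame's capturedImage (a CVPixelBuffer) into a UIImage.
// Orientation handling is omitted for brevity.
func captureStill(from session: ARSession) -> UIImage? {
    guard let pixelBuffer = session.currentFrame?.capturedImage else { return nil }
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    guard let cgImage = CIContext().createCGImage(ciImage, from: ciImage.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}

The result is still limited to the AR video format's resolution, which is below what AVCapturePhotoOutput can deliver, so this only helps if that resolution is acceptable.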

Render 3D objects into the camera view

一笑奈何 submitted on 2019-12-10 17:49:02
Question: I tried to develop a mobile Cardboard application that renders 3D objects into a camera view (a kind of AR). I used this project and tried to render a simple cube over the camera image: https://github.com/Sveder/CardboardPassthrough/ I couldn't get it working; the background is always black, or the app crashes. I would be very grateful for any help or suggestions. Thanks. That's what I have: the original CardboardPassthrough. Here is the working code with the cubes: import android.content.Context; import…
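For reference, a minimal sketch of the camera-passthrough plumbing that is typically missing when the background stays black (this is an assumption about the cause, not taken from the linked project): the preview has to stream into a GL_TEXTURE_EXTERNAL_OES texture through a SurfaceTexture, and updateTexImage() has to run on the GL thread before every draw. Class and method names are illustrative.

import android.graphics.SurfaceTexture;
import android.hardware.Camera;
import android.opengl.GLES11Ext;
import android.opengl.GLES20;

public class CameraPassthrough {
    private int textureId;
    private SurfaceTexture surfaceTexture;
    private Camera camera;

    // Call on the GL thread once the GL context exists (e.g. in onSurfaceCreated).
    public void start() throws java.io.IOException {
        int[] textures = new int[1];
        GLES20.glGenTextures(1, textures, 0);
        textureId = textures[0];
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, textureId);
        GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
                GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
                GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

        surfaceTexture = new SurfaceTexture(textureId);
        camera = Camera.open();
        camera.setPreviewTexture(surfaceTexture);
        camera.startPreview();
    }

    // Call at the start of every frame, before drawing the full-screen quad that
    // samples the texture with samplerExternalOES in the fragment shader.
    public void onNewFrame() {
        surfaceTexture.updateTexImage();
    }
}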

Only able to detect and track up to 4 images at a time with ARKit 3.0

这一生的挚爱 submitted on 2019-12-10 13:27:22
Question: Using the code below, I'm only able to detect and track up to 4 images at any one time with ARKit.

ARImageTrackingConfiguration *configuration = [ARImageTrackingConfiguration new];
configuration.trackingImages = [ARReferenceImage referenceImagesInGroupNamed:@"AR Resources" bundle:nil];
configuration.maximumNumberOfTrackedImages = 100;
[self.sceneView.session runWithConfiguration:configuration];

Is anyone able to confirm what I'm seeing? I need to be able to track a larger number of…
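A small diagnostic sketch (not a fix, and the four-image ceiling is not confirmed here): counting the image anchors that report isTracked in each frame shows whether the cap applies to detection or to simultaneous tracking.

import ARKit

// ARSessionDelegate callback: logs how many reference images are actively
// tracked in the current frame.
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    let tracked = frame.anchors
        .compactMap { $0 as? ARImageAnchor }
        .filter { $0.isTracked }
    print("Tracked image anchors this frame: \(tracked.count)")
}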

Run and Pause an ARSession in a specified period of time

|▌冷眼眸甩不掉的悲伤 submitted on 2019-12-10 12:27:52
Question: I'm developing an ARKit / Vision iOS app with gesture recognition. My app has a simple UI containing a single UIView; there is no ARSCNView / ARSKView at all. I'm putting a sequence of captured ARFrames into a CVPixelBuffer, which I then use for VNRecognizedObjectObservation. I don't need any tracking data from the session, just currentFrame.capturedImage as a CVPixelBuffer, and I need to capture ARFrames at 30 fps; 60 fps is an excessive frame rate. The preferredFramesPerSecond instance property is…
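A minimal sketch of one way to get an effective 30 fps without a renderer (an assumption, not the asker's code): let the session run at its native rate and drop frames by timestamp in the ARSessionDelegate. Alternatively, supportedVideoFormats can be searched for a format whose framesPerSecond is 30, if the device offers one.

import ARKit

final class FrameThrottler: NSObject, ARSessionDelegate {
    private var lastProcessed: TimeInterval = 0
    private let minimumInterval: TimeInterval = 1.0 / 30.0   // target 30 fps

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // Skip frames that arrive sooner than ~33 ms after the last processed one.
        guard frame.timestamp - lastProcessed >= minimumInterval else { return }
        lastProcessed = frame.timestamp

        let pixelBuffer: CVPixelBuffer = frame.capturedImage
        // Hand pixelBuffer to the Vision request here (VNImageRequestHandler etc.).
        _ = pixelBuffer
    }
}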

Augmented reality like Zookazam

梦想与她 submitted on 2019-12-10 12:25:20
Question: What algorithms are used for augmented reality like Zookazam? I think it analyzes the image and finds planes by contrast, but I don't know how. What topics should I read about before starting on an app like this? Answer 1: [Prologue] This is an extremely broad topic and mostly off topic in its current state. I re-edited your question, but to make it answerable within the rules/possibilities of this site you should specify more closely what your augmented reality should do: adding 2D/3D objects with…

3D object recognition for an AR Android app

大兔子大兔子 submitted on 2019-12-10 12:19:21
Question: I'm trying to develop an AR Android application. It should detect and recognize the object captured by the camera; I'm using OpenCV for this purpose, but I'm not very familiar with object recognition for mobile devices in the AR field. I have two questions: 1. Which algorithm is better (in terms of precision and speed): SIFT, SURF, FAST, ORB, or something else? 2. I wonder whether the process of detecting and tracking would be something like this: take a camera frame, detect its key points…
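As an illustrative sketch of the detect-and-describe step from question 2 (class and variable names are assumptions, not from the question): ORB with a Hamming-distance brute-force matcher is a common patent-free, mobile-friendly starting point; the resulting matches can then feed findHomography or solvePnP for pose estimation.

import org.opencv.core.Mat;
import org.opencv.core.MatOfDMatch;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.features2d.DescriptorMatcher;
import org.opencv.features2d.ORB;

public class OrbMatcher {
    private final ORB orb = ORB.create();
    private final DescriptorMatcher matcher =
            DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);

    // cameraFrame and referenceObject are assumed to be grayscale Mats.
    public MatOfDMatch match(Mat cameraFrame, Mat referenceObject) {
        MatOfKeyPoint frameKeypoints = new MatOfKeyPoint();
        MatOfKeyPoint refKeypoints = new MatOfKeyPoint();
        Mat frameDescriptors = new Mat();
        Mat refDescriptors = new Mat();

        // Detect keypoints and compute binary descriptors for both images.
        orb.detectAndCompute(cameraFrame, new Mat(), frameKeypoints, frameDescriptors);
        orb.detectAndCompute(referenceObject, new Mat(), refKeypoints, refDescriptors);

        MatOfDMatch matches = new MatOfDMatch();
        matcher.match(frameDescriptors, refDescriptors, matches);
        return matches;
    }
}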

Set the text of an Entity programmatically in Reality Composer - iOS 13

旧巷老猫 submitted on 2019-12-10 11:09:16
Question: In my iOS app I want to introduce an AR part using the new Reality Composer. In my project I load a scene with this code:

let arView = ARView.init(frame: frame)
// Configure the AR session for horizontal plane tracking.
let arConfiguration = ARWorldTrackingConfiguration()
arConfiguration.planeDetection = .horizontal
arView.session.run(arConfiguration)
arView.session.delegate = self
self.view.addSubview(arView)
Experience.loadSceneAsync { [weak self] scene, error in
    print("Error \(String…
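A hedged sketch of the usual way to change text at runtime (the entity name "textEntity" is hypothetical, not taken from the question): find the placeholder entity defined in the Reality Composer scene and attach a ModelEntity whose mesh is generated from the new string.

import RealityKit
import UIKit

// Replaces the content of a named placeholder entity with freshly generated text.
func setText(_ text: String, in scene: Entity) {
    guard let placeholder = scene.findEntity(named: "textEntity") else { return }

    let mesh = MeshResource.generateText(
        text,
        extrusionDepth: 0.01,
        font: .systemFont(ofSize: 0.1),
        containerFrame: .zero,
        alignment: .center,
        lineBreakMode: .byWordWrapping)
    let material = SimpleMaterial(color: .white, isMetallic: false)
    let textModel = ModelEntity(mesh: mesh, materials: [material])

    // Drop whatever text was baked in by Reality Composer, then attach the new model.
    for child in Array(placeholder.children) {
        child.removeFromParent()
    }
    placeholder.addChild(textModel)
}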

How to transform a 3D model for Augmented Reality application using OpenCV Viz and ARUCO

笑着哭i submitted on 2019-12-10 10:17:26
Question: I'm developing a simple marker-based augmented reality application with OpenCV Viz and ARUCO. I just want to visualize a 3D object (in PLY format) on a marker. I can run marker detection and pose estimation (returning rotation and translation vectors) with ARUCO without a problem, and I can visualize any 3D object (PLY format) and camera frames in the Viz window. However, I'm stuck on using the rotation and translation vector outputs from ARUCO to localize the 3D model on the marker. I'm creating an…
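A minimal sketch of the step in question (assuming the PLY mesh widget was registered with showWidget under the id "model", which is illustrative): wrap ARUCO's rvec/tvec in a cv::Affine3d and hand it to Viz3d::setWidgetPose each frame, keeping the Viz camera at the OpenCV camera origin so the marker pose maps directly.

#include <opencv2/core/affine.hpp>
#include <opencv2/viz.hpp>

// rvec/tvec are the per-marker outputs of cv::aruco::estimatePoseSingleMarkers.
void updateModelPose(cv::viz::Viz3d &window,
                     const cv::Vec3d &rvec, const cv::Vec3d &tvec)
{
    // Affine3d accepts a Rodrigues rotation vector plus a translation directly.
    cv::Affine3d markerPose(rvec, tvec);

    // "model" is the id the PLY mesh widget was added under via showWidget().
    window.setWidgetPose("model", markerPose);
}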