augmented-reality

How to use environment map in ARKit?

[亡魂溺海] Submitted on 2019-12-04 11:04:07
ARKit 2.0 added a new class named AREnvironmentProbeAnchor. Reading its documentation, it seems that ARKit can automatically collect an environment texture (a cube map?). I believe we can now create virtual objects that reflect the real environment, but I am still not clear how this works, in particular how the environment texture is generated. Does anyone have simple sample code demonstrating this cool feature? It's pretty simple to implement environment texturing in your AR project: set the environmentTexturing property on your tracking configuration to automatic (ARKit takes the video feed …
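A minimal sketch of that answer's suggestion, assuming an ARKit 2 / SceneKit project with an ARSCNView named sceneView (the sphere and its material values are just an illustrative reflective object):

    import ARKit
    import SceneKit

    // Turn on automatic environment texturing (ARKit 2.0+).
    let configuration = ARWorldTrackingConfiguration()
    configuration.environmentTexturing = .automatic

    // A mirror-like sphere picks up the generated environment texture
    // through its physically based material.
    let sphere = SCNSphere(radius: 0.1)
    sphere.firstMaterial?.lightingModel = .physicallyBased
    sphere.firstMaterial?.metalness.contents = 1.0
    sphere.firstMaterial?.roughness.contents = 0.0

    sceneView.scene.rootNode.addChildNode(SCNNode(geometry: sphere))
    sceneView.session.run(configuration)

ARKit then creates AREnvironmentProbeAnchor instances itself and feeds the captured cube map into the scene's reflective materials.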

Compare device 3D orientation with the sun position

假如想象 Submitted on 2019-12-04 10:24:43
I am working on an app that requires the user to aim their iPhone at the sun in order to trigger a special event. I can retrieve the device's 3D orientation quaternion from the gyroscope via the CoreMotion framework, and from this I can get the yaw, roll and pitch angles. I can also compute the sun's azimuth and zenith angle from the current date and time (GMT) and the latitude and longitude. What I am trying to figure out next is how to compare these two sets of values (phone orientation and sun position) to accurately detect the alignment of the device with the sun. Any ideas on how to achieve this?
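One way to do the comparison is to turn both orientations into unit direction vectors and measure the angle between them. A sketch, assuming the sun's azimuth is measured clockwise from true north and the zenith angle from the vertical; the CMRotationMatrix row/column convention used for the camera direction is an assumption worth verifying on a device:

    import Foundation
    import CoreMotion
    import simd

    let motionManager = CMMotionManager()

    // Sun direction as a unit vector in the reference frame used by
    // .xTrueNorthZVertical (x = true north, y = west, z = up).
    func sunDirection(azimuthDeg: Double, zenithDeg: Double) -> simd_double3 {
        let az = azimuthDeg * .pi / 180
        let zen = zenithDeg * .pi / 180
        return simd_double3(sin(zen) * cos(az),    // north component
                            -sin(zen) * sin(az),   // west component (negative east)
                            cos(zen))              // up component
    }

    func startSunAlignmentUpdates(sunAzimuthDeg: Double, sunZenithDeg: Double) {
        let sun = sunDirection(azimuthDeg: sunAzimuthDeg, zenithDeg: sunZenithDeg)
        motionManager.startDeviceMotionUpdates(using: .xTrueNorthZVertical,
                                               to: .main) { motion, _ in
            guard let m = motion?.attitude.rotationMatrix else { return }
            // Direction the back camera looks along (the device's -z axis),
            // expressed in the reference frame. Assumed convention: if the
            // result behaves mirrored, try (-m.m31, -m.m32, -m.m33) instead.
            let look = simd_normalize(simd_double3(-m.m13, -m.m23, -m.m33))
            let cosAngle = max(-1.0, min(1.0, simd_dot(look, sun)))
            let angle = acos(cosAngle)                 // radians between the two
            if angle < 5 * .pi / 180 {                 // within ~5 degrees
                // trigger the special event here
            }
        }
    }

Working with vectors and a dot product avoids comparing yaw/pitch/roll angle by angle, which breaks down near the poles of the rotation.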

Detecting if a tap event with ARCore hits an already added 3D object

廉价感情. Submitted on 2019-12-04 07:22:18
I am following the ARCore sample (https://github.com/google-ar/arcore-android-sdk) and I am trying to remove a 3D object (andy) that has already been added. How can I detect whether a tap event with ARCore hits an already added 3D object? Using a listener is a common approach in this situation:

    private Node getModel() {
        Node node = new Node();
        node.setRenderable(modelRenderable);
        Context cont = this;
        node.setOnTapListener((v, event) -> {
            // Toast notification confirming that the tap hit this node
            Toast.makeText(cont, "Model was touched", Toast.LENGTH_LONG).show();
        });
        return node;
    }

I had the same question recently and tried two solutions: 1. …

How to augment a cube onto a specific position using a 3x3 homography

十年热恋 Submitted on 2019-12-04 06:11:18
I am able to track 4 coordinates over different images of the same scene by calculating a 3x3 homography between them, which lets me overlay other 2D images onto these coordinates. I am wondering whether I could use this homography to augment a cube onto this position with OpenGL instead. I think the 3x3 matrix alone doesn't give enough information, but if I know the camera calibration matrix, can I get enough to build a model-view matrix to do this? Thank you for any help you can give. If you have the camera calibration matrix (intrinsic parameters) and the homography, since the homography (between …
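For reference, the usual decomposition looks roughly like this (a sketch, not the canonical implementation: it assumes the homography maps points on the plane Z = 0 of the cube's coordinate system to pixel coordinates, and the recovered rotation should ideally be re-orthonormalized, e.g. with an SVD):

    import simd

    // Recover a rigid pose [R|t] from a plane-to-image homography H and the
    // camera intrinsic matrix K (both 3x3).
    func poseFromHomography(H: simd_double3x3, K: simd_double3x3)
            -> (R: simd_double3x3, t: simd_double3) {
        let B = K.inverse * H
        let b1 = B.columns.0, b2 = B.columns.1, b3 = B.columns.2
        // Scale so the first two rotation columns are (approximately) unit length.
        let lambda = 2.0 / (simd_length(b1) + simd_length(b2))
        let r1 = simd_normalize(lambda * b1)
        let r2 = simd_normalize(lambda * b2)
        let r3 = simd_cross(r1, r2)          // third axis completes the rotation
        let R = simd_double3x3(columns: (r1, r2, r3))
        let t = lambda * b3                  // translation of the plane origin
        return (R, t)
    }

The model-view matrix for OpenGL is then built from [R | t], with the usual axis flip from the computer-vision camera convention (Z forward) to OpenGL's -Z forward convention, and the cube's vertices are drawn relative to the tracked plane.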

Starting an augmented reality (AR) app like Panasonic VIERA AR Setup Simulator

拥有回忆 Submitted on 2019-12-03 21:49:41
I'm looking to create an iOS app similar to the Panasonic VIERA AR Setup Simulator, but with other products. I was trying to figure out conceptually how to do this and where a good place to start would be. Any ideas or suggestions would be greatly appreciated. Panasonic VIERA AR Setup Simulator: http://itunes.apple.com/us/app/panasonic-viera-ar-setup-simulator/id405903358?mt=8 AR apps like the one you pointed to tend to require you to print things out. They tend to use the Qualcomm AR SDK, which is probably the most advanced AR engine around. The only problem is that the Qualcomm AR SDK is still in beta …

Installing Vuforia in Android Studio

瘦欲@ Submitted on 2019-12-03 21:19:58
Question: Can anyone give me some instructions on how to install Vuforia in Android Studio? I'm making a new app and I need to use augmented reality with Vuforia. Hope you can help me! Thanks a lot! Answer 1: You need to follow these steps: Read our Getting Started Guide for instructions on setting up the Java SDK, Android SDK and NDK: https://developer.vuforia.com/downloads/sdk Make sure you have installed the latest available version of Android Studio from: http://developer.android.com …

iPhone SDK - get/calculate camera field of view (FOV) (Augmented Reality)

假如想象 Submitted on 2019-12-03 19:33:00
Question: Is there a way to find out or calculate the field of view (FOV) of an iPhone camera through calls to the APIs? Or is it something you have to measure physically and find out for yourself? If it cannot be fetched or calculated with the APIs, but instead has to be hard-coded into an app, then what's the best way to find out what kind of device an app is running on? Different devices have different FOVs (the iPhone 4 has a larger FOV than previous versions). Also, how big are the FOVs of each device, …
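On later iOS versions the value no longer needs to be hard-coded: the active capture format exposes its horizontal field of view. A sketch (videoFieldOfView is reported in degrees; the vertical FOV then follows from the format's aspect ratio):

    import AVFoundation

    // Read the horizontal field of view of the default back camera at runtime.
    if let camera = AVCaptureDevice.default(.builtInWideAngleCamera,
                                            for: .video,
                                            position: .back) {
        let horizontalFOV = camera.activeFormat.videoFieldOfView   // degrees
        print("Horizontal FOV: \(horizontalFOV) degrees")
        // Vertical FOV ≈ 2 * atan(tan(hFOV / 2) * height / width),
        // using the active format's video dimensions.
    }

On devices too old for this API, hard-coded per-model values (keyed off the device model identifier) remain the fallback.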

How to draw a Perspective-Correct Grid in 2D

谁都会走 Submitted on 2019-12-03 18:56:43
Question: I have an application that defines a real-world rectangle on top of an image/photograph; in 2D it may not appear as a rectangle because you are looking at it from an angle. The problem is that grid lines need to be drawn on the rectangle: for example, if it is 3x5, I need to draw 2 lines from side 1 to side 3, and 4 lines from side 2 to side 4. As of right now I am breaking up each line into equidistant parts to get the start and end points of all the grid lines. However the …
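Interpolating equidistantly in screen space ignores foreshortening. One fix is to compute a 3x3 homography from the rectangle's own (u, v) coordinates to the four image corners (as in the homography question above) and push every grid endpoint through it, perspective divide included. A minimal sketch, assuming such a homography H is already available:

    import simd
    import CoreGraphics

    // Project a point (u, v) on the rectangle's plane into the image using a
    // 3x3 homography H, including the perspective divide.
    func project(_ u: Double, _ v: Double, through H: simd_double3x3) -> CGPoint {
        let p = H * simd_double3(u, v, 1)
        return CGPoint(x: p.x / p.z, y: p.y / p.z)
    }

    // Endpoints of the interior grid lines for a cols x rows grid on the unit square.
    func gridLines(cols: Int, rows: Int, H: simd_double3x3) -> [(CGPoint, CGPoint)] {
        var lines: [(CGPoint, CGPoint)] = []
        for i in 1..<cols {                        // lines across one direction
            let u = Double(i) / Double(cols)
            lines.append((project(u, 0, through: H), project(u, 1, through: H)))
        }
        for j in 1..<rows {                        // lines across the other direction
            let v = Double(j) / Double(rows)
            lines.append((project(0, v, through: H), project(1, v, through: H)))
        }
        return lines
    }

Each grid line is still drawn straight (a straight line on the plane projects to a straight line in the image); only the spacing of the lines changes, which is what produces the perspective-correct look.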

iPhone 6 camera calibration for OpenCV

人走茶凉 Submitted on 2019-12-03 16:38:29
I'm developing an iOS augmented reality application using OpenCV. I'm having issues creating the camera projection matrix that would allow the OpenGL overlay to map directly on top of the marker. I feel this is because my iPhone 6 camera is not correctly calibrated for the application. I know there is OpenCV code to calibrate webcams etc. using a chessboard, but I can't find a way to calibrate my embedded iPhone camera. Is there a way? Or are there known estimated values for the iPhone 6, including the focal length in x and y, the principal point in x and y, and the distortion coefficient matrix? Any …
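For what it's worth, on iOS 11+ devices that support it, the focal length and principal point (in pixels) can be read per frame instead of being estimated; the distortion coefficients are not delivered this way, so a chessboard calibration is still needed for those. A sketch using AVFoundation's intrinsic-matrix delivery (not specific to the iPhone 6):

    import AVFoundation
    import simd

    final class IntrinsicsReader: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {

        // Ask AVFoundation to attach the camera intrinsic matrix to every frame.
        func configure(output: AVCaptureVideoDataOutput) {
            if let connection = output.connection(with: .video),
               connection.isCameraIntrinsicMatrixDeliverySupported {
                connection.isCameraIntrinsicMatrixDeliveryEnabled = true   // iOS 11+
            }
        }

        func captureOutput(_ output: AVCaptureOutput,
                           didOutput sampleBuffer: CMSampleBuffer,
                           from connection: AVCaptureConnection) {
            // The attachment is a 3x3 matrix with fx, fy on the diagonal and the
            // principal point (cx, cy) in the last column, all in pixels.
            if let data = CMGetAttachment(sampleBuffer,
                                          key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix,
                                          attachmentModeOut: nil) as? Data {
                let K = data.withUnsafeBytes { $0.load(as: matrix_float3x3.self) }
                print("fx=\(K.columns.0.x) fy=\(K.columns.1.y) cx=\(K.columns.2.x) cy=\(K.columns.2.y)")
            }
        }
    }

These values plug straight into OpenCV's camera matrix; alternatively, OpenCV's chessboard calibration can be run on still photos taken with the phone and the resulting cameraMatrix and distCoeffs hard-coded for that device model.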

Extrinsic Calibration With cv::SolvePnP

你说的曾经没有我的故事 Submitted on 2019-12-03 15:05:30
I'm currently trying to implement an alternative to webcam-based AR using an external tracking system. I have everything in my environment configured except for the extrinsic calibration. I decided to use cv::solvePnP(), as it supposedly does pretty much exactly what I want, but after two weeks I am pulling my hair out trying to get it to work. A diagram below shows my configuration: c1 is my camera, c2 is the optical tracker I'm using, M is the tracked marker attached to the camera, and ch is the checkerboard. As it stands, I pass in my image pixel coordinates acquired with cv:…