augmented-reality

ARKit 2.0 – Scanning 3D Object and generating 3D Mesh from it

心不动则不痛 submitted on 2019-11-30 15:57:18
An iOS 12 application now allows us to create an ARReferenceObject and, using it, reliably recognize the position and orientation of a real-world object. We can also save the finished .arobject file. But: an ARReferenceObject contains only the spatial feature information needed for ARKit to recognize the real-world object; it is not a displayable 3D reconstruction of that object.

    sceneView.session.createReferenceObject(transform: simd_float4x4,
                                            center: simd_float3,
                                            extent: simd_float3) { (ARReferenceObject?, Error?) in
        // code
    }

    func export(to url: URL, previewImage: UIImage?) throws { }

Is …
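For context, a minimal sketch of how these two APIs combine during a scanning session — assuming an ARObjectScanningConfiguration is already running; the transform, center, and extent values below are illustrative placeholders, not values from the question:

    import ARKit

    // A minimal sketch, assuming the session is already running an
    // ARObjectScanningConfiguration and the user has boxed the object.
    func captureAndExport(from sceneView: ARSCNView) {
        sceneView.session.createReferenceObject(
            transform: matrix_identity_float4x4,   // object's local origin (placeholder)
            center: simd_float3(0, 0, 0),          // bounding-box center (placeholder)
            extent: simd_float3(0.2, 0.2, 0.2)     // bounding-box size in meters (placeholder)
        ) { referenceObject, error in
            guard let object = referenceObject else {
                print("Scan failed: \(String(describing: error))")
                return
            }
            let url = FileManager.default.temporaryDirectory
                .appendingPathComponent("scan.arobject")
            do {
                try object.export(to: url, previewImage: nil)
                print("Saved .arobject to \(url)")
            } catch {
                print("Export failed: \(error)")
            }
        }
    }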

Get screen coordinates by specific location and longitude (android)

旧城冷巷雨未停 submitted on 2019-11-30 13:48:25
I have an augmented-reality application in which I have stored information such as metro stations, gas stations, places of interest, etc., with the corresponding latitude and longitude. Now, according to the orientation of the device, I would like to show a marker for each site in the device's camera view, similar to Layar and Wikitude. I spent three days searching non-stop and have not found anyone who explains how to solve this problem. Since information on this topic is very sparse, and I recently solved this problem on the iPhone, I thought I would share my method for anyone that can make it …
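The core of the usual solution is plain spherical trigonometry plus a field-of-view mapping. A minimal, platform-neutral sketch (the function and parameter names such as horizontalFOV and screenWidth are illustrative, not from any particular SDK):

    import Foundation

    // Bearing from the device to the POI, in degrees clockwise from north.
    func bearing(lat1: Double, lon1: Double, lat2: Double, lon2: Double) -> Double {
        let p1 = lat1 * .pi / 180, p2 = lat2 * .pi / 180
        let dLon = (lon2 - lon1) * .pi / 180
        let y = sin(dLon) * cos(p2)
        let x = cos(p1) * sin(p2) - sin(p1) * cos(p2) * cos(dLon)
        return atan2(y, x) * 180 / .pi
    }

    // Horizontal screen position of the marker, or nil when off-screen.
    // Mapping the bearing offset linearly across the camera's field of
    // view is a common approximation for narrow FOVs.
    func screenX(poiBearing: Double, deviceHeading: Double,
                 horizontalFOV: Double, screenWidth: Double) -> Double? {
        var delta = poiBearing - deviceHeading
        while delta > 180 { delta -= 360 }      // normalize to [-180, 180]
        while delta < -180 { delta += 360 }
        guard abs(delta) <= horizontalFOV / 2 else { return nil }
        return screenWidth / 2 + delta / horizontalFOV * screenWidth
    }

The vertical coordinate follows the same pattern, using the device's pitch and the vertical field of view.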

What does the different columns in transform in ARKit represent?

落爺英雄遲暮 submitted on 2019-11-30 07:52:51
Question: An ARAnchor's transform is a 4×4 matrix whose last column holds the x, y, and z coordinates. I was wondering what the other (first) three columns represent.

Answer 1: If you're new to 3D, these transformation matrices will seem like magic. Basically, every "point" in ARKit space is represented by a 4×4 transform matrix. This matrix describes the distance from the ARKit origin (the point at which ARKit woke up to the world), commonly known as the translation, and the orientation of the device, aka pitch, …
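A small sketch that makes the column layout concrete, following simd's column-major convention (`anchorTransform` is a stand-in for a real anchor.transform):

    import simd

    // Stand-in for an actual anchor.transform.
    let anchorTransform = matrix_identity_float4x4

    // Columns 0-2 are the rotated local axes, i.e. the orientation;
    // column 3 is the translation from the ARKit world origin.
    let rightAxis = simd_make_float3(anchorTransform.columns.0) // local +X
    let upAxis    = simd_make_float3(anchorTransform.columns.1) // local +Y
    let backAxis  = simd_make_float3(anchorTransform.columns.2) // local +Z (a camera looks along -Z)
    let position  = simd_make_float3(anchorTransform.columns.3) // x, y, z translation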

Does ARKit 2.0 consider Lens Distortion in iPhone and iPad?

谁说我不能喝 submitted on 2019-11-30 07:25:50
ARKit 2.0 updates many intrinsic (and extrinsic) parameters of the ARCamera from frame to frame. I'd like to know whether it also takes radial lens distortion into consideration (as the AVCameraCalibrationData class, which ARKit doesn't use, does), and corrects the video frames' distortion appropriately (distort/undistort operations) for the rear iPhone and iPad cameras.

    var intrinsics: simd_float3x3 { get }

As we all know, radial lens distortion greatly affects 6-DoF pose-estimation accuracy when we place undistorted 3D objects in a real-world scene that is distorted by the lens.

    var lensDistortionLookupTable: …
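For reference, the per-frame pinhole parameters are easy to read out; a minimal sketch. Note that this 3×3 matrix (focal lengths and principal point) is all ARKit 2.0 exposes — there is no distortion lookup table comparable to AVCameraCalibrationData's:

    import ARKit

    // A minimal sketch: per-frame pinhole intrinsics from ARKit.
    func logIntrinsics(of frame: ARFrame) {
        let K = frame.camera.intrinsics              // 3x3, column-major
        let fx = K.columns.0.x, fy = K.columns.1.y   // focal lengths, pixels
        let cx = K.columns.2.x, cy = K.columns.2.y   // principal point
        print("f = (\(fx), \(fy)), c = (\(cx), \(cy))")
    }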

Understand coordinate spaces in ARKit

北慕城南 submitted on 2019-11-30 07:24:07
I've read all of Apple's guides about ARKit and watched a WWDC video, but I can't understand how the coordinate systems bound to the real world, to the device, and to the 3D scene connect to each other. I can add an object, for example an SCNPlane:

    let stripe = SCNPlane(width: 0.005, height: 0.1)
    let stripeNode = SCNNode(geometry: stripe)
    scene.rootNode.addChildNode(stripeNode)

This produces a white stripe that is oriented vertically, no matter how the device is oriented at that moment. That means the coordinate system is somehow bound to gravity! But if I try to print upAxis …
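The gravity binding observed here is a configuration choice, not an accident. A minimal sketch, reusing the question's sceneView; the alignment values are ARKit's actual options:

    import ARKit

    // worldAlignment decides how the session's world coordinate system is
    // anchored: .gravity (the default) points -Y along gravity;
    // .gravityAndHeading additionally aligns -Z with compass north;
    // .camera locks the world frame to the device's initial orientation.
    let configuration = ARWorldTrackingConfiguration()
    configuration.worldAlignment = .gravity
    sceneView.session.run(configuration)

Under .gravity, a node added at rootNode with an identity transform stands vertical regardless of how the device is tilted, which matches the white-stripe behavior described above.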

Calculating aspect ratio of Perspective Transform destination image

我只是一个虾纸丫 submitted on 2019-11-30 07:20:50
I've recently implemented OpenCV's perspective transform in my Android app. Almost everything works without issues, but one aspect needs much more work: I do not know how to compute the correct aspect ratio of the destination image of the perspective transform (it should not have to be set manually), so that the result keeps the aspect ratio of the real thing/image regardless of the camera angle. Note that the starting coordinates do not form a trapezoid; they form a general quadrangle. If I have a photograph of a book taken from approximately 45 …
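One published derivation that fits this exact problem (Zhang & He's whiteboard-scanning paper) recovers the width-to-height ratio of the original rectangle from its four projected corners m1…m4 (homogeneous image coordinates) and the camera intrinsic matrix A. A sketch of the key equations, under pinhole-camera assumptions — treat the corner ordering as something to verify against the paper:

    k_2 = \frac{(m_1 \times m_4) \cdot m_3}{(m_2 \times m_4) \cdot m_3}, \qquad
    k_3 = \frac{(m_1 \times m_4) \cdot m_2}{(m_3 \times m_4) \cdot m_2}

    n_2 = k_2 m_2 - m_1, \qquad n_3 = k_3 m_3 - m_1

    \frac{w}{h} = \sqrt{\frac{n_2^{\top} A^{-\top} A^{-1}\, n_2}{n_3^{\top} A^{-\top} A^{-1}\, n_3}}

Given w/h, the destination image can be sized as (w·s, h·s) for any convenient scale s before calling the perspective warp.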

Android: Problems calculating the Orientation of the Device

送分小仙女□ submitted on 2019-11-30 07:20:35
I'm trying to build a simple augmented-reality app, so I started working with sensor data. According to this thread (Android compass example) and this example (http://www.codingforandroid.com/2011/01/using-orientation-sensors-simple.html), the calculation of the orientation using Sensor.TYPE_ACCELEROMETER and Sensor.TYPE_MAGNETIC_FIELD doesn't really work, so I'm not able to get "good" values. The azimuth values don't make any sense at all: if I just move the phone upwards, the value changes extremely. Even if I just rotate the phone, the values don't represent the phone's orientation. Has …
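A platform-neutral sketch of the usual first remedy: low-pass filter the raw accelerometer and magnetometer samples before feeding them into the rotation-matrix and orientation computation. The smoothing factor here is an illustrative choice, not a value from the question:

    import Foundation

    // Exponential smoothing: blend each new sensor sample into the running
    // estimate, damping the jitter that makes the azimuth jump around.
    func lowPass(_ input: [Double], into previous: [Double],
                 alpha: Double = 0.15) -> [Double] {
        zip(previous, input).map { prev, new in prev + alpha * (new - prev) }
    }

    // Usage per sensor event, keeping one running array per sensor:
    // filteredAccel = lowPass(rawAccel, into: filteredAccel)
    // filteredMag   = lowPass(rawMag,   into: filteredMag)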

Camera pose estimation from homography or with solvePnP() function

流过昼夜 submitted on 2019-11-30 07:02:25
Question: I'm trying to build a static augmented-reality scene over a photo, with 4 defined correspondences between coplanar points on a plane and in the image. Here is a step-by-step flow: The user adds an image using the device's camera; let's assume it contains a rectangle captured with some perspective. The user defines the physical size of the rectangle, which lies in a horizontal plane (YOZ in terms of SceneKit); let's assume its center is the world's origin (0, 0, 0), so we can easily find (x, y, z) for each corner. The user …
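For the "from homography" branch of the question, the standard plane-to-image decomposition recovers the pose directly. A sketch under the usual pinhole assumptions, with H = [h_1 h_2 h_3] the homography and K the intrinsic matrix:

    \lambda = \frac{1}{\lVert K^{-1} h_1 \rVert}, \qquad
    r_1 = \lambda K^{-1} h_1, \quad
    r_2 = \lambda K^{-1} h_2, \quad
    r_3 = r_1 \times r_2, \quad
    t = \lambda K^{-1} h_3

Because of noise, [r_1 r_2 r_3] is only approximately a rotation and should be re-orthonormalized (e.g. via SVD). solvePnP() performs essentially this fit, plus iterative refinement, given the four 3D-2D correspondences.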

Overlay Image on moving object in Video (Argumented Reality / OpenCv)

我只是一个虾纸丫 submitted on 2019-11-30 05:46:01
Question: I am using FFmpeg to overlay an image/emoji on a video with this command:

    "-i " + inputfilePath + " -filter_complex " + "[0][1]overlay=enable='between(t," + startTime + "," + endTime + ")'[v1]" + " -map [v1] -map 0:a " + OutputfilePath;

(Note the mapped label must match the filter's output label, [v1].) But the above command only overlays the image on the video, where it stays still. Instagram and Snapchat have a new pin feature; I want exactly the same, e.g. blur on moving faces, or as in the videos below — here is the link. Is it possible via FFmpeg? I think someone with OpenCV or augmented reality …
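Partly. FFmpeg's overlay filter can move the overlay along a precomputed path, because its x and y parameters accept per-frame expressions of t; what it cannot do by itself is track a face — that path would have to come from something like OpenCV, which outputs the per-frame coordinates that drive the expression. A minimal sketch with an illustrative linear path (file names and the path itself are placeholders):

    ffmpeg -i input.mp4 -i emoji.png \
      -filter_complex "[0][1]overlay=x='100+40*t':y=200:enable='between(t,2,6)'[v]" \
      -map "[v]" -map 0:a? output.mp4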