augmented-reality

Augmented Faces API – how are facial landmarks generated?

守給你的承諾、 submitted on 2019-12-10 03:59:54
Question: I'm an IT student and would like to know (understand) more about the Augmented Faces API in ARCore. I just saw the ARCore v1.7 release and the new Augmented Faces API. I get the enormous potential of this API, but I haven't seen any questions or articles on the subject. So I've been asking myself questions, and here are some assumptions/questions that come to mind about this release. Assumption: the ARCore team is using machine learning (like Instagram and Snapchat) to generate landmarks all over …

Saving an Image to Photos Folder In hololens App

大憨熊 submitted on 2019-12-09 19:12:39
Question: I'm attempting to capture a photo inside my HoloLens app. It seems to be working, but it saves the image to an obscure place that I can't access or view. I want to save it to my Pictures library, as described here, I think. Or where should I save the image so that I can see it in my Photos on the HoloLens? filePath = C:/Data/Users/JanikJoe/AppData/Local/Packages/HoloToolkit-Unity_pzq3xp76mxafg/LocalState\CapturedImage10.02831_n.jpg filePath2 = C:/Data/Users/DefaultAccount/AppData/Local/DevelopmentFiles …

Compass direction is different depending on phone orientation

a 夏天 submitted on 2019-12-09 17:58:29
Question: My augmented reality app needs the compass bearing of the camera view, and there are plenty of examples of getting the direction from the SensorManager. However, I'm finding that the resulting value differs depending on the phone's orientation: landscape rotated to the right is about 10 degrees different from landscape rotated to the left (the difference between ROTATION_0 and ROTATION_180 is smaller, but still there). This difference is enough to ruin any AR effect. Is it something to do with calibration? (I'm …
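A common cause of this symptom: the sensor rotation matrix is expressed in the device's natural (portrait) coordinate system, so once the screen rotates, the axes no longer match the camera view. On Android the standard fix is to call SensorManager.remapCoordinateSystem for the current display rotation before SensorManager.getOrientation. Below is a minimal numpy sketch of the underlying math; the rotation matrix R is a hypothetical sensor reading, not real device output.

    import numpy as np

    # Axis permutations equivalent to remapCoordinateSystem: columns are the
    # remapped axes expressed in natural-device coordinates (e.g. for a
    # 90-degree screen rotation, new X = old Y and new Y = -old X).
    REMAP = {
        0:   np.eye(3),
        90:  np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]]),
        180: np.array([[-1., 0., 0.], [0., -1., 0.], [0., 0., 1.]]),
        270: np.array([[0., 1., 0.], [-1., 0., 0.], [0., 0., 1.]]),
    }

    def azimuth_deg(R, screen_rotation):
        """Compass azimuth from a world-from-device rotation matrix R,
        corrected for the current screen rotation (0/90/180/270)."""
        Rr = R @ REMAP[screen_rotation]          # remap the device axes first
        return np.degrees(np.arctan2(Rr[0, 1], Rr[1, 1])) % 360

    # Hypothetical reading: device lying flat, natural top pointing 30 degrees
    # east of north (columns of R are the device axes in world coordinates).
    t = np.radians(30)
    R = np.array([[np.cos(t),  np.sin(t), 0.],
                  [-np.sin(t), np.cos(t), 0.],
                  [0., 0., 1.]])
    print(azimuth_deg(R, 0))    # 30.0 with the screen in its natural rotation
    print(azimuth_deg(R, 90))   # 300.0 once the screen is rotated 90 degrees

If a discrepancy persists after remapping, a figure-eight recalibration of the magnetometer and low-pass filtering of the output are the usual next steps.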

iPhone cameraOverlay for use with augmented reality applications

耗尽温柔 submitted on 2019-12-09 09:43:49
Question: Does anyone know a way to take an image captured with the iPhone's camera, do some image processing (e.g. edge detection, skeletonization), and then overlay parts of the processed image on the original image (e.g. only the highlighted edges)? More generally, how do I create a UIImage with transparency (do I just scale the image and overlay it with an alpha value, and does UIImage support transparency the way GIFs do)? I'm thinking that you could combine a UIImagePickerController with a background …
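On the transparency point: UIImage does support a full alpha channel (e.g. when backed by PNG data), which is more flexible than GIF's single transparent palette index. The processing-and-compositing pipeline itself is platform-neutral; here is a minimal OpenCV/Python sketch of the idea the question describes (the file names are hypothetical, and on iOS the same compositing would be done with UIImage/Core Graphics instead):

    import cv2
    import numpy as np

    img = cv2.imread("input.jpg")             # hypothetical captured photo
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)         # the "processed" image

    # Build an RGBA overlay: opaque green where edges were found,
    # fully transparent everywhere else.
    overlay = np.zeros((*edges.shape, 4), dtype=np.uint8)
    overlay[edges > 0] = (0, 255, 0, 255)

    # Alpha-composite the overlay onto the original frame.
    alpha = overlay[:, :, 3:4].astype(np.float32) / 255.0
    out = (overlay[:, :, :3] * alpha + img * (1.0 - alpha)).astype(np.uint8)
    cv2.imwrite("composited.png", out)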

Find set of latitudes and longitudes using user's current latitude, longitude and direction of viewing an object

半腔热情 submitted on 2019-12-09 06:36:15
Question: I am building an Android application based on augmented reality. The main idea: when the user opens my application, the device's camera starts in preview mode by default. Based on the user's current GPS location and the direction in which the user/camera is facing, I want to calculate which set of latitudes and longitudes are in range. The following image explains my scenario well (image not included). I have a full set of latitudes and longitudes, drawn as black spots in the figure. Now suppose the user is at the …
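The standard approach: for each stored point, compute the great-circle bearing from the user's position and keep the points whose bearing falls within half the camera's horizontal field of view of the compass heading, optionally also within a maximum distance. A minimal sketch of that filter (the field-of-view, range, and coordinate values are hypothetical):

    import math

    EARTH_RADIUS_M = 6371000.0

    def bearing_deg(lat1, lon1, lat2, lon2):
        """Initial great-circle bearing from point 1 to point 2, degrees in [0, 360)."""
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dl = math.radians(lon2 - lon1)
        y = math.sin(dl) * math.cos(p2)
        x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
        return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

    def distance_m(lat1, lon1, lat2, lon2):
        """Haversine distance in metres."""
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = p2 - p1, math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2.0 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

    def points_in_view(user, heading_deg, points, fov_deg=60.0, max_range_m=1000.0):
        """Points whose bearing lies within +/- fov/2 of the heading and within range."""
        lat0, lon0 = user
        visible = []
        for lat, lon in points:
            # Wrap the bearing difference into [-180, 180] before comparing.
            diff = (bearing_deg(lat0, lon0, lat, lon) - heading_deg + 180.0) % 360.0 - 180.0
            if abs(diff) <= fov_deg / 2.0 and distance_m(lat0, lon0, lat, lon) <= max_range_m:
                visible.append((lat, lon))
        return visible

    # Hypothetical example: user near Pune, camera heading 45 degrees (north-east).
    print(points_in_view((18.520, 73.850), 45.0, [(18.525, 73.856), (18.515, 73.843)]))

The modulo trick in the bearing comparison matters: a naive subtraction breaks for headings near north, where bearings wrap from 359 back to 0.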

Markerless AR lib for iPhone

瘦欲@ submitted on 2019-12-09 00:23:36
Question: I'm searching for a functional markerless AR library for the iPhone (3GS and later, supporting at least iOS 4.3). I've already tested a large number of SDKs, including Qualcomm AR, Layar, and ARToolKit, but none of them satisfied my needs. To be more precise, I need neither a localization-based AR technology (Layar) nor a marker technology (ARToolKit). If possible, the library has to be free, as I don't have many financial resources. Answer 1: Qualcomm (QCAR) has recently released an iOS version into …

Occlusion of real-world objects using three.js

只谈情不闲聊 submitted on 2019-12-08 21:00:41
Question: I'm using three.js inside an experimental augmented-reality web browser. (The browser is called Argon. Essentially, Argon uses Qualcomm's Vuforia AR SDK to track images and objects in the phone camera. Argon sends the tracking information into JavaScript, where it uses transparent web pages with three.js to create 3D graphics on top of the phone's video feed.) My question is about three.js, however. The data Argon sends into the web page allows me to align the 3D camera with the physical phone …

How can I pick an item from a collectionView and add it to an SCNScene?

对着背影说爱祢 submitted on 2019-12-08 13:43:12
Question: I am working with SceneKit and ARKit. I have made a collectionView with an array of emojis. Now I want the user to be able to select an emoji from the collectionView, and when he/she touches the screen, the selected emoji should be placed in 3D. How can I do that? I think I have to create a function for the node, but the idea is still blurry in my mind and I am not very clear on it. Answer 1: Since an emoji is a 2D element, it's better to use the SpriteKit framework to render it, not SceneKit. …

How to add 3D models dynamically to a GLSurfaceView renderer in Android

扶醉桌前 submitted on 2019-12-08 12:13:13
Question: In my augmented reality application I need to render a 3D model over a marker. With a predefined/initialized 3D model I can show a teapot when a marker is detected, but now I want to replace it dynamically with another 3D model from the SD card on some trigger event, like a button click. Are there any suggestions or guidelines on how I can implement this? I am using JPCT-AE for the 3D models. Thanks. Answer 1: After much research and trial and error I finally got it to work. When I asked this question I wanted to display …

output from solvePnP doesn't match projectPoints

♀尐吖头ヾ submitted on 2019-12-08 09:08:59
Question: I get strange data from solvePnP, so I tried to check it with projectPoints:

    retval, rvec, tvec = cv2.solvePnP(opts, ipts, mtx, dist, flags=cv2.SOLVEPNP_ITERATIVE)
    print(retval, rvec, tvec)
    proj, jac = cv2.projectPoints(opts, rvec, tvec, mtx, dist)
    print(proj, ipts)

Here opts are 3D points with z=0, detected in one picture (image not included), and ipts are taken from a second picture (only part of which was shown). I've checked the points themselves (detected with SIFT); the points are detected correctly and paired the right way. …
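A quick way to localize this kind of problem is to run the same round trip on synthetic data where the answer is known: if solvePnP followed by projectPoints reproduces the input points there, the fault lies in the real correspondences (the ordering of opts versus ipts, or an inaccurate camera matrix) rather than in the calls themselves. A minimal sketch with made-up values (the intrinsics, pose, and target points are all hypothetical):

    import cv2
    import numpy as np

    # Hypothetical planar target (z = 0 everywhere) and camera intrinsics.
    opts = np.array([[0, 0, 0], [2, 0, 0], [2, 1, 0], [0, 1, 0], [1, 0.5, 0]],
                    dtype=np.float32)
    mtx = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float32)
    dist = np.zeros(5, dtype=np.float32)      # assume no lens distortion

    # Synthesize image points from a known pose, then try to recover that pose.
    rvec_true = np.array([[0.1], [-0.2], [0.05]], dtype=np.float32)
    tvec_true = np.array([[0.3], [-0.1], [5.0]], dtype=np.float32)
    ipts, _ = cv2.projectPoints(opts, rvec_true, tvec_true, mtx, dist)
    ipts = ipts.reshape(-1, 2)

    ok, rvec, tvec = cv2.solvePnP(opts, ipts, mtx, dist, flags=cv2.SOLVEPNP_ITERATIVE)
    proj, _ = cv2.projectPoints(opts, rvec, tvec, mtx, dist)
    err = np.linalg.norm(proj.reshape(-1, 2) - ipts, axis=1)
    print("recovered rvec:", rvec.ravel(), "tvec:", tvec.ravel())
    print("max reprojection error (px):", err.max())  # should be close to 0

If this round trip is clean but the real data is not, the usual culprits are a point-ordering mismatch between opts and ipts, or wrong mtx/dist values for the camera that actually took the pictures.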