augmented-reality

Vision-based Augmented Reality Objective-C library

左心房为你撑大大i submitted on 2019-12-03 14:07:02
Question: I'm looking for an Objective-C library, or just help in building, a vision-based augmented reality application that does not rely on visual markers. Qualcomm's is perfect, but it is Android-only (iOS support is coming, but not soon enough). Does anybody know any other similar libraries?

Answer 1: QCAR for iOS has recently been released into public beta.

Answer 2: The only one I'm aware of is String, and I've become aware of that only via this iPhone + Kinect AR video (YouTube link) that recently did the rounds. So I

Fiducial marker detection in the presence of camera shake

旧巷老猫 submitted on 2019-12-03 13:43:38
I'm trying to make my OpenCV-based fiducial marker detection more robust when the user moves the camera (phone) violently. Markers are ARTag-style, with a Hamming code embedded within a black border. Borders are detected by thresholding the image, then looking for quads based on the found contours, then checking the internals of the quads. In general, decoding of the marker is fairly robust once the black border is recognized. I've tried the most obvious thing, which is downsampling the image twice and performing quad detection at those levels as well. This helps with camera defocus on extreme
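The "Hamming code within a black border" decoding that the question describes can be sketched independently of the OpenCV pipeline. This is a minimal, hypothetical matcher (the dictionary, grid size, and distance threshold below are made up for illustration, not ARTag's actual codes): it compares the bit grid read from a detected quad against known codewords under all four rotations and accepts the closest match within a Hamming-distance budget.

```python
def hamming_distance(a, b):
    """Number of differing bits between two equal-length bit tuples."""
    return sum(x != y for x, y in zip(a, b))

def rotate_cw(bits, n):
    """Rotate a flattened row-major n x n bit grid 90 degrees clockwise."""
    grid = [bits[i * n:(i + 1) * n] for i in range(n)]
    return tuple(grid[n - 1 - c][r] for r in range(n) for c in range(n))

def match_marker(bits, dictionary, n, max_dist=1):
    """Return (marker_id, rotations, distance) for the best dictionary match
    within max_dist bit errors, trying all 4 orientations; None if no match."""
    best = None
    for marker_id, code in dictionary.items():
        cur = tuple(bits)
        for rot in range(4):
            d = hamming_distance(cur, code)
            if d <= max_dist and (best is None or d < best[2]):
                best = (marker_id, rot, d)
            cur = rotate_cw(cur, n)
    return best
```

Because the match tolerates a bit error or two, a partially misread interior can still decode as long as the border quad was found, which is consistent with the robustness the question reports.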

iPhone cameraOverlay for use with Alternate Reality applications

非 Y 不嫁゛ submitted on 2019-12-03 12:37:38
Does anyone know a way to take an image captured on the iPhone's camera, do some image processing (e.g. edge detection, skeletonization), and then overlay parts of the processed image on the original image (e.g. only the highlighted edges)? More generally, how do I create a UIImage with transparency? (Do I just scale the image and overlay it with an alpha value? Does UIImage support transparency the way GIFs do?) I'm thinking that you could combine a UIImagePickerController with a background thread that takes "screenshots" of the UIImagePickerController view and does image processing on it to
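The edge-detect-then-overlay step can be sketched without iOS APIs. This is a NumPy illustration (the Sobel-via-slicing trick and the threshold value are assumptions for the sketch, not iOS code): compute a gradient-magnitude edge mask, then paint only the edge pixels over the original image, which is the visual effect of compositing a mostly-transparent overlay.

```python
import numpy as np

def sobel_edges(gray, thresh=64.0):
    """Approximate Sobel gradient magnitude via array slicing; return a
    boolean edge mask (border pixels are left False)."""
    g = gray.astype(np.float64)
    # Gx: right column minus left column, weighted 1-2-1 vertically
    gx = ((g[:-2, 2:] + 2 * g[1:-1, 2:] + g[2:, 2:])
          - (g[:-2, :-2] + 2 * g[1:-1, :-2] + g[2:, :-2]))
    # Gy: bottom row minus top row, weighted 1-2-1 horizontally
    gy = ((g[2:, :-2] + 2 * g[2:, 1:-1] + g[2:, 2:])
          - (g[:-2, :-2] + 2 * g[:-2, 1:-1] + g[:-2, 2:]))
    mask = np.zeros(gray.shape, dtype=bool)
    mask[1:-1, 1:-1] = np.hypot(gx, gy) > thresh
    return mask

def overlay_edges(rgb, mask, color=(255, 0, 0)):
    """Paint edge pixels in `color` over a copy of the RGB image -- the
    result of compositing an edges-only, otherwise-transparent layer."""
    out = rgb.copy()
    out[mask] = color
    return out
```

On iOS the same compositing can be had by drawing the edges-only image (with per-pixel alpha) on top of the original; UIImage does support an alpha channel when backed by an RGBA bitmap context.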

ARCore Compatible devices

微笑、不失礼 submitted on 2019-12-03 12:16:29
What are the next Android smartphones to be compatible with ARCore? Is there a known list of future compatible devices yet? Maybe a general project schedule? We are about to purchase some units for AR development assessments. At first we thought about trying one of the Tango devices out there (we already had a good experience with Tango), but our current bet is that the ARCore platform will beat it in terms of market share. Currently, the compatible devices are only:

- Google Pixel
- Samsung Galaxy S8 (the non-plus version)

But obviously, we would prefer to choose from a wider variety (e.g. S8+,

Anyone know of good tutorials for creating an Augmented Reality application from scratch or using an open source framework? [closed]

╄→尐↘猪︶ㄣ submitted on 2019-12-03 10:16:24
Question: (Closed as off-topic 6 years ago.) I am looking for good tutorials that go through every step of creating an AR application. It would be beneficial if they also cover some of the theory behind optics and such.

Answer 1: A good setup is to use ARToolKit plus osgART. The ARToolKit tutorial also explains some of the principles of computer vision:

OpenGL + OpenCV Augmented Reality on Android

喜夏-厌秋 submitted on 2019-12-03 09:10:30
Question: I am using the Vuforia SDK to do AR on Android. For a project I want to track a ball, but this is not possible with Vuforia, so I tried using OpenCV color detection to track the ball instead. I adapted this solution for my project. For now I can track the ball and calculate its center point (in screen coordinates) with OpenCV. How can I use OpenCV screen coordinates to do OpenGL AR rendering on Android? Where do I start?

Answer 1: I combined OpenCV tracking (using the CamShift algorithm) with OpenGL here: http://www.youtube.com/watch?v=yEioXZT-lv0 My first recommendation is not to try and tackle OpenGL libraries
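One concrete step in bridging the two systems: OpenCV reports pixel coordinates with the origin at the top-left and y pointing down, while OpenGL's normalized device coordinates put the origin at the center with y pointing up. A minimal conversion for a 2D overlay (no camera intrinsics or depth assumed; a full 3D registration would need more) looks like this:

```python
def screen_to_ndc(x, y, width, height):
    """Convert OpenCV pixel coordinates (origin top-left, y down) to
    OpenGL normalized device coordinates (origin center, y up, range -1..1)."""
    ndc_x = 2.0 * x / width - 1.0
    ndc_y = 1.0 - 2.0 * y / height
    return ndc_x, ndc_y
```

Feeding the tracked ball's center through this gives a position you can hand to an orthographic OpenGL draw call; placing 3D content at the ball's location would additionally require an estimate of its depth.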

How to capture live camera frames in RGB with DirectShow

给你一囗甜甜゛ submitted on 2019-12-03 08:40:31
I'm implementing live video capture through DirectShow for live processing and display (an augmented reality app). I can access the pixels easily enough, but it seems I can't get the SampleGrabber to provide RGB data. The device (an iSight, running VC++ Express in VMware) only reports MEDIASUBTYPE_YUY2. After extensive Googling, I still can't figure out whether DirectShow is supposed to provide built-in color-space conversion for this sort of thing. Some sites report that there is no built-in YUV<->RGB conversion; others report that you just have to call SetMediaType on your ISampleGrabber
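If DirectShow ends up handing over YUY2 buffers anyway, the conversion can be done by hand. A sketch in NumPy (the full-range BT.601 coefficients are an assumption for the sketch; many cameras actually deliver video-range 16-235 luma, which needs an extra scale/offset):

```python
import numpy as np

def yuy2_to_rgb(frame, width, height):
    """Convert a packed YUY2 byte buffer (Y0 U Y1 V per pixel pair) to an
    H x W x 3 uint8 RGB array using full-range BT.601 coefficients."""
    data = (np.frombuffer(frame, dtype=np.uint8)
              .reshape(height, width // 2, 4)
              .astype(np.float64))
    y = np.empty((height, width))
    y[:, 0::2] = data[:, :, 0]          # Y0 of each pair
    y[:, 1::2] = data[:, :, 2]          # Y1 of each pair
    u = np.repeat(data[:, :, 1], 2, axis=1) - 128.0  # U shared by the pair
    v = np.repeat(data[:, :, 3], 2, axis=1) - 128.0  # V shared by the pair
    r = y + 1.402 * v
    g = y - 0.344136 * u - 0.714136 * v
    b = y + 1.772 * u
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255).astype(np.uint8)
```

This duplicates the chroma sample across each pixel pair (nearest-neighbor chroma upsampling), which is usually good enough for an AR preview.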

Angle between 2 GPS Coordinates

瘦欲@ submitted on 2019-12-03 07:50:55
Question: I'm working on another iPhone app that uses AR, and I'm creating my own framework, but I'm having trouble getting the angle of a second coordinate relative to the current device position. Does anyone know a good resource that could help me with this? Thanks in advance!

Answer 1: If the two points are close enough together, and well away from the poles, you can use some simple trig:

float dy = lat2 - lat1;
float dx = cosf(M_PI/180*lat1)*(long2 - long1);
float angle = atan2f(dy, dx);

EDIT: I forgot
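Note that the answer's atan2f(dy, dx) yields the math convention (0° = east, counterclockwise), whereas AR heading work usually wants a compass bearing (0° = north, clockwise). The exact great-circle initial bearing can be sketched like this (a generic formula, not the asker's framework):

```python
import math

def initial_bearing(lat1, lon1, lat2, lon2):
    """Great-circle initial bearing from point 1 to point 2, in compass
    degrees (0 = north, 90 = east). Inputs and output are in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return (math.degrees(math.atan2(x, y)) + 360.0) % 360.0
```

For short distances away from the poles this agrees closely with the flat-earth snippet above, once the axis convention is accounted for.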

Find set of latitudes and longitudes using user's current latitude, longitude and direction of viewing an object

ⅰ亾dé卋堺 submitted on 2019-12-03 07:44:51
I am building an Android application based on augmented reality. The main idea: when the user opens my application, the device's camera starts in preview mode by default. Based on the user's current GPS location and the direction the user/camera is facing, I want to calculate which latitudes and longitudes from my set fall within range. The following image explains my scenario well. I have a full set of latitudes and longitudes, drawn as black spots in the figure. Now suppose the user is at the center of the circle, and that he is viewing in the North direction. If we consider an angle of 45
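The 45° viewing-cone test the question is building toward can be sketched with a local flat-earth approximation, which is fine for nearby points away from the poles (the range limit, Earth radius, and half-angle defaults below are illustrative assumptions):

```python
import math

def points_in_view(user_lat, user_lon, heading_deg, points,
                   half_angle_deg=45.0, max_range_m=2000.0):
    """Return the (lat, lon) points that lie within max_range_m of the user
    AND within +/- half_angle_deg of the compass heading being faced."""
    R = 6371000.0  # mean Earth radius in meters
    visible = []
    for lat, lon in points:
        dy = math.radians(lat - user_lat)
        dx = math.radians(lon - user_lon) * math.cos(math.radians(user_lat))
        if R * math.hypot(dx, dy) > max_range_m:
            continue
        bearing = math.degrees(math.atan2(dx, dy)) % 360.0  # 0 = N, 90 = E
        # signed angular difference in (-180, 180], so the cone works across
        # the 0/360 wrap-around
        diff = (bearing - heading_deg + 180.0) % 360.0 - 180.0
        if abs(diff) <= half_angle_deg:
            visible.append((lat, lon))
    return visible
```

Each surviving point can then be projected onto the camera preview horizontally in proportion to its bearing offset from the heading.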

How to improve accuracy of accelerometer and compass sensors?

梦想的初衷 submitted on 2019-12-03 05:19:52
Question: I am creating an augmented reality app that simply shows a TextView when the phone is facing a point of interest (whose GPS position is stored on the phone). The TextView is painted at the point of interest's location on the screen. It works OK; the problem is that the compass and accelerometer readings are very noisy, and the TextView constantly jitters up, down, left, and right because of the sensors' inaccuracy. Is there a way to solve this?

Answer 1: Our problem is the same. I also had the same problem
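A common fix is to low-pass filter the raw sensor values before positioning the view, with special handling for the compass azimuth's 0/360 wrap-around. A minimal sketch (the smoothing factor alpha is an arbitrary tuning choice; smaller values mean steadier but laggier output):

```python
def low_pass(prev, new, alpha=0.15):
    """Exponential low-pass filter for a scalar reading (e.g. one
    accelerometer axis): move a fraction alpha toward the new value."""
    return prev + alpha * (new - prev)

def low_pass_angle(prev_deg, new_deg, alpha=0.15):
    """Same idea for the compass azimuth, stepping through the shortest
    angular difference so 359 -> 1 does not swing the long way around."""
    diff = (new_deg - prev_deg + 180.0) % 360.0 - 180.0
    return (prev_deg + alpha * diff) % 360.0
```

Applying the filter to azimuth, pitch, and roll each frame, and only then converting to a screen position, removes most of the visible jitter at the cost of slightly delayed response.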