Are there any limitations in Vuforia compared to ARCore and ARKit?

Posted by 删除回忆录丶 on 2019-12-18 10:29:21

Question


I am a beginner in the field of augmented reality, working on applications that create plans of buildings (floor plan, room plan, etc., with accurate measurements) using a smartphone. So I am researching the best AR SDK for this. There are not many articles pitting Vuforia against ARCore and ARKit.

Please suggest the best SDK to use, along with the pros and cons of each.


Answer 1:


Updated: October 30, 2019.

TL;DR

Google ARCore allows you to build apps for Android and iOS; with Apple ARKit you can build apps for iOS and iPadOS; and good old PTC Vuforia was designed to create apps for Android and iOS as well.

A crucial peculiarity of Vuforia is that it uses ARCore/ARKit technology if the hardware it is running on supports it; otherwise it falls back on its own AR technology and engine, a pure software solution with no dependence on particular hardware.

However, when developing for Android smartphones from different OEMs, you may encounter an unpleasant problem: devices from different manufacturers need sensor calibration in order to deliver the same AR experience. Luckily, Apple devices have no such drawback, because all the sensors they use were calibrated under identical conditions.



But to answer this question, let’s put first things first.

ARCore 1.13

ARCore is based on three fundamental concepts: Motion Tracking, Environmental Understanding and Light Estimation. ARCore allows a supported mobile device to track its position and orientation relative to the world in six degrees of freedom (6DoF) using a technique called Concurrent Odometry and Mapping (COM). COM also helps detect the size and location of horizontal, vertical and angled tracked surfaces (such as the ground, tables, benches, walls and slopes). Motion Tracking works robustly because optical data coming from the camera at 60 fps is combined with inertial data coming from the gyroscope and accelerometer at 1000 fps. Naturally, ARKit and Vuforia operate in approximately the same way.



When you move your phone through the real environment, ARCore tracks the surrounding space to understand where the smartphone is relative to world coordinates. At the tracking stage, ARCore "sows" so-called feature points, which form a sparse point cloud; that cloud lives for as long as the tracking session is active. The feature points are detected through the rear RGB camera, and ARCore uses them to compute the phone's change in location. The visual information is then combined with measurements from the accelerometer and gyroscope (the Inertial Measurement Unit) to estimate the position and orientation of the ArCamera over time. ARCore looks for clusters of feature points that appear to lie on horizontal, vertical or angled surfaces and makes these surfaces available to your app as planes (a technique called plane detection). You can then use these planes to place 3D objects in your scene. After this, virtual geometry with assigned shaders is rendered by ARCore's companion, Sceneform (supporting OBJ, FBX and glTF assets), which uses Filament, a real-time physically based rendering (PBR) engine.


ARCore's environmental understanding lets you place 3D objects and 2D annotations in a way that integrates with the real world. For example, you can place a virtual cup of coffee on the corner of your real-world table using ArAnchor.


ARCore can also estimate the lighting parameters of the real environment and provide you with the average intensity and color correction of a given camera image. This data lets you light your virtual scene under the same conditions as the environment around you, considerably increasing the sense of realism.



Previous major updates brought significant APIs to ARCore, such as Lighting Estimation with Environmental HDR mode, Augmented Faces, Augmented Images, Sceneform Animations, Cloud Anchors and multiplayer support. The main advantage of ARCore in Android Studio over ARKit in Xcode is the Android Emulator, which allows you to run and debug AR apps on a virtual device.



ARCore is older than ARKit. Do you remember Project Tango, released in 2014? Roughly speaking, ARCore is just a rewritten Tango SDK that doesn't rely on a depth camera. But the wise acquisitions of FlyBy and Metaio helped Apple catch up. I suppose that's extremely good for the AR industry.

The latest version of ARCore requires Android 7.0 Nougat or later, supports OpenGL ES 3.1 acceleration, and integrates with Unity, Unreal and web applications. At the moment the most powerful and energy-efficient chipsets for AR experiences on the Android platform are the Kirin 980 (7 nm), Snapdragon 855 Plus (7 nm) and Exynos 9825 (7 nm). It's a pity that the Kirin 990 isn't supported yet.

ARCore price: FREE.

|-----------------------------------|-----------------------------------|
|            ARCore PROS            |            ARCore CONS            |
|-----------------------------------|-----------------------------------|
|  Quick plane detection            |  Cloud Anchors are hosted online  |
|-----------------------------------|-----------------------------------|
|  Long-distance accuracy           |  Lack of rendering technologies   |
|-----------------------------------|-----------------------------------|
|  Sloping-surface detection        |  Poor developer documentation     |
|-----------------------------------|-----------------------------------|
|  High-quality Lighting API        |  No external camera support       |
|-----------------------------------|-----------------------------------|
|  Many supported devices           |  No Z-depth compositing for models|
|-----------------------------------|-----------------------------------|

Here's an ARCore code snippet written in Kotlin:

private fun addNodeToScene(fragment: ArFragment, anchor: Anchor, renderable: Renderable) {

    // Pin the node hierarchy to the real-world position tracked by the anchor
    val anchorNode = AnchorNode(anchor)
    anchorNode.setParent(fragment.arSceneView.scene)

    // Wrap the renderable in a node the user can drag, rotate and scale
    val modelNode = TransformableNode(fragment.transformationSystem)
    modelNode.setParent(anchorNode)
    modelNode.setRenderable(renderable)
    modelNode.localPosition = Vector3(0.0f, 0.0f, -3.0f)  // 3 m along the anchor's -Z axis
    fragment.arSceneView.scene.addChild(anchorNode)

    modelNode.select()
}

ARKit 3.0

Like its competitor, ARKit also uses a special technique, called Visual Inertial Odometry (VIO), to track the world around your phone very accurately. VIO is quite similar to COM in ARCore. There are also three similar fundamental concepts in ARKit: World Tracking, Scene Understanding (which includes three stages: Plane Detection, Hit-Testing and Light Estimation), and Rendering, with great help from ARKit's companions: the SceneKit framework, which has actually been Apple's 3D game engine since 2012; the RealityKit framework, made specifically for AR and written from scratch in Swift; and the SpriteKit framework with its 2D engine.

VIO fuses RGB sensor data at 60 fps with Core Motion (IMU) data at 1000 fps. In addition, SceneKit, for example, can render all the 3D geometry at 30/60/120 fps. Under such circumstances, it should be noted that due to the very high energy impact (an enormous burden on the CPU and GPU), your iPhone's battery will drain pretty quickly.
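To make this concrete, here's a minimal Swift sketch: it runs a world-tracking session with plane detection enabled and caps SceneKit rendering at 30 fps to reduce the energy impact:

import ARKit

func startSession(in sceneView: ARSCNView) {
    // Track the device in 6DoF and look for horizontal and vertical planes
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = [.horizontal, .vertical]

    // Rendering at 30 fps instead of 60 noticeably lowers the CPU/GPU load
    sceneView.preferredFramesPerSecond = 30
    sceneView.session.run(configuration)
}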

ARKit has a handful of useful methods for robust tracking and accurate measurements. In its arsenal you can find easy-to-use functionality for saving and retrieving ARWorldMaps. A world map is an indispensable "portal" for persistent and multiuser AR experiences: it allows you to come back to the same environment, filled with the same chosen 3D content, after your app becomes inactive. Support for simultaneous front and back camera capture, and support for collaborative sessions that let you share world maps, are also great.
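As a rough sketch of the saving side (the destination URL mapSaveURL is an illustrative assumption, not part of the API):

import ARKit

func saveWorldMap(from session: ARSession, to mapSaveURL: URL) {
    session.getCurrentWorldMap { worldMap, error in
        guard let map = worldMap else { return }
        // Serialize the ARWorldMap so a later session can relocalize to it
        if let data = try? NSKeyedArchiver.archivedData(withRootObject: map,
                                                        requiringSecureCoding: true) {
            try? data.write(to: mapSaveURL)
        }
    }
}

To restore the experience later, unarchive the saved map and assign it to the initialWorldMap property of ARWorldTrackingConfiguration before running the session.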

There's good news for gamers, too: up to six people can simultaneously play the same AR game, thanks to the MultipeerConnectivity framework. For 3D geometry you can use the brand-new USDZ file format, developed and supported by Pixar, which is a good choice for sophisticated 3D models with lots of PBR shaders and animations. Other 3D formats can be used with ARKit as well.
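For instance, here's a minimal sketch of loading a USDZ asset with SceneKit (the file name model.usdz is a hypothetical placeholder):

import SceneKit

func loadUSDZModel() throws -> SCNNode {
    // SceneKit opens .usdz files directly as a scene
    guard let url = Bundle.main.url(forResource: "model", withExtension: "usdz") else {
        throw NSError(domain: "ModelLoading", code: 1)
    }
    let scene = try SCNScene(url: url, options: nil)
    return scene.rootNode  // attach this node to your ARSCNView's scene
}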

ARKit not only helps you track the position and orientation of your device relative to the world in 6DoF, but also lets you perform People Occlusion (based on alpha and depth channel segmentation), motion capture tracking, 2D tracking, vertical and horizontal plane detection, image detection, 3D object detection and 3D object scanning. With People Occlusion, your AR content realistically passes behind and in front of people in the real world, making AR experiences even more immersive. Realistic reflections, which use machine-learning algorithms, and face-based AR experiences that track up to three faces at a time, are also available to you now.
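Enabling People Occlusion is just a matter of opting into the right frame semantics where the hardware supports it; a minimal sketch:

import ARKit

func runSessionWithPeopleOcclusion(on session: ARSession) {
    let configuration = ARWorldTrackingConfiguration()
    // People Occlusion needs an A12+ device, so check for support first
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
        configuration.frameSemantics.insert(.personSegmentationWithDepth)
    }
    session.run(configuration)
}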



Using iBeacons along with ARKit, you can help an iBeacon-aware application know which room it's in and show the correct 3D/2D content chosen for that room. Working with ARKit, you should intensively exploit the ARAnchor class and all its subclasses, the same way you've been using anchors in ARCore.
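The beacon side of that is plain Core Location, not ARKit; here's a hedged sketch (the UUID value and the idea of mapping major/minor values to rooms are illustrative assumptions):

import CoreLocation

final class RoomDetector: NSObject, CLLocationManagerDelegate {

    private let locationManager = CLLocationManager()
    // Hypothetical UUID shared by all beacons installed in the building
    private let beaconUUID = UUID(uuidString: "E2C56DB5-DFFB-48D2-B060-D0F5A71096E0")!

    func start() {
        locationManager.delegate = self
        locationManager.requestWhenInUseAuthorization()
        locationManager.startRangingBeacons(satisfying: CLBeaconIdentityConstraint(uuid: beaconUUID))
    }

    func locationManager(_ manager: CLLocationManager, didRange beacons: [CLBeacon],
                         satisfying beaconConstraint: CLBeaconIdentityConstraint) {
        // Use the nearest beacon's major/minor values to pick per-room AR content
        guard let nearest = beacons.first else { return }
        print("Room beacon: major \(nearest.major), minor \(nearest.minor)")
    }
}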


Pay particular attention to RealityKit's satellite, the Reality Composer app, which is now part of Xcode 11. This brand-new app helps you build 3D scenes for AR. Scenes built in Reality Composer can be packed with dynamics, simple animations and PBR materials. Reality Composer can be installed on iPhone and iPad, too.
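Bringing a Reality Composer scene into an app via RealityKit looks roughly like this (the scene name "Experience" is a hypothetical placeholder for your exported .reality file):

import RealityKit

func loadComposerScene(into arView: ARView) {
    // Load the anchor entity exported from Reality Composer and add it to the scene
    if let sceneAnchor = try? Entity.loadAnchor(named: "Experience") {
        arView.scene.addAnchor(sceneAnchor)
    }
}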

To create AR apps built on the latest version of ARKit 3.0 (including all the innovations it supports), you need macOS 10.15 Catalina, Xcode 11 and a device running iOS 13 or iPadOS 13. The sad news is that all of ARKit 3.0's top features are restricted to devices powered by Apple A12 and A13 processors. ARKit 3.0 is also a worthy candidate to marry the Metal framework for GPU acceleration. And don't forget: ARKit tightly integrates with Unity and Unreal. At the moment the most powerful and energy-efficient chipsets for AR experiences are the A13 Bionic (7 nm) and A12 Bionic (7 nm).

ARKit price: FREE.

|-----------------------------------|-----------------------------------|
|             ARKit PROS            |             ARKit CONS            |
|-----------------------------------|-----------------------------------|
|  Stable 6DoF World Tracking       |  No angled-surface detection      |
|-----------------------------------|-----------------------------------|
|  Collaborative Sessions           |  ARKit 3 restrictions (A12, A13)  |
|-----------------------------------|-----------------------------------|
|  World Maps, iBeacon awareness    |  No macOS Simulator for ARKit     |
|-----------------------------------|-----------------------------------|
|  4 rendering technologies         |  No external camera support       |
|-----------------------------------|-----------------------------------|
|  Rich developer documentation     |                                   |
|-----------------------------------|-----------------------------------|

Here's an ARKit code snippet written in Swift:

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {

    // Called when ARKit adds a new anchor; we only care about detected planes
    guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
    let planeNode = tableTop(planeAnchor)
    node.addChildNode(planeNode)
}

func tableTop(_ anchor: ARPlaneAnchor) -> SCNNode {

    // The anchor's extent gives the estimated width and length of the detected plane
    let x = CGFloat(anchor.extent.x)
    let z = CGFloat(anchor.extent.z)

    let tableNode = SCNNode()
    tableNode.geometry = SCNPlane(width: x, height: z)
    tableNode.eulerAngles.x = -.pi / 2  // SCNPlane is vertical by default; lay it flat
    tableNode.position = SCNVector3(anchor.center.x, 0, anchor.center.z)
    return tableNode
}

Vuforia 8.5

PTC Vuforia Engine boasts approximately the same main capabilities you'll find in the latest versions of ARKit and ARCore, as well as its own new features, such as Model Targets with Deep Learning, VISLAM for markerless AR experiences, External Camera support for iOS, and new experimental APIs on top of ARCore and ARKit. The main advantage of Vuforia over ARKit and ARCore is its wider list of supported devices.

Vuforia has a standalone version and a version baked directly into Unity. It offers the following functionality:

- Advanced Model Targets 360: recognition powered by AI.
- Model Targets with Deep Learning: instantly recognize objects by shape using pre-existing 3D models and deep-learning algorithms.
- Image Targets: the easiest way to put AR content on flat objects.
- Multi Targets: for objects with flat surfaces and multiple sides.
- Cylinder Targets: for placing AR content on objects with cylindrical shapes.
- Ground Plane: as part of Smart Terrain, enables digital content to be placed on floors and tabletop surfaces.
- VuMarks: identify and add content to a series of objects.
- Object Targets: for scanning an object.
- Static and Adaptive Modes: for stationary and moving objects, respectively.
- And, of course, Vuforia Fusion.

Vuforia Fusion is a capability designed to solve the problem of fragmentation in AR-enabling technologies, including cameras, sensors, chipsets and software frameworks such as ARKit. With Vuforia Fusion, your application automatically gets the best experience possible, with no extra work required on your end.

The new API allows for a Static or an Adaptive mode. When the real-world model remains stationary, like a large industrial machine, implementing the Static API uses significantly less processing power. This enables a longer-lasting and higher-performance experience for those models. For objects that won't be stationary, the Adaptive API allows for a continued robust experience.

The External Camera feature is part of the Vuforia Engine Driver Framework. External Camera provides a new perspective on what's possible with augmented reality: it allows Vuforia Engine to access external video sources beyond the camera built into phones and tablets. Using an independent camera, developers can create an AR experience that offers a first-person view from toys, robots or industrial tools.

Occlusion Management is one of the key features for building a realistic AR experience. When you're using Occlusion Management, Vuforia Engine detects and tracks targets, even when they’re partially hidden behind everyday barriers, like your hand. Special occlusion handling allows apps to display graphics as if they appear inside physical objects.

Vuforia supports Metal acceleration for iOS devices. You can also use the Vuforia Samples in your projects; for example, the Vuforia Core Samples library includes various scenes using Vuforia features, among them a pre-configured Object Recognition scene that you can use as a reference and starting point for an Object Recognition application.

Vuforia SDK price (there are four options): Free (you need to register for a free Development License Key); $499 per Classic license (for apps built for companies with revenue under $10 million/year); $99/month per Cloud license; and PTC also provides a Pro license with individual pricing (no revenue restriction).

CONCLUSION:

There are no significant limitations when developing with Vuforia compared to ARCore and ARKit. Vuforia is a great product, and it supports the iPhone 5/6 series as well as some Android models that ARCore doesn't support. But in my opinion, ARKit itself has much greater measurement accuracy without any need for calibration, and ARKit 3.0 in particular has a bunch of useful features that reduce the errors accumulated over time. So Vuforia's measurement accuracy depends on which platform you're developing for.

Pay particular attention: the Vuforia Chalk application was built on Apple ARKit.




Answer 2:


Excellent info. However, I would like to add a few points based on my experience using ARCore and ARKit. With respect to mapping, ARCore can manage larger maps than ARKit. ARCore also tracks more feature points than ARKit. On the other hand, ARKit differentiates horizontal from vertical surface detection better than ARCore.



Source: https://stackoverflow.com/questions/50811770/are-there-any-limitations-in-vuforia-compared-to-arcore-and-arkit
