firebase-mlkit

How to find out a TensorFlow Lite model's input/output feature info?

一笑奈何 · submitted on 2019-12-19 03:32:26
Question: I'm a mobile developer, and I want to use various TensorFlow Lite models (.tflite) with ML Kit. But there is an issue: I have no idea how to find out a .tflite model's input/output feature info (these will be parameters for setup). Is there any way to find this out? Sorry for my bad English, and thanks. Update (18.06.13): I found this site: https://lutzroeder.github.io/Netron/. It visualizes a graph based on your uploaded model (such as .mlmodel or .tflite) and shows the input/output shapes. Here is an example
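Besides Netron, the same input/output details can be read programmatically with the TensorFlow Lite Python interpreter. A minimal sketch, assuming the `tensorflow` package is installed; the tiny `@tf.function` graph below exists only to produce a .tflite buffer to inspect, and in practice you would pass `model_path="your_model.tflite"` instead:

```python
import tensorflow as tf

# A throwaway graph just to obtain a .tflite buffer to inspect.
@tf.function(input_signature=[tf.TensorSpec([1, 4], tf.float32)])
def double(x):
    return x * 2.0

converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [double.get_concrete_function()]
)
tflite_bytes = converter.convert()

# For a model file on disk, use tf.lite.Interpreter(model_path="model.tflite").
interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()

# Each entry reports the tensor's name, shape, and dtype -- the values
# ML Kit asks for when you configure a custom model's input/output options.
for detail in interpreter.get_input_details():
    print("input:", detail["name"], detail["shape"], detail["dtype"])
for detail in interpreter.get_output_details():
    print("output:", detail["name"], detail["shape"], detail["dtype"])
```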

Can't depend on Firebase ML Kit

别来无恙 · submitted on 2019-12-13 20:09:35
Question: I'm trying to add Firebase's ML Kit as a dependency in my Android application: dependencies { implementation("com.microblink:blinkinput:${rootProject.ext.blinkInputVersion}@aar") { transitive = true } implementation 'com.google.firebase:firebase-analytics:17.2.1' implementation 'com.google.android.gms:play-services-ads:18.3.0' implementation 'com.google.android:flexbox:1.1.0' implementation 'com.android.billingclient:billing:2.0.3' implementation "com.heapanalytics.android:heap-android-client
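For reference, a minimal sketch of how the ML Kit (Firebase ML Vision) dependency itself was typically declared at that time — the artifact name is the real one, but the version shown and the surrounding file layout are assumptions:

```groovy
// app/build.gradle -- minimal sketch, version is an assumption from that era
dependencies {
    implementation 'com.google.firebase:firebase-ml-vision:24.0.1'
}

// The google-services Gradle plugin must be applied for Firebase to initialize.
apply plugin: 'com.google.gms.google-services'
```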

How to combine all Firebase ML kit APIs in one app?

末鹿安然 · submitted on 2019-12-13 09:32:51
Question: I want it so that when an image is selected, it detects labels, text, and faces in that single image all at once. Answer 1: Just call all of the functions on your image file at once, then combine the results using something like zip in RxJava. Alternatively, you could nest the results (e.g., call FirebaseVision.getInstance().onDeviceTextRecognizer.processImage(image) inside the onSuccessListener of another function), although this will take much longer to complete. If you provide code of your existing
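The "call everything at once and zip the results" approach the answer describes is just concurrent fan-out/fan-in. A language-neutral sketch of the pattern in Python — the three detector functions are hypothetical stand-ins for ML Kit's label, text, and face detectors, not the real API:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for ML Kit's asynchronous detectors.
def detect_labels(image): return ["cat"]
def detect_text(image): return "hello"
def detect_faces(image): return [{"box": (0, 0, 10, 10)}]

def analyze(image):
    # Fan out: start all three detectors concurrently (the "zip" approach),
    # then fan in: block until every result is available and combine them.
    with ThreadPoolExecutor() as pool:
        futures = {
            "labels": pool.submit(detect_labels, image),
            "text": pool.submit(detect_text, image),
            "faces": pool.submit(detect_faces, image),
        }
        return {name: future.result() for name, future in futures.items()}

print(analyze("selfie.jpg"))
```

Running the detectors concurrently rather than nesting them means the total latency is that of the slowest detector, not the sum of all three.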

Android app crashes when adding Firebase ML Vision dependencies alongside Firebase database dependencies

岁酱吖の · submitted on 2019-12-13 08:36:09
Question: I have a project which uses the Firebase database, auth, and other dependencies, but whenever I try to add the Firebase ML Vision or Google Play Services Vision dependency, the app crashes even though the Gradle build was successful. Edit: this is what Logcat looks like: 06-27 02:36:37.757 17719-17719/com.example.nikhiljindal.testing_start I/zygote: at com.google.firebase.FirebaseApp com.google.firebase.FirebaseApp.initializeApp(android.content.Context) (SourceFile:281) at boolean com.google.firebase.provider
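A crash inside FirebaseApp.initializeApp right after adding ML Vision is commonly a version conflict between the Firebase/Play Services artifacts already in the project and the newly added one. A hedged sketch of keeping them aligned — the specific version numbers below are assumptions from that era, not the asker's actual versions:

```groovy
// Sketch: keep all Firebase / Play Services artifacts on mutually
// compatible versions; mixing old and new major versions can crash
// FirebaseApp initialization at startup.
dependencies {
    implementation 'com.google.firebase:firebase-core:16.0.1'
    implementation 'com.google.firebase:firebase-database:16.0.1'
    implementation 'com.google.firebase:firebase-auth:16.0.2'
    implementation 'com.google.firebase:firebase-ml-vision:16.0.0'
}
apply plugin: 'com.google.gms.google-services'
```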

Firebase ML Kit: how to run face detection in the background as a service?

回眸只為那壹抹淺笑 · submitted on 2019-12-13 04:17:07
Question: Previously in my app I used Google GMS Vision for face detection. I used https://developers.google.com/android/reference/com/google/android/gms/vision/CameraSource.Builder to create the CameraSource in my background service class. But in Firebase ML Kit, no common package is provided for the CameraSource builder and preview. Is there any way to use CameraSourcePreview and CameraSource.Builder in a background service class? I have tried adding the CameraSourcePreview layout at

Google Machine Learning Kit, Recognize Text in Images with ML Kit on Android error [closed]

我是研究僧i · submitted on 2019-12-11 19:59:52
Question: [Closed as off-topic; not currently accepting answers. Closed last year.] I tried to install the "Recognize Text in Images with ML Kit on Android" codelab; when I run the app, I get this error: com.google.firebase.codelab.mlkit W/System.err: com.google.firebase.ml.common.FirebaseMLException: Waiting for the text recognition model to be downloaded I waited for 4 or 5 hours with no response. It
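ML Kit's on-device text recognition model is downloaded by Play Services on demand; the documented way to have it fetched at install time (instead of blocking the first detection) is the DEPENDENCIES meta-data entry in the manifest:

```xml
<!-- AndroidManifest.xml, inside <application>: ask Play Services to
     download the OCR model at install time rather than on first use. -->
<meta-data
    android:name="com.google.firebase.ml.vision.DEPENDENCIES"
    android:value="ocr" />
```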

Getting a frameworks (GoogleMobileVision/FirebaseMLCommon) issue when integrating the Firebase SDK without using CocoaPods

喜欢而已 · submitted on 2019-12-11 18:26:02
Question: I'm getting the error below when adding the Firebase SDK to the app: Undefined symbols for architecture x86_64: "_OBJC_CLASS_$_LAContext", referenced from: objc-class-ref in FirebaseMLCommon(MDMPasscodeCache_ac345e06741a76a3aefe61adde149175.o) objc-class-ref in GoogleMobileVision ld: symbol(s) not found for architecture x86_64 clang: error: linker command failed with exit code 1 (use -v to see invocation) Xcode version: 10.1 Firebase SDK version: 5.20.2 Firebase Component: ML Kit (text recognition) Component
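The undefined symbol `_OBJC_CLASS_$_LAContext` is the LAContext class from Apple's LocalAuthentication framework, so with a manual (non-CocoaPods) integration that system framework has to be linked explicitly — CocoaPods would normally do this for you. A sketch of the relevant linker setting (the exact flag set is an assumption; the same can be done under Xcode's "Link Binary With Libraries" build phase):

```text
# Build Settings > Other Linker Flags
OTHER_LDFLAGS = $(inherited) -ObjC -framework LocalAuthentication
```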

Getting a framework (MLVision/MLVisionTextModel) not found issue when integrating Firebase SDKs without using CocoaPods

蓝咒 · submitted on 2019-12-11 17:24:47
Question: I am working on integrating the Firebase ML Vision kit without using CocoaPods. I am getting the error below when adding the SDK frameworks (MLVision and MLVisionTextModel) to the app: ld: symbol(s) not found for architecture x86_64 clang: error: linker command failed with exit code 1 (use -v to see invocation) Point 1:
The steps I followed to integrate these SDKs' frameworks: A. Downloaded Firebase SDK 5.20.2. B. Followed the steps as described in the Readme.md file. First, I added all
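For manual zip-distribution installs of that era, Firebase's Readme.md also required linker and search-path settings beyond just copying the frameworks in; "symbol(s) not found for architecture x86_64" frequently means a dependent framework is missing from the simulator build. A hedged sketch of the build settings involved — paths and values are assumptions, not the asker's configuration:

```text
# Build Settings (sketch, per the Firebase zip Readme of that era)
OTHER_LDFLAGS         = $(inherited) -ObjC
FRAMEWORK_SEARCH_PATHS = $(inherited) $(PROJECT_DIR)/Firebase
```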

Error: Attribute application@appComponentFactory value=(androidx.core.app.CoreComponentFactory) from [androidx.core:core:1.0.0]

…衆ロ難τιáo~ · submitted on 2019-12-11 06:47:37
Question: I am working with Firebase ML. Earlier I was using version 16 of it, and it worked fine with my project. But due to some requirements I needed to upgrade it to 21.0.0, and now I am facing the error: Attribute application@appComponentFactory value=(androidx.core.app.CoreComponentFactory) from [androidx.core:core:1.0.0] AndroidManifest.xml:22:18-86 is also present at [com.android.support:support-compat:28.0.0] AndroidManifest.xml:22:18-91 value=(android.support.v4.app.CoreComponentFactory
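This manifest-merger clash means AndroidX artifacts (pulled in by the newer firebase-ml-vision) and old Support Library artifacts are on the classpath together. The standard fix is migrating the project to AndroidX and letting Jetifier rewrite the remaining support-library transitives:

```properties
# gradle.properties: opt in to AndroidX and rewrite legacy
# support-library dependencies so only one CoreComponentFactory remains.
android.useAndroidX=true
android.enableJetifier=true
```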

Firebase MLKit Text Recognition Error

℡╲_俬逩灬. · submitted on 2019-12-05 01:06:56
Question: I'm trying to OCR my image using Firebase ML Kit, but it fails and returns the error: Text detection failed with error: Failed to run text detector because self is nil. /// Detects texts on the specified image and draws a frame for them. func detectTexts() { let image = #imageLiteral(resourceName: "testocr") // Create a text detector. let textDetector = vision.textDetector() // Check console for errors. // Initialize a VisionImage with a UIImage. let visionImage = VisionImage(image: image)
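"self is nil" here typically means the detector was deallocated before its asynchronous completion ran, because `textDetector` is a local constant that goes out of scope when `detectTexts()` returns. A sketch of the usual fix, keeping a strong reference as a property — the class name `OCRController` is hypothetical, while the `Vision`/`VisionImage`/`textDetector` calls follow the Firebase ML Kit API of that version:

```swift
import UIKit
import FirebaseMLVision

class OCRController {
    let vision = Vision.vision()
    // Stored as a property so the detector outlives the async callback;
    // as a local variable it would be released before detection finishes.
    lazy var textDetector = vision.textDetector()

    /// Detects texts in the specified image and prints them.
    func detectTexts(in image: UIImage) {
        let visionImage = VisionImage(image: image)
        textDetector.detect(in: visionImage) { features, error in
            guard error == nil, let features = features else { return }
            for feature in features {
                print(feature.text)
            }
        }
    }
}
```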