firebase-mlkit

Get Video and Audio buffer separately while recording video using front camera

Submitted by 冷眼眸甩不掉的悲伤 on 2019-12-04 23:03:12
Question: I have dug around a lot on SO and in some nice blog posts, but my requirement seems to be unusual: reading the video and audio buffers separately for further processing while recording is in progress. My use case is this: when the user starts recording video, I need to continuously process the video frames with ML-Face-Detection-Kit, and also continuously process the audio frames to make sure the user is actually speaking and to measure the noise level. For this, I think I need both Video and Audio in a
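The question doesn't state the platform, so as an illustration of the general pattern here is a minimal sketch for iOS/AVFoundation (class, queue, and method names are placeholders, not from the question): a single AVCaptureSession can feed an AVCaptureVideoDataOutput and an AVCaptureAudioDataOutput at the same time, so video frames and audio chunks arrive in separate callbacks and can be processed independently while the session runs. On Android, the analogous split would be a per-frame camera callback (e.g. ImageReader or CameraX ImageAnalysis) for video plus AudioRecord for audio.

```swift
import AVFoundation

// Sketch: one capture session delivering video and audio buffers separately,
// so each stream can be processed on its own (face detection on frames,
// level metering on audio).
final class SplitCaptureController: NSObject,
        AVCaptureVideoDataOutputSampleBufferDelegate,
        AVCaptureAudioDataOutputSampleBufferDelegate {

    private let session = AVCaptureSession()
    private let videoQueue = DispatchQueue(label: "capture.video")
    private let audioQueue = DispatchQueue(label: "capture.audio")

    func start() throws {
        session.beginConfiguration()

        // Front camera and microphone as inputs.
        guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                   for: .video, position: .front),
              let mic = AVCaptureDevice.default(for: .audio) else {
            throw NSError(domain: "SplitCapture", code: -1)
        }
        session.addInput(try AVCaptureDeviceInput(device: camera))
        session.addInput(try AVCaptureDeviceInput(device: mic))

        // Separate outputs: each delivers its own CMSampleBuffers on its own queue.
        let videoOutput = AVCaptureVideoDataOutput()
        videoOutput.setSampleBufferDelegate(self, queue: videoQueue)
        session.addOutput(videoOutput)

        let audioOutput = AVCaptureAudioDataOutput()
        audioOutput.setSampleBufferDelegate(self, queue: audioQueue)
        session.addOutput(audioOutput)

        session.commitConfiguration()
        session.startRunning()
    }

    // Both delegate protocols use this same method; the output type tells you
    // whether the buffer is a video frame or an audio chunk.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        if output is AVCaptureVideoDataOutput {
            // Hand the pixel buffer to face detection here.
        } else if output is AVCaptureAudioDataOutput {
            // Inspect the PCM samples here, e.g. to estimate the noise level.
        }
    }
}
```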

Firebase- ML Kit library fails to detect barcode in Samsung J5 device

Submitted by 喜你入骨 on 2019-12-03 22:20:00
I have followed https://firebase.google.com/docs/ml-kit/android/read-barcodes and integrated it into my application, but scanning doesn't work on a Samsung J5 device, even though it works fine on a Samsung A5, Moto G4, and Moto G5. Checking logcat, I can see the exception below. Exception: com.google.firebase.ml.common.FirebaseMLException: Waiting for the barcode detection model to be downloaded. Please wait. Can anyone help with this? This can happen if the storage on the device is not sufficient, or if the internet is not available at all (which seems unlikely given the question). Try the following
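One thing worth checking before digging further (a suggestion based on the Firebase ML Kit documentation of that era, not necessarily what the truncated answer goes on to recommend): by default the barcode model is downloaded lazily the first time a detector runs, so the first scans on a fresh device can fail with exactly this "waiting for the model to be downloaded" message. Declaring the model as an install-time dependency in AndroidManifest.xml asks Google Play services to fetch it when the app is installed:

```xml
<application ...>
  <!-- Download the barcode model at install time instead of lazily on first use. -->
  <meta-data
      android:name="com.google.firebase.ml.vision.DEPENDENCIES"
      android:value="barcode" />
</application>
```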

Firebase MLKit Text Recognition Error

Submitted by 可紊 on 2019-12-03 16:24:23
I'm trying to OCR an image using Firebase ML Kit, but it fails and returns the error Text detection failed with error: Failed to run text detector because self is nil. /// Detects texts on the specified image and draws a frame for them. func detectTexts() { let image = #imageLiteral(resourceName: "testocr") // Create a text detector. let textDetector = vision.textDetector() // Check console for errors. // Initialize a VisionImage with a UIImage. let visionImage = VisionImage(image: image) textDetector.detect(in: visionImage) { (features, error) in guard error == nil, let features = features,
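The "self is nil" message usually points to the detector (or the object holding it) being deallocated before the asynchronous callback runs, for example when the detector only exists as a local variable inside the function. A minimal sketch of one common fix, assuming the legacy FirebaseMLVision API used in the question (class and property names are placeholders): keep strong references to the Vision instance and the detector.

```swift
import UIKit
import FirebaseMLVision

final class TextDetectionController {
    // Keep strong references: if the detector only lives as a local variable,
    // it can be deallocated before the async completion handler fires, which
    // surfaces as "Failed to run text detector because self is nil".
    // Assumes FirebaseApp.configure() was called at app launch.
    private let vision = Vision.vision()
    private lazy var textDetector = vision.textDetector()

    func detectTexts(in image: UIImage) {
        let visionImage = VisionImage(image: image)
        textDetector.detect(in: visionImage) { features, error in
            guard error == nil, let features = features else {
                print("Text detection failed: \(String(describing: error))")
                return
            }
            for feature in features {
                print(feature.text)
            }
        }
    }
}
```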

MLKit Text detection on iOS working for photos taken from Assets.xcassets, but not the same photo taken on camera/uploaded from camera roll

Submitted by 强颜欢笑 on 2019-12-02 01:11:35
I'm using Google's Text detection API from MLKit to detect text in images. It seems to work perfectly on screenshots, but when I try to use it on images taken in the app (using AVFoundation) or on photos uploaded from the camera roll, it spits out a small number of seemingly random characters. This is my code for running the actual text detection: func runTextRecognition(with image: UIImage) { let visionImage = VisionImage(image: image) textRecognizer.process(visionImage) { features, error in self.processResult(from: features, error: error) } } func processResult(from text: VisionText?, error:
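A common cause of this: photos from the camera or the camera roll usually carry an EXIF orientation other than .up, whereas bundled assets and screenshots are already upright, so the recognizer effectively sees rotated pixel data. One frequently used workaround (a sketch, not necessarily the asker's eventual fix) is to redraw the UIImage into an upright copy before wrapping it in a VisionImage; alternatively, the image's orientation can be communicated to ML Kit through the VisionImage metadata.

```swift
import UIKit

extension UIImage {
    /// Returns a copy of the image redrawn so that imageOrientation is .up,
    /// which avoids feeding rotated pixel data to the text recognizer.
    func normalizedOrientation() -> UIImage {
        guard imageOrientation != .up else { return self }
        let format = UIGraphicsImageRendererFormat.default()
        format.scale = scale
        return UIGraphicsImageRenderer(size: size, format: format).image { _ in
            draw(in: CGRect(origin: .zero, size: size))
        }
    }
}

// Hypothetical call site, mirroring the question's runTextRecognition:
// let visionImage = VisionImage(image: image.normalizedOrientation())
```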

How to know Tensorflow Lite model's input/output feature info?

Submitted by 。_饼干妹妹 on 2019-12-01 00:15:58
I'm a mobile developer, and I want to use various TensorFlow Lite models (.tflite) with MLKit. The problem is that I have no idea how to find out a .tflite model's input/output feature info (these are needed as setup parameters). Is there any way to know that? Sorry for my bad English, and thanks. Update (18.06.13): I found this site: https://lutzroeder.github.io/Netron/ . It visualizes the graph of an uploaded model (.mlmodel, .tflite, etc.) so you can see the input/output shapes. Here is an example screenshot: https://lutzroeder.github.io/Netron example If you already have a tflite model that you did
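Besides inspecting the graph in Netron, the tensor metadata can also be read programmatically from the model itself. A minimal sketch using the TensorFlowLiteSwift Interpreter (the model path is a placeholder; on Android the Java/Kotlin Interpreter exposes the same information via getInputTensor()/getOutputTensor()):

```swift
import TensorFlowLite

/// Prints the name, shape, and data type of every input and output tensor.
/// Sketch only; assumes the TensorFlowLiteSwift pod and a valid model path.
func printTensorInfo(modelPath: String) throws {
    let interpreter = try Interpreter(modelPath: modelPath)
    try interpreter.allocateTensors()

    for index in 0..<interpreter.inputTensorCount {
        let tensor = try interpreter.input(at: index)
        print("input \(index): \(tensor.name) shape=\(tensor.shape.dimensions) type=\(tensor.dataType)")
    }
    for index in 0..<interpreter.outputTensorCount {
        let tensor = try interpreter.output(at: index)
        print("output \(index): \(tensor.name) shape=\(tensor.shape.dimensions) type=\(tensor.dataType)")
    }
}
```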