google-vision

Add 2D or 3D Face Filters like MSQRD/SnapChat Using Google Vision API for iOS

Submitted by 狂风中的少年 on 2019-11-28 17:14:40
Question: Here's some research I have done so far: I have used the Google Vision API to detect various face landmarks (reference: https://developers.google.com/vision/introduction). Here's the link to sample code that gets the facial landmarks, using the same Google Vision API: https://github.com/googlesamples/ios-vision. I have gone through various blogs on the internet which say MSQRD is based on Google's Cloud Vision. Here's the link to it: https://medium.com/ …
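Once the detector reports landmark positions, placing a 2D filter is mostly geometry. A minimal sketch in plain Java, assuming hypothetical eye coordinates (in the real apps they would come from the face detector's landmark results, e.g. left/right eye positions):

```java
// A minimal sketch of positioning a 2D "glasses" sticker from two eye
// landmarks. The coordinates are hypothetical stand-ins for the values a
// face detector reports.
class StickerPlacement {
    final double centerX, centerY; // midpoint between the eyes
    final double angleDeg;         // roll angle, so the sticker tilts with the head
    final double width;            // sticker width, scaled to the inter-eye distance

    StickerPlacement(double leftX, double leftY, double rightX, double rightY) {
        centerX = (leftX + rightX) / 2.0;
        centerY = (leftY + rightY) / 2.0;
        angleDeg = Math.toDegrees(Math.atan2(rightY - leftY, rightX - leftX));
        width = 2.0 * Math.hypot(rightX - leftX, rightY - leftY); // ~2x eye distance
    }
}
```

The 2x width factor is an arbitrary choice for a glasses-style sticker; a 3D filter would additionally use the head Euler angles the detector reports.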

FaceDetectorHandle: Native face detector not yet available. Reverting to no-op detection

Submitted by 六眼飞鱼酱① on 2019-11-28 05:29:59
Question: I'm trying to incorporate the Google Play Services 7.8 Face API in my app, but every time I try to detect faces it gives me the error: "FaceDetectorHandle: Native face detector not yet available. Reverting to no-op detection". According to the bottom of the Android-er face-detection post, this problem occurs on devices running Lollipop or later. Specifically, they said it works on a "RedMi 2" running Android 4.4.4 with Google Play services version 7.8.99 installed, but not on a Nexus 7 2012 …
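The usual explanation is that Google Play services downloads the native face-detection library on first use, so FaceDetector.isOperational() returns false until the download completes. A minimal polling-guard sketch, with a hypothetical helper name; in the real app the check passed in would be detector::isOperational:

```java
import java.util.function.BooleanSupplier;

// Sketch: poll an isOperational() check before falling back to no-op detection.
// The helper name and retry policy are illustrative, not part of the API.
class DetectorGate {
    static boolean waitUntilOperational(BooleanSupplier isOperational,
                                        int maxAttempts, long sleepMs) {
        for (int i = 0; i < maxAttempts; i++) {
            if (isOperational.getAsBoolean()) return true; // download finished
            try {
                Thread.sleep(sleepMs); // give Play services time to fetch the library
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false; // still no-op detection; warn the user or retry later
    }
}
```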

Google Vision API text detection strange behaviour - Javascript

Submitted by こ雲淡風輕ζ on 2019-11-28 05:07:02
Question: Recently something about the Google Vision API changed. I am using it to recognize text on receipts, and all was good until now: suddenly the API started responding differently to my requests. I sent the same picture to the API today and got a different response than in the past. I verified that nothing changed in my code, so that is not the culprit. Another strange thing: when I upload the image to https://cloud.google.com/vision/, in the response under textAnnotations I get an array of …
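One detail worth keeping in mind when comparing responses: in a TEXT_DETECTION result, textAnnotations[0].description holds the entire recognized text, and the remaining entries hold the individual words with their bounding boxes. The split is language-agnostic; sketched here in plain Java with the descriptions modeled as a simple list (a real client would read them from the JSON response):

```java
import java.util.List;

// Sketch of the textAnnotations convention: index 0 is the full recognized
// text, indices 1..n are per-word entries. The list-of-strings model is a
// stand-in for the JSON objects the API actually returns.
class ReceiptText {
    static String fullText(List<String> descriptions) {
        return descriptions.isEmpty() ? "" : descriptions.get(0);
    }

    static List<String> words(List<String> descriptions) {
        return descriptions.size() <= 1 ? List.of()
                                        : descriptions.subList(1, descriptions.size());
    }
}
```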

Google Vision barcode library not found

Submitted by 女生的网名这么多〃 on 2019-11-27 22:56:24
I'm trying to use the new feature in Google Play Services (Vision) to add QR code scanning to my application. But when I run my app I get this:
I/Vision: Supported ABIS: [armeabi-v7a, armeabi]
D/Vision: Library not found: /data/data/com.google.android.gms/files/com.google.android.gms.vision/barcode/libs/armeabi-v7a/libbarhopper.so
I/Vision: Requesting barcode detector download.
I have declared the barcode dependency as per the tutorial: <meta-data android:name="com.google.android.gms.vision.DEPENDENCIES" android:value="barcode" /> I tried reinstalling the app and restarting the phone; nothing helps.
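For reference, that declaration belongs inside the <application> element of AndroidManifest.xml, and the documented value is a comma-separated list when several detectors are needed. The "Requesting barcode detector download" line means Play services has accepted the request; the native library typically appears after some time online with sufficient free storage:

```xml
<!-- Inside <application> in AndroidManifest.xml: asks Google Play services
     to download the native detector libraries. Use a comma-separated list
     (e.g. "barcode,face") if the app needs more than one detector. -->
<meta-data
    android:name="com.google.android.gms.vision.DEPENDENCIES"
    android:value="barcode" />
```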

How to capture barcode values using the new Barcode API in Google Play Services?

Submitted by 雨燕双飞 on 2019-11-27 18:00:59
I've been playing with the sample code from the new Google Barcode API. It overlays a box and the barcode value over the live camera feed of a barcode (and faces as well). I can't tell how to return a barcode value to my app: (a) how to tell when a detection event has occurred, and (b) how to access the rawValue for use in other parts of my app. Can anyone help with this? https://developers.google.com/vision/multi-tracker-tutorial https://github.com/googlesamples/android-vision UPDATE: Building on @pm0733464's answer, I added a callback interface (called onFound) to the Tracker class that I could access …
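The callback pattern the update describes can be shown without the Android dependencies. Tracker and Barcode below are minimal stand-ins for the com.google.android.gms.vision classes, kept only to mirror their shape: the real Tracker's onNewItem(int, T) fires once per newly detected item (the detection event), and the barcode's rawValue carries the decoded string:

```java
// Sketch: forward each newly tracked barcode's rawValue to the app through a
// listener. Barcode, Tracker, and BarcodeListener are simplified stand-ins,
// not the real Play services types.
class BarcodeTrackerSketch {
    static class Barcode {
        final String rawValue;
        Barcode(String rawValue) { this.rawValue = rawValue; }
    }

    interface BarcodeListener { void onFound(String rawValue); }

    static class Tracker<T> {
        void onNewItem(int id, T item) {} // called once per new detection
    }

    static class BarcodeTracker extends Tracker<Barcode> {
        private final BarcodeListener listener;
        BarcodeTracker(BarcodeListener listener) { this.listener = listener; }

        @Override
        void onNewItem(int id, Barcode barcode) {
            listener.onFound(barcode.rawValue); // hand the value back to the app
        }
    }
}
```

In the real sample, the BarcodeTracker is supplied by the factory passed to MultiProcessor, so each tracked barcode gets its own tracker instance.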

Google Vision API Samples: Get the CameraSource to Focus

Submitted by 為{幸葍}努か on 2019-11-27 15:05:11
I have checked out the latest Google Vision API samples from https://github.com/googlesamples/android-vision and am running them on an LG G2 device with KitKat. The only change I have made is to the minSdkVersion in the Gradle file: ... defaultConfig { applicationId "com.google.android.gms.samples.vision.face.multitracker" minSdkVersion 19 ... However, it does not focus. How do I make it focus? I modified the CameraSourcePreview(...) constructor to be as follows: public CameraSourcePreview(Context context, AttributeSet attrs) { super(context, attrs); mContext = context; mStartRequested = false; …
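The sample's CameraSource never sets a focus mode, so the camera stays on its default. The commonly suggested fix is to obtain the underlying Camera (via reflection or a modified copy of CameraSource) and call Camera.Parameters.setFocusMode() with the best supported mode. The Android calls are omitted here; this sketch only encodes a sensible preference order over the mode strings:

```java
import java.util.List;

// Sketch: choose a focus mode from Camera.Parameters.getSupportedFocusModes().
// The string constants match Camera.Parameters.FOCUS_MODE_*; the preference
// order itself is a judgment call, not mandated by the API.
class FocusModePicker {
    private static final String[] PREFERRED = {
        "continuous-picture", // FOCUS_MODE_CONTINUOUS_PICTURE
        "continuous-video",   // FOCUS_MODE_CONTINUOUS_VIDEO
        "auto"                // FOCUS_MODE_AUTO
    };

    static String pick(List<String> supportedModes) {
        for (String mode : PREFERRED) {
            if (supportedModes.contains(mode)) return mode;
        }
        return null; // leave the camera's default mode in place
    }
}
```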

Media Recorder with Google Vision API

Submitted by 会有一股神秘感。 on 2019-11-27 13:28:08
I am using the FaceTracker sample from the Android Vision API. However, I am having difficulty recording video while the overlays are drawn on it. One way is to store bitmaps as images and process them with FFmpeg or Xuggler to merge them into a video, but I am wondering whether there is a better solution, if we can record video at runtime as the preview is projected. Update 1: I updated the following class with a MediaRecorder, but recording is still not working. It throws the following error when I call the triggerRecording() function: MediaRecorder: start called in …
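"start called in an invalid state" is MediaRecorder's way of saying start() ran before the recorder reached the Prepared state; the set*() configuration calls, prepare(), and start() must happen in that order. A sketch of that contract as a tiny state machine (the state names are simplified stand-ins for the states in the MediaRecorder documentation):

```java
// Sketch: MediaRecorder's required call order, reduced to a minimal state
// machine. configure() stands in for the whole set*() sequence; the real
// recorder has more states (Initialized, DataSourceConfigured, etc.).
class RecorderStateMachine {
    enum State { INITIAL, CONFIGURED, PREPARED, RECORDING }

    private State state = State.INITIAL;

    void configure() { // setAudioSource/setVideoSource/setOutputFormat/...
        if (state != State.INITIAL) throw new IllegalStateException("configure in " + state);
        state = State.CONFIGURED;
    }

    void prepare() {
        if (state != State.CONFIGURED) throw new IllegalStateException("prepare in " + state);
        state = State.PREPARED;
    }

    void start() { // this is the call that fails in the question's log
        if (state != State.PREPARED) throw new IllegalStateException("start in " + state);
        state = State.RECORDING;
    }

    State state() { return state; }
}
```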

Mobile Vision API - concatenate new detector object to continue frame processing

Submitted by 我只是一个虾纸丫 on 2019-11-26 18:54:02
I want to use the new face detection feature that the Vision API provides, along with additional frame processing, in an application. For this I need access to the camera frame that was processed by the face detector, so I can concatenate a processor that uses the detected-face data. As I see in the sample, CameraSource abstracts away the detection and camera access, and I can't get at the frame being processed. Are there examples of how to get the camera frame in this API, or, maybe, of creating and concatenating a detector that receives it? Is that possible at least? Thanks, Lucio
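One commonly suggested shape for this: wrap the stock detector in your own Detector so every Frame passes through your code before being delegated. Frame handling is modeled here with a plain byte[] and a Function stand-in for the wrapped detector's detect() call; the real com.google.android.gms.vision types differ, so this shows only the delegation pattern:

```java
import java.util.List;
import java.util.function.Function;

// Sketch: a pass-through wrapper that captures each frame for extra
// processing, then delegates detection to the wrapped detector.
// byte[] and Function are stand-ins for Frame and Detector.detect().
class FrameTapDetector<T> {
    private final Function<byte[], List<T>> delegate; // e.g. faceDetector::detect
    private byte[] lastFrame;                          // frame kept for further processing

    FrameTapDetector(Function<byte[], List<T>> delegate) {
        this.delegate = delegate;
    }

    List<T> detect(byte[] frame) {
        lastFrame = frame;            // grab the raw frame data first...
        return delegate.apply(frame); // ...then run the wrapped detector on it
    }

    byte[] lastFrame() { return lastFrame; }
}
```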
