face-detection

New Android Face API limitations

你。 Submitted on 2019-12-04 17:09:21
I have been testing the new Face API released for Android, and noticed that even with "ACCURATE_MODE" enabled, it doesn't detect faces that the old FaceDetector API used to detect. I would also like to know the effect of Bitmap encoding "RGB_565" vs "ARGB_8888" on the results. Update: The issue was that, by default, the face detector only detects faces that are at least 10% of the image width (as a performance optimization). The new Google Play Services 8.4 release supports setting this minimum face size lower, enabling smaller faces to be detected. See the setMinFaceSize method here: https://developers
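The minimum-size behaviour described in the update can be illustrated with a small stand-in filter (a Python sketch; filter_faces_by_min_size is a hypothetical helper, not part of the Google API): detections narrower than a given fraction of the image width are simply discarded, which is why small faces seem to "disappear" until the cutoff is lowered.

```python
def filter_faces_by_min_size(face_rects, image_width, min_face_fraction=0.1):
    """Keep only detections whose width is at least min_face_fraction
    of the image width (mirrors the detector's default 10% cutoff)."""
    min_width = image_width * min_face_fraction
    return [(x, y, w, h) for (x, y, w, h) in face_rects if w >= min_width]

# On a 1000 px wide frame with the default 10% cutoff, a 50 px face
# is dropped; lowering the cutoff to 0.05 keeps it.
rects = [(10, 10, 50, 50), (200, 200, 150, 150)]
print(filter_faces_by_min_size(rects, 1000))        # only the 150 px face
print(filter_faces_by_min_size(rects, 1000, 0.05))  # both faces
```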

Detecting open vs. closed eyes on Android, following "Android eye detection and tracking with OpenCV"

限于喜欢 Submitted on 2019-12-04 16:36:01
I made an eye-detecting application by following this link, and it works. How can I detect whether the eye is open or closed? Is there a library on Android to detect open vs. closed eyes? I have no idea whether there is any library for that, but in my opinion, using the technique described in the article "Eye-blink detection system for human–computer interaction" by Aleksandra Królak and Paweł Strumiłło (you can download it here and here, and here is a simplified version) is a good option. Generally this technique is quite simple: Find the eye (or both eyes). Remember this part of the image as a template. In the next frame, use

Auto-capture an image from a video in OpenCV using python

对着背影说爱祢 Submitted on 2019-12-04 13:49:11
I am trying to develop a program that functions as a self-timer camera. The video is shown in a window, the person's face and eyes are continuously detected, and once the user selects a specific time, the frame at that point in time is captured. I am able to capture the frame after a certain delay using the sleep function in the time module, but the video frame freezes while sleeping. Is there any solution such that I can continue to see the video and have the capture happen automatically after the delay? I am using the code:

import numpy as np
import cv2
import time
import cv2.cv as cv
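One way to avoid the freezing sleep is to keep the read/imshow loop running and check elapsed time on every iteration instead of blocking. A sketch of that idea (self_timer and time_to_capture are hypothetical names, and this uses the modern cv2 API rather than the legacy cv2.cv module):

```python
import time


def time_to_capture(start_time, now, delay_seconds):
    """Pure timing check: has the user-selected delay elapsed?"""
    return (now - start_time) >= delay_seconds


def self_timer(delay_seconds=5, out_path="capture.png"):
    import cv2  # imported here so the timing helper works without OpenCV
    cap = cv2.VideoCapture(0)
    start = time.time()
    captured = False
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow("preview", frame)  # preview never blocks
        if not captured and time_to_capture(start, time.time(), delay_seconds):
            cv2.imwrite(out_path, frame)  # snapshot; preview keeps running
            captured = True
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```

The key design point is that the loop never sleeps: each pass shows a frame, and the capture fires on whichever pass first sees the delay elapsed.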

Capture camera preview for use in OpenCV, converting to RGB and gray Mats (Java, Android)

我只是一个虾纸丫 Submitted on 2019-12-04 13:25:50
Question: I want to detect faces on camera previews. I saw this example in the OpenCV samples:

@Override protected Bitmap processFrame(VideoCapture capture) {
    capture.retrieve(mRgba, Highgui.CV_CAP_ANDROID_COLOR_FRAME_RGBA);
    capture.retrieve(mGray, Highgui.CV_CAP_ANDROID_GREY_FRAME);
    if (mCascade != null) {
        int height = mGray.rows();
        int faceSize = Math.round(height * FdActivity.minFaceSize);
        List<Rect> faces = new LinkedList<Rect>();
        mCascade.detectMultiScale(mGray, faces, 1.1, 2, 2 // TODO: objdetect.CV

Delphi Components for Face Identification and Tagging

时间秒杀一切 Submitted on 2019-12-04 13:13:12
Question: Are there any good components, free or commercial, available for Delphi (I use Delphi 2009) that will allow me to easily implement face detection and tagging of faces in photos (i.e. graphics/images)? I need to do something similar to what Google Picasa's Web Albums can do, but from within my application. Answer 1: Here is what you wanted: http://delphimagic.blogspot.com/2011/08/reconocimiento-de-caras-con-delphi.html Answer 2: Did you see the SDKs mentioned in the answers to "Face recognition Library"?

Android camera2.params.face rectangle placement on canvas

丶灬走出姿态 Submitted on 2019-12-04 13:09:16
Question: I'm trying to implement face detection in my camera preview. I followed the Android reference pages to implement a custom camera preview in a TextureView, placed in a FrameLayout. Also in this FrameLayout is a SurfaceView with a clear background (overlapping the camera preview). My app draws the Rect given by the first CaptureResult.STATISTICS_FACES face's bounds onto the SurfaceView's canvas, every time the camera preview is updated (once per frame). My app assumes

OpenCV Lip Segmentation

微笑、不失礼 Submitted on 2019-12-04 12:33:42
Question: How do people usually extract the shape of the lips once the mouth region is found (in my case using a Haar cascade)? I tried color segmentation and edge/corner detection, but they're very inaccurate for me. I need to find the two corners and the uppermost and lowermost points of the lips at the center. I've heard about active appearance models, but I'm having trouble understanding how to use them with Python, and I don't have enough context to figure out whether this is even the conventional method for detecting
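As a crude geometric baseline (not the active-appearance-model approach), the four points can be read straight off a binary lip mask: the leftmost and rightmost lip pixels give the two corners, and the extremes of the centre column give the upper and lower lip. A NumPy sketch (lip_landmarks is a hypothetical helper; it assumes segmentation already produced a reasonable mask):

```python
import numpy as np


def lip_landmarks(mask):
    """Given a binary lip mask (nonzero = lip pixel), return the two mouth
    corners and the top/bottom lip points at the mouth centre, each as
    (x, y). Returns None if the mask is empty."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    left = (int(xs.min()), int(ys[xs.argmin()]))    # leftmost corner
    right = (int(xs.max()), int(ys[xs.argmax()]))   # rightmost corner
    cx = (left[0] + right[0]) // 2                  # centre column
    col = ys[xs == cx]
    if col.size == 0:  # disconnected mask: no lip pixels at the centre
        return left, right, None, None
    top = (cx, int(col.min()))
    bottom = (cx, int(col.max()))
    return left, right, top, bottom
```

This is only as good as the mask, which is exactly where color segmentation struggles; landmark-model approaches exist precisely because the mask-based baseline is fragile.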

Using OpenCV detectMultiScale to find my face

不问归期 Submitted on 2019-12-04 12:24:02
Question: I'm pretty sure I have the general theme correct, but I'm not finding any faces. My code reads from c = cv2.VideoCapture(0), i.e. the computer's video camera. I then have the following set up to find where the faces are. As you can see, I'm looping through different scaleFactors and minNeighbors, but rects always comes back empty. I've also tried each of the four different Haar cascade XML files included in the opencv/data/haarcascades package. Any tips?

while(1):
    ret, frame = c.read()
    rects =
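A parameter sweep like the one described can be organized as below. The two most common reasons rects stays empty are a cascade path that silently fails to load and a frame that was never converted to grayscale (sketch; sweep_detect and the parameter values are illustrative):

```python
import itertools


def parameter_grid(scale_factors, min_neighbors):
    """All (scaleFactor, minNeighbors) combinations to try, in order."""
    return list(itertools.product(scale_factors, min_neighbors))


def sweep_detect(gray, cascade_path):
    import cv2  # imported here so parameter_grid works without OpenCV
    cascade = cv2.CascadeClassifier(cascade_path)
    # A bad path does NOT raise - the classifier just loads empty, and
    # detectMultiScale then returns nothing, which looks like "no faces".
    if cascade.empty():
        raise IOError("cascade failed to load: " + cascade_path)
    # gray must be single-channel: cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for sf, mn in parameter_grid([1.05, 1.1, 1.3], [3, 4, 5]):
        rects = cascade.detectMultiScale(gray, scaleFactor=sf, minNeighbors=mn)
        if len(rects):
            return rects, (sf, mn)
    return [], None
```

Checking cascade.empty() first is the single highest-value diagnostic: it separates "wrong parameters" from "the classifier never loaded" before any sweeping begins.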

OpenCV / Python : multi-threading for live facial recognition

你。 Submitted on 2019-12-04 12:14:29
I'm using OpenCV and dlib to perform facial recognition with landmarks, live from the webcam stream. The language is Python. It works fine on my MacBook laptop, but I need it to run on a desktop computer 24/7. The computer is a PC with an Intel® Core™2 Quad CPU Q6600 @ 2.40GHz, 32-bit, running Debian Jessie. The drop in performance is drastic: there is a 10-second delay due to processing! I therefore looked into multi-threading to gain performance: I first tried the sample code from OpenCV, and the result was great! All four cores hit 100%, and the performance is much better. I then replaced the frame
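A common pattern for this producer/consumer split is a capture thread feeding a one-slot queue that always holds the newest frame, so the slow dlib stage processes the latest image instead of accumulating a 10-second backlog (a stdlib sketch; put_latest and capture_loop are hypothetical names):

```python
import queue
import threading


def put_latest(frame_queue, frame):
    """Keep only the newest frame: discard the stale one instead of
    blocking, so the detector never falls behind the camera."""
    try:
        frame_queue.get_nowait()  # drop the unprocessed frame, if any
    except queue.Empty:
        pass
    frame_queue.put(frame)


def capture_loop(cap, frame_queue, stop_event):
    """Producer thread: read frames as fast as the camera delivers them."""
    while not stop_event.is_set():
        ok, frame = cap.read()
        if ok:
            put_latest(frame_queue, frame)
```

The consumer (main thread) then calls frame_queue.get() and runs detection and landmarks on whatever frame is freshest; dropped frames are the price of staying real-time on slow hardware.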

Android Face Detection API - Stored video file

六眼飞鱼酱① Submitted on 2019-12-04 12:08:02
I would like to perform face detection/tracking on a video file (e.g. an MP4 from the user's gallery) using the Android Vision FaceDetector API. I can see many examples of using the CameraSource class to perform face tracking on the stream coming directly from the camera (e.g. on the android-vision GitHub), but nothing on video files. I tried looking at the source code for CameraSource through Android Studio, but it is obfuscated, and I couldn't find the original online. I imagine there are many commonalities between using the camera and using a file. Presumably I just play the video file on a