face-detection

Android: How to draw on a SurfaceView which is already displaying a Camera Preview

孤街浪徒 submitted on 2019-12-10 10:27:52
Question: I am trying to program an application which has to display on the mobile phone's screen what is being filmed by the front camera [the application is not recording/saving anything in the phone's memory]. Also, if a face is filmed (and detected), it has to appear surrounded by a rectangle. To do so I'm using: a SurfaceView to display what is being filmed by the front camera, and a FaceDetectionListener to detect faces in the camera input. So far the application displays properly what is…
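For illustration only (not the Android SurfaceView/FaceDetectionListener API the question is about), the detect-and-draw loop can be sketched in Python with OpenCV, assuming the Haar cascade bundled with the opencv-python package:

    # Sketch of a "preview plus rectangle around detected face" loop (Python/OpenCV,
    # not Android). Assumes the default frontal-face Haar cascade shipped with OpenCV.
    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    cap = cv2.VideoCapture(0)  # default camera
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("preview", frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()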

OpenCV detect face landmarks (ear-chin-ear line)

爱⌒轻易说出口 submitted on 2019-12-10 09:49:07
问题 I am looking to an opencv function (in python) detecting the line left ear - chin - right ear (that looks like a parabol) on human faces. Is there any kind of haarcascade doing this job? I already know the frontal face or the eyes haarcascades but I am looking for something more precise. 回答1: what you are looking for is called face landmark detection . You can try DLIB . DLIB is written in C++ but it also have a python wrapper .Install Instructions Now using DLib you can achieve this Code
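A minimal Python sketch of the jawline extraction with dlib's 68-point predictor, assuming the shape_predictor_68_face_landmarks.dat model file has been downloaded separately (landmarks 0–16 trace the ear–chin–ear contour):

    # Sketch: extract the ear-chin-ear (jaw) contour with dlib's 68-point model.
    # Assumes shape_predictor_68_face_landmarks.dat is available locally.
    import cv2
    import dlib

    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

    img = cv2.imread("face.jpg")  # placeholder file name
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    for face in detector(gray):
        shape = predictor(gray, face)
        # Points 0..16 run from one ear, under the chin, to the other ear.
        jaw = [(shape.part(i).x, shape.part(i).y) for i in range(17)]
        for i in range(len(jaw) - 1):
            cv2.line(img, jaw[i], jaw[i + 1], (0, 255, 0), 2)

    cv2.imwrite("jawline.jpg", img)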

Augmented Faces API – How facial landmarks generated?

守給你的承諾、 submitted on 2019-12-10 03:59:54
Question: I'm an IT student and would like to know (understand) more about the Augmented Faces API in ARCore. I just saw the ARCore v1.7 release and the new Augmented Faces API. I get the enormous potential of this API, but I didn't see any questions or articles on the subject. So I'm questioning myself, and here are some assumptions/questions which come to my mind about this release. Assumption: the ARCore team is using machine learning (like Instagram and Snapchat) to generate landmarks all over…

Real-time face detection with Camera in Swift 3

那年仲夏 submitted on 2019-12-09 07:19:44
Question: How can I do face detection in real time just as "Camera" does, with a white round shape around and over the face? I use AVCaptureSession. I have the image I saved for facial detection. Below I have attached my current code; it only captures an image when I press the button and saves it to the photo gallery. Can someone please help me create the real-time round shape over the person's face? Code: class CameraFaceRecongnitionVC: UIViewController { @IBOutlet weak var imgOverlay:…

iOS face detector orientation and setting of CIImage orientation

好久不见. submitted on 2019-12-09 07:00:14
Question: EDIT: found this code that helped with front camera images: http://blog.logichigh.com/2008/06/05/uiimage-fix/ Hope others have had a similar issue and can help me out. Haven't found a solution yet. (It may seem a bit long, but it's just a bunch of helper code.) I'm using the iOS face detector on images acquired from the camera (front and back) as well as images from the gallery (I'm using the UIImagePicker – for both image capture by camera and image selection from the gallery – not using AVFoundation…

How to add part of face A to part of face B, most importantly matching color tones

孤者浪人 submitted on 2019-12-08 11:31:29
Question: I have been successful in detecting faces, cropping, and pasting into a new image. However, I am looking for a way to match the color tone of face A to face B. As it stands, if you look at face B, the cropped image's color tone does not match face B's face color. How can I get the closest possible match? An exact solution would be good, but links or an approach will be appreciated. I can provide code for cropping and pasting. Thanks. Answer 1: Since it seemed to be helpful for you, fading the edges of the images as described…
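One common approach to the color-tone part (not necessarily what the accepted answer used) is Reinhard-style color transfer: match the mean and standard deviation of the cropped face to the target face in LAB space. A rough Python/OpenCV sketch, with file names as placeholders:

    # Sketch of Reinhard-style color transfer in LAB space: make the color tone of
    # the source crop (face A) match the target (face B). File names are placeholders.
    import cv2
    import numpy as np

    def transfer_color_tone(source_bgr, target_bgr):
        src = cv2.cvtColor(source_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
        tgt = cv2.cvtColor(target_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
        for c in range(3):
            s_mean, s_std = src[:, :, c].mean(), src[:, :, c].std() + 1e-6
            t_mean, t_std = tgt[:, :, c].mean(), tgt[:, :, c].std()
            # Shift and scale each channel so its statistics match the target's.
            src[:, :, c] = (src[:, :, c] - s_mean) * (t_std / s_std) + t_mean
        src = np.clip(src, 0, 255).astype(np.uint8)
        return cv2.cvtColor(src, cv2.COLOR_LAB2BGR)

    face_a = cv2.imread("face_a_crop.jpg")
    face_b = cv2.imread("face_b.jpg")
    cv2.imwrite("face_a_matched.jpg", transfer_color_tone(face_a, face_b))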

Viola Jones - How to scale a weak classifier (feature)

℡╲_俬逩灬. submitted on 2019-12-08 06:49:55
Question: Once you've trained a strong classifier for the Viola-Jones face detector, you are supposed to run a 24x24 subwindow over your test images, and once you've moved it over the whole image, you are supposed to scale it (the paper recommends x1.5 each time). My question: the point of this is that features are easily calculated at different scales. However, how are you supposed to scale a feature? Do you just multiply the width/height by the scale factor? Or do you have to move it as well? (scale…
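For intuition only (a sketch, not the paper's exact implementation): each Haar-like feature is a set of rectangles defined relative to the 24x24 base window, so evaluating it at a larger scale means multiplying the rectangle positions and sizes by the scale factor (and rounding), then shifting the enlarged window across the image as before and normalizing the response by the window area.

    # Sketch: scaling a Haar-like feature's rectangles relative to the 24x24 base
    # window. Each rectangle is (x, y, w, h, weight) in base-window coordinates.
    def scale_feature(rects, scale):
        scaled = []
        for (x, y, w, h, weight) in rects:
            scaled.append((int(round(x * scale)), int(round(y * scale)),
                           int(round(w * scale)), int(round(h * scale)), weight))
        return scaled

    # Example: a two-rectangle edge feature in the 24x24 window, scaled by 1.5 so
    # it fits a 36x36 detection window. The scaled window is then still shifted
    # across the image as usual.
    base_feature = [(4, 4, 8, 16, +1), (12, 4, 8, 16, -1)]
    print(scale_feature(base_feature, 1.5))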

Capturing camera frame in android after face detection

我与影子孤独终老i submitted on 2019-12-07 02:49:17
Question: I am working with face detection in Android and I want to achieve the following: 1. Use the face detection listener in Android for detecting faces on the camera frame. 2. If a face is detected on the camera frame, then extract the face and save it to external storage. After searching through existing questions, I have found that there is no direct way to convert a detected face to a bitmap and store it on disk. So now I want to capture and save the entire camera frame in which the face has been detected…
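As a language-agnostic illustration only (Python/OpenCV, not the Android camera API): once a face is detected in a frame, both the whole frame and the cropped face region can be written straight to disk.

    # Illustration (Python/OpenCV): save the whole frame and the cropped face
    # region when a face is detected. Uses OpenCV's bundled Haar cascade.
    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    if ok:
        faces = cascade.detectMultiScale(
            cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 1.3, 5)
        if len(faces) > 0:
            cv2.imwrite("frame_with_face.jpg", frame)           # entire frame
            x, y, w, h = faces[0]
            cv2.imwrite("face_crop.jpg", frame[y:y+h, x:x+w])   # just the face
    cap.release()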

Blending using GPUImagePoissonBlendFilter

陌路散爱 submitted on 2019-12-06 23:33:30
Question: I'm trying to use GPUImagePoissonBlendFilter from the GPUImage framework to blend two faces in my face blending application. Here is my code. - (void)applyPoissonBlendToImage:(UIImage *) rearFace withImage:(UIImage *) frontFace { GPUImagePicture* picture1 = [[GPUImagePicture alloc] initWithImage:rearFace]; GPUImagePicture* picture2 = [[GPUImagePicture alloc] initWithImage:frontFace]; GPUImagePoissonBlendFilter * poissonFilter = [[GPUImagePoissonBlendFilter alloc] init]; poissonFilter.mix = .7;…
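For reference only (not GPUImage), the same Poisson blending idea is exposed in OpenCV as seamlessClone; a rough Python sketch with placeholder file names:

    # Reference sketch: Poisson (seamless) blending of a front face onto a rear
    # face with OpenCV's seamlessClone. File names and placement are placeholders;
    # the source patch must fit inside the destination at the chosen center.
    import cv2
    import numpy as np

    rear_face = cv2.imread("rear_face.jpg")    # destination image
    front_face = cv2.imread("front_face.jpg")  # source patch to blend in

    # Blend the whole source patch; a tighter face-shaped mask gives better results.
    mask = 255 * np.ones(front_face.shape[:2], dtype=np.uint8)
    center = (rear_face.shape[1] // 2, rear_face.shape[0] // 2)

    blended = cv2.seamlessClone(front_face, rear_face, mask, center, cv2.NORMAL_CLONE)
    cv2.imwrite("blended.jpg", blended)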

Can someone explain detectMultiScale in OpenCV?

人走茶凉 submitted on 2019-12-06 16:50:32
I've been trying object detection in OpenCV. I followed a few steps: resizing the image to 64x64 resolution, converting it to grayscale, fetching the XML for object detection, and drawing a rectangle around the detected pattern. Yet I couldn't achieve it. Here's my code:

    #include<iostream>
    #include "cv.h"
    #include "highgui.h"
    #include<vector>
    using namespace cv;
    using namespace std;
    int main() {
        IplImage* img;
        img = cvLoadImage( "hindi3.jpg" );
        vector<cv::Rect> objects;
        // ***Resize image to 64x64 resolution***
        IplImage *resizeImage = cvCreateImage(cvSize(64,64),8,3);
        cvResize(img,resizeImage,CV_INTER_LINEAR);…
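For comparison, a minimal sketch of the same pipeline with the modern cv2 Python API (the cascade file name is a placeholder). detectMultiScale's main parameters are scaleFactor (how much the image is shrunk between pyramid levels), minNeighbors (how many overlapping candidate detections are required to keep a hit) and minSize (smallest object to consider):

    # Reference sketch of the pipeline with the modern cv2 API (Python).
    # "hindi3.jpg" is from the question; "cascade.xml" is a placeholder path.
    import cv2

    img = cv2.imread("hindi3.jpg")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    cascade = cv2.CascadeClassifier("cascade.xml")
    objects = cascade.detectMultiScale(
        gray,
        scaleFactor=1.1,   # image pyramid step: shrink by 10% per level
        minNeighbors=3,    # overlapping detections needed to keep a hit
        minSize=(24, 24),  # ignore candidates smaller than the training window
    )
    for (x, y, w, h) in objects:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.imwrite("detected.jpg", img)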