face-detection

Is it possible to save a user's skeleton and facial data for recognition purposes?

孤者浪人 submitted on 2019-12-06 04:07:49
Question: I would like to keep track of people who enter and exit a premises. When a user approaches the Kinect, it will store his/her facial and skeletal data; upon leaving, that data will be removed. For now I am only wondering whether this is possible with the Microsoft SDK. I have seen videos/demos of the Kinect tracking people, but my goal is to identify them uniquely. Any information will be greatly appreciated. Answer 1: Yes, you can save skeleton and face…
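The bookkeeping the question describes (store on entry, delete on exit) can be sketched independently of the Kinect SDK. This is a minimal, language-neutral Python sketch; the tracking ID and the descriptor format (a plain dict) are placeholder assumptions, not Kinect SDK types.

```python
# Minimal sketch (not the Kinect SDK API): an in-memory registry that
# stores a visitor's skeleton/face data on entry and drops it on exit.
# tracking_id and the descriptor dict are hypothetical placeholders.

class VisitorRegistry:
    def __init__(self):
        self._visitors = {}  # tracking_id -> descriptor

    def on_enter(self, tracking_id, descriptor):
        # Save the data captured when the person approaches the sensor.
        self._visitors[tracking_id] = descriptor

    def on_exit(self, tracking_id):
        # Remove the stored data once the person leaves the premises.
        self._visitors.pop(tracking_id, None)

    def is_known(self, tracking_id):
        return tracking_id in self._visitors

registry = VisitorRegistry()
registry.on_enter(42, {"face": [0.1, 0.2], "skeleton": [0.3, 0.4]})
print(registry.is_known(42))  # True
registry.on_exit(42)
print(registry.is_known(42))  # False
```

In a real deployment the descriptor would be whatever face/skeleton representation the SDK exposes, persisted to disk rather than memory if it must survive restarts.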

OpenCV detect face landmarks (ear-chin-ear line)

落花浮王杯 submitted on 2019-12-05 20:38:19
I am looking for an OpenCV function (in Python) that detects the left ear - chin - right ear line (which looks like a parabola) on human faces. Is there any Haar cascade that does this job? I already know the frontal-face and eye Haar cascades, but I am looking for something more precise. What you are looking for is called face landmark detection. You can try DLIB. DLIB is written in C++, but it also has a Python wrapper (see its install instructions). Using DLib you can achieve this:

    import cv2
    import dlib
    import numpy
    PREDICTOR_PATH = "/home/zed/dlib/files/shape_predictor_68_face_landmarks…
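In dlib's standard 68-point annotation scheme, the ear-chin-ear contour the question asks about is simply landmark indices 0 through 16. The sketch below shows only that slicing step, with a hand-made list of (x, y) tuples standing in for the predictor output, so the dlib calls themselves are omitted.

```python
# Sketch: once dlib's 68-point shape predictor has run, the ear-chin-ear
# contour is landmark indices 0..16 of the result. `fake` is a stand-in
# for real predictor output.

JAWLINE = slice(0, 17)  # points 0-16 in the 68-point annotation scheme

def jawline_points(landmarks):
    """Return the 17 jaw-contour points from a 68-point landmark list."""
    if len(landmarks) != 68:
        raise ValueError("expected 68 landmarks")
    return landmarks[JAWLINE]

# Fake landmarks: point i placed at (i, i) just to show the slicing.
fake = [(i, i) for i in range(68)]
jaw = jawline_points(fake)
print(len(jaw), jaw[0], jaw[-1])  # 17 (0, 0) (16, 16)
```

With real data, `landmarks` would be built from the `dlib.shape_predictor` result (one `part(i)` per index).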

cvPyrDown vs cvResize for face detection optimization

南楼画角 submitted on 2019-12-05 15:41:38
I want to optimize my face detection algorithm by scaling down the image. What is the best way? Should I use cvPyrDown (which I saw in one example, with poor results so far), cvResize, or another function? If you only want to scale the image, use cvResize as Adrian Popovici suggested. cvPyrDown will apply a Gaussian blur to smooth the image, then by default it will down-sample the image by a factor of two by rejecting even columns and rows. This smoothing may be degrading your performance (I'm not sure how it affects the detection algorithm). Another possibility for the poor performance…
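The decimation step the answer describes can be illustrated without OpenCV: after its Gaussian blur, pyrDown keeps every second row and column, halving each dimension. This pure-Python sketch shows only that size reduction (the blur is omitted).

```python
# Pure-Python sketch of the decimation pyrDown performs after its
# Gaussian blur: keep every second row and column, halving each
# dimension. A 2D list of numbers stands in for the image.

def decimate(image):
    """image: 2D list of pixel values -> half-size 2D list."""
    return [row[::2] for row in image[::2]]

img = [[r * 10 + c for c in range(6)] for r in range(4)]  # 4x6 "image"
small = decimate(img)
print(len(small), len(small[0]))  # 2 3
```

cvResize, by contrast, interpolates to an arbitrary target size with no implicit blur, which is why it is the better fit when you only want a smaller image for the detector.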

Java and haarcascade face and mouth detection - mouth as the nose

匆匆过客 submitted on 2019-12-05 12:34:53
Question: Today I began testing a project that detects a smile in Java and OpenCV. To recognize the face and mouth, the project uses haarcascade_frontalface_alt and haarcascade_mcs_mouth, but I don't understand why in some cases the project detects the nose as a mouth. I have two methods:

    private ArrayList<Mat> detectMouth(String filename) {
        int i = 0;
        ArrayList<Mat> mouths = new ArrayList<Mat>();
        // reading image in grayscale from the given path
        image = Highgui.imread(filename, Highgui.CV_LOAD_IMAGE_GRAYSCALE);…

Capturing camera frame in android after face detection

风格不统一 submitted on 2019-12-05 05:55:10
I am working with face detection in Android and I want to achieve the following: 1. Use the face detection listener in Android to detect faces in the camera frame. 2. If a face is detected in the camera frame, extract the face and save it to external storage. After going through existing questions, I have found that there is no direct way to convert a detected face to a bitmap and store it on disk. So now I want to capture and save the entire camera frame in which the face was detected, and I have not been able to do so. The current code structure is as follows: FaceDetectionListener…

Augmented Faces API – How facial landmarks generated?

大憨熊 submitted on 2019-12-05 04:28:12
I'm an IT student and would like to know (understand) more about the Augmented Faces API in ARCore. I just saw the ARCore v1.7 release and the new Augmented Faces API. I see the enormous potential of this API, but I haven't found any questions or articles on the subject. So I'm questioning myself, and here are some assumptions/questions that come to mind about this release. Assumption: the ARCore team is using machine learning (like Instagram and Snapchat) to generate landmarks all over the face, probably HOG face detection. Questions: How does ARCore generate 468 points all over the…

Facial feature detection with OpenCV with eyes and mouth corners points

家住魔仙堡 submitted on 2019-12-05 02:42:53
Question: I'm working on a face feature detection project, and I detect the eyes, nose and mouth using OpenCV with Haar cascade XML files. But I want the corner points of the eyes and mouth and the center of the nose; the goal is to use them to predict emotions. I found this link that shows how it works, and I need to get to this result using Java: http://cmp.felk.cvut.cz/~uricamic/flandmark/ Could anyone help me? Thanks in advance. In this part we receive the face image and draw a rect on the face: public…
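Once a landmark detector (such as the linked flandmark, or dlib) returns a point set per feature, the quantities the question asks for fall out of simple geometry: the eye/mouth corners are the leftmost and rightmost points of the set, and the nose center is the centroid. This Python sketch uses hand-made point sets as stand-ins for detector output.

```python
# Sketch of deriving corner points from a detected feature region:
# treat the leftmost and rightmost landmark of an eye/mouth point set
# as its corners, and the centroid of the nose points as its center.

def corners(points):
    left = min(points, key=lambda p: p[0])    # smallest x
    right = max(points, key=lambda p: p[0])   # largest x
    return left, right

def center(points):
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

eye = [(10, 5), (14, 3), (18, 5), (14, 7)]  # fake eye contour
print(corners(eye))  # ((10, 5), (18, 5))
nose = [(4, 4), (6, 4), (5, 6)]
print(center(nose))  # (5.0, 4.666...)
```

The same min/max-by-x rule translates directly to Java with a loop over the landmark array.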

How to do realtime face detection?

浪子不回头ぞ submitted on 2019-12-04 19:42:12
How can I do realtime face detection while using the iPhone camera to take a picture? Just like this example: http://www.morethantechnical.com/2009/08/09/near-realtime-face-detection-on-the-iphone-w-opencv-port-wcodevideo/ (this example doesn't provide the .xcodeproj, so I can't compile the .cpp file). Another example: http://blog.beetlebugsoftware.com/post/104154581/face-detection-iphone-source (can't be compiled). Do you have any solution? Please give a hand! Wait for iOS 5: create amazing effects in your camera and image editing apps with Core Image. Core Image is a hardware-accelerated framework…

Take photo when face detected

家住魔仙堡 submitted on 2019-12-04 19:28:51
I have the following code, and I want to automatically take only one photo when a face is detected. I have managed to take a photo automatically, but it takes many photos, with no time to process them, because it continuously detects the face. How can I make it search for a face, or take a photo, only every x minutes? Thank you in advance.

    FaceDetectionListener faceDetectionListener = new FaceDetectionListener() {
        @Override
        public void onFaceDetection(Face[] faces, Camera camera) {
            if (faces.length == 0) {
                prompt.setText(" No Face Detected! ");
            } else {
                //prompt.setText(String.valueOf(faces…
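The usual answer is a cooldown gate: record when the last photo was taken and ignore further detections until the interval has elapsed. This is a language-neutral Python sketch of that logic (the Java listener would call the equivalent of `gate.ready()` before `takePicture`); the injectable clock is only there to make the behaviour demonstrable without waiting.

```python
# Sketch of the rate limiting the question asks for: let one
# "take photo" trigger through, then suppress further triggers until
# the interval has elapsed, however often a face is reported.
import time

class Cooldown:
    def __init__(self, interval_s, clock=time.monotonic):
        self.interval = interval_s
        self.clock = clock
        self._last = None  # time of the last allowed trigger

    def ready(self):
        now = self.clock()
        if self._last is None or now - self._last >= self.interval:
            self._last = now
            return True
        return False

# Fake clock so the behaviour is visible without actually waiting.
t = [0.0]
gate = Cooldown(60.0, clock=lambda: t[0])
print(gate.ready())  # True  - first detection takes the photo
print(gate.ready())  # False - still inside the 60 s window
t[0] = 61.0
print(gate.ready())  # True  - window elapsed, take another
```

In the Android listener, a `long lastShotMs` field compared against `SystemClock.elapsedRealtime()` plays the role of this class.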

Transform an Image using CIFaceFeature in iOS

拥有回忆 submitted on 2019-12-04 19:24:35
I use CIDetector and CIFaceFeature to detect the face with the front-facing camera, and I am also trying to place a hat on the head. The hat is positioned fine when the head is straight, but if I tilt my head the hat shrinks and drifts away from the head. Code used to add the hat:

    self.hatImgView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:selectedImageName]];
    self.hatImgView.contentMode = UIViewContentModeScaleAspectFit;
    [self.previewView addSubview:self.hatImgView];

Detecting the face and movement:

    - (void)detectedFaceController:(DetectFace *)controller features:(NSArray *)featuresArray forVideoBox:…
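A common fix for the tilt problem is to estimate the head's roll from the line between the two eye positions (CIFaceFeature exposes `leftEyePosition` and `rightEyePosition`) and rotate the hat view by that angle instead of keeping it axis-aligned. The math is sketched here in Python; the actual UIKit transform call is noted in a comment and the coordinate convention (y increasing downward) is an assumption.

```python
# Sketch: head roll from the angle of the eye line, used to rotate the
# hat. Assumes screen-style coordinates with y increasing downward.
import math

def roll_angle(left_eye, right_eye):
    """Angle in radians of the eye line relative to horizontal."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.atan2(dy, dx)

# Level eyes -> no rotation needed.
print(roll_angle((100, 200), (160, 200)))  # 0.0
# Right eye 60 px across and 60 px down -> 45 degree tilt.
a = roll_angle((100, 200), (160, 260))
print(round(math.degrees(a), 1))  # 45.0
# In the Obj-C code, roughly:
#   self.hatImgView.transform = CGAffineTransformMakeRotation(a);
```

Keeping the hat's size tied to the inter-eye distance (rather than the face rect, which narrows as the head tilts) also prevents the shrinking the question describes.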