Question
I am working on a project on recognising emotions (sad, happy, anger, etc.) from a face. I am using the facial landmark detector from the dlib library, which detects 68 interest points. For the same emotion, these interest points can vary with the orientation of the face and with the size of the eyes, lips, etc. on different faces.
I would like to normalise these interest points so that they are invariant to the orientation of the face and to the size of the eyes, lips, etc. What techniques can I use to do this? I would then like to train the data with an SVM.
Answer 1:
Dlib already has normalization code; it is used in the http://dlib.net/face_landmark_detection_ex.cpp.html sample by calling the http://dlib.net/imaging.html#extract_image_chips function.
You will need to reuse part of that code to get normalized landmarks. They will still carry enough information to detect emotions, but the face will be rotated and scaled into a canonical pose:
...
// 1. detect faces
std::vector<rectangle> dets = detector(image);
for (rectangle d : dets)
{
    // 2. get the 68 landmarks for this face
    full_object_detection shape = sp(image, d);
    // 3. chip details (normalization parameters) for a normalized chip 100 pixels in size
    chip_details chip = get_face_chip_details(shape, 100);
    // 4. map the landmarks into the normalized chip's coordinate system
    full_object_detection normalized = map_det_to_chip(shape, chip);
    // now you can use the normalized shape in your classifier
}
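For completeness, here is a minimal sketch of the setup that the "..." above elides, following dlib's face_landmark_detection_ex.cpp example. The image path and the model file name are placeholders; the 68-point model ships separately from dlib.

#include <dlib/image_processing/frontal_face_detector.h>
#include <dlib/image_processing.h>
#include <dlib/image_io.h>
using namespace dlib;

int main()
{
    // load the HOG-based face detector and the 68-point shape predictor
    frontal_face_detector detector = get_frontal_face_detector();
    shape_predictor sp;
    deserialize("shape_predictor_68_face_landmarks.dat") >> sp;  // model path is an assumption

    // load the input image ("face.jpg" is a placeholder)
    array2d<rgb_pixel> image;
    load_image(image, "face.jpg");

    // ... detection / normalization loop from above goes here ...
}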
After you have the normalized shape, how to train the classifier is up to you. It may be enough to use the landmarks as is, or you may need to pick the most important points, calculate the distances between them, and train on that distance data.
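As a concrete illustration of the distance-based variant, here is a hedged sketch that flattens the normalized landmarks into all pairwise distances and trains a multiclass SVM with dlib. The function names landmark_distances and train_emotion_svm, the integer emotion labels, and the C value are assumptions, not part of the original answer; C in particular should be tuned by cross-validation.

#include <dlib/svm.h>
#include <dlib/image_processing.h>
#include <vector>
using namespace dlib;

typedef matrix<double, 0, 1> sample_type;

// flatten the 68 normalized landmarks into all pairwise distances (68*67/2 = 2278 features)
sample_type landmark_distances(const full_object_detection& normalized)
{
    const long n = normalized.num_parts();
    sample_type feats(n * (n - 1) / 2);
    long k = 0;
    for (long i = 0; i < n; ++i)
        for (long j = i + 1; j < n; ++j)
            feats(k++) = length(normalized.part(i) - normalized.part(j));
    return feats;
}

// train a linear multiclass SVM on (feature vector, emotion label) pairs
multiclass_linear_decision_function<linear_kernel<sample_type>, int>
train_emotion_svm(const std::vector<sample_type>& samples,
                  const std::vector<int>& labels)  // e.g. 0 = sad, 1 = happy, 2 = anger
{
    svm_multiclass_linear_trainer<linear_kernel<sample_type>, int> trainer;
    trainer.set_c(10);  // placeholder value; tune with cross-validation
    return trainer.train(samples, labels);
}

Using pairwise distances rather than raw coordinates discards the remaining translation information and gives a fixed-length feature vector, at the cost of a larger dimensionality than the raw 68 points.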
Source: https://stackoverflow.com/questions/39537536/normalization-of-facial-landmark-points-in-image-processing