How can I detect a yawn using OpenCV?

Submitted by 人盡茶涼 on 2019-12-03 06:13:53

Question


I am developing an application for iOS that needs to detect when the user yawns.

What I did was include OpenCV, find faces using a Haar cascade, and then find the mouth inside each face (also using a Haar cascade).

The trouble is that I believed it would be easy to detect a yawn with something like (face.y - mouth.y) < something = yawn.

But the problem is that the rects for the face and mouth are unstable; every time the loop runs, the X and Y values of the face and mouth rects are (obviously) not the same.

Is there any "open mouth" Haar cascade that I can use, or how can I tell when the user opens their mouth?
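
For reference, here is a minimal sketch (in Python/OpenCV, not the asker's iOS code) of the pipeline described above: detect the face, search for the mouth in the lower half of the face rect, and use mouth height relative to face height as the openness measure, smoothed over a few frames to damp the rect jitter. The cascade file names (in particular haarcascade_mcs_mouth.xml, which ships separately from the default OpenCV data in some versions) and the 0.35 threshold are assumptions for illustration, not tuned values.

```python
import cv2

face_cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
mouth_cascade = cv2.CascadeClassifier("haarcascade_mcs_mouth.xml")

def mouth_openness(gray_frame):
    """Return mouth-box height / face-box height for the first detected face, or None."""
    faces = face_cascade.detectMultiScale(gray_frame, 1.3, 5)
    for (fx, fy, fw, fh) in faces:
        # Search for the mouth only in the lower half of the face rectangle.
        lower_face = gray_frame[fy + fh // 2: fy + fh, fx: fx + fw]
        mouths = mouth_cascade.detectMultiScale(lower_face, 1.5, 11)
        for (mx, my, mw, mh) in mouths:
            return mh / float(fh)   # a ratio is scale-invariant, unlike raw y values
    return None

cap = cv2.VideoCapture(0)
history = []                        # moving average smooths the jittery rects
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    ratio = mouth_openness(gray)
    if ratio is not None:
        history = (history + [ratio])[-10:]
        if sum(history) / len(history) > 0.35:   # illustrative threshold, not tuned
            print("possible yawn")
```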


Answer 1:


In general, a Support Vector Machine (SVM) is used for facial expression recognition such as anger, smile, surprise, etc., an area where development is still active. Googling will give you a lot of papers on this topic (one of my classmates even did this as his final-year project). For that, you first need to train the SVM, and to do that you need sample images of yawning and normal faces.

Yawning is quite similar to surprise, since the mouth opens in both cases. I recommend you check out page 3 of the paper Real Time Facial Expression Recognition in Video using Support Vector Machines (if you can't access the link, search for the paper by name).

The paper (and my classmate) used displacement vectors of facial features. For this, you find some feature points on the face; in the paper, they used the eye pupils, the extreme points of the eyelids, the nose tip, the extreme points of the mouth region (lips), etc. They then continuously track the locations of these features and compute the Euclidean distances between them, which are used to train the SVM.
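
A rough sketch of that idea, assuming feature points have already been located per frame by some detector: the pairwise distances become the vector fed to OpenCV's SVM. The toy lip points and labels below are placeholders, not real training data.

```python
import numpy as np
import cv2

def distance_features(points):
    """All pairwise Euclidean distances between feature points, as one row vector."""
    pts = np.asarray(points, dtype=np.float32)
    feats = [float(np.linalg.norm(pts[i] - pts[j]))
             for i in range(len(pts)) for j in range(i + 1, len(pts))]
    return np.array(feats, dtype=np.float32)

# Toy training set: two "normal" frames and two "yawning" frames, each described
# by four lip points (left corner, right corner, top lip, bottom lip).
frames = [
    [(10, 50), (30, 50), (20, 47), (20, 53)],   # normal: lips nearly closed
    [(11, 51), (29, 50), (20, 46), (20, 54)],
    [(10, 50), (30, 50), (20, 40), (20, 70)],   # yawning: lips far apart
    [(11, 49), (31, 51), (20, 41), (20, 72)],
]
labels = np.array([0, 0, 1, 1], dtype=np.int32)
samples = np.vstack([distance_features(f) for f in frames])

svm = cv2.ml.SVM_create()
svm.setType(cv2.ml.SVM_C_SVC)
svm.setKernel(cv2.ml.SVM_LINEAR)   # kernel choice is illustrative
svm.setC(1.0)
svm.train(samples, cv2.ml.ROW_SAMPLE, labels)

# Classify the feature points extracted from a new frame.
test = distance_features([(10, 50), (30, 50), (20, 42), (20, 68)]).reshape(1, -1)
_, pred = svm.predict(test)
print("yawn" if int(pred[0][0]) == 1 else "normal")
```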

Check out these two papers:

Feature Points Extraction from Faces

Fully Automatic Facial Feature Point Detection Using Gabor Feature Based Boosted Classifiers

(The original answer included an image here illustrating feature points marked on a face.)

In your case, I think you are implementing this on an iPhone in real time, so maybe you can skip the feature points at the eyes (although that is not a great idea, since the eyes narrow when you yawn). Compared to them, the feature points at the lips show more variation and are more predominant, so working on the lips alone may save time. (Well, it all depends on you.)
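
A small sketch of what lip feature points alone can give you: with just the two mouth corners and the top/bottom lip points, a mouth-aspect-ratio tells you how far the mouth is open, independent of the face's distance from the camera. The point coordinates below are made up for illustration, and this ratio is not something the papers above prescribe.

```python
import math

def mouth_aspect_ratio(left_corner, right_corner, top_lip, bottom_lip):
    """Vertical lip opening normalised by mouth width."""
    width = math.dist(left_corner, right_corner)
    opening = math.dist(top_lip, bottom_lip)
    return opening / width if width else 0.0

# Closed mouth vs. wide-open (yawn-like) mouth:
print(mouth_aspect_ratio((10, 50), (30, 50), (20, 47), (20, 53)))  # ~0.3
print(mouth_aspect_ratio((10, 50), (30, 50), (20, 40), (20, 70)))  # ~1.5
```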

Lip segmentation: this has already been discussed on Stack Overflow; check out this question: OpenCV Lip Segmentation
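
One simple, colour-based take on lip segmentation (a hedged sketch, not necessarily the method in the linked question): lips are usually redder than the surrounding skin, so thresholding a red-minus-green measure inside the lower face ROI gives a rough lip mask, whose extent can then serve as an openness cue.

```python
import cv2
import numpy as np

def lip_mask(lower_face_bgr):
    """Rough binary lip mask from the lower half of a detected face (BGR patch)."""
    b, g, r = cv2.split(lower_face_bgr.astype(np.float32))
    redness = r - g                      # lips: red clearly above green
    redness = cv2.normalize(redness, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(redness, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Remove small speckles left by the threshold.
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))

# Example on a dummy patch (in practice, pass the lower half of the detected face).
dummy = np.full((40, 60, 3), (140, 160, 190), np.uint8)   # skin-like background
dummy[15:25, 20:40] = (110, 80, 180)                      # a redder, lip-like blob
print(cv2.countNonZero(lip_mask(dummy)))
```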

Finally, I am sure you can find a lot more detail by googling, because this is an active area of development and many papers are out there.

Another option:

Another option in this area, which I have heard about several times, is the Active Appearance Model. But I don't know anything about it. Google it yourself.




Answer 2:


OpenCV also has face recognition/detection capabilities (see the examples that come with the OpenCV SDK). I think these would be a better place to look, since a Haar cascade doesn't really analyze facial expressions the way you need it to. Try running the examples and see for yourself; you'll get real-time data about the detected eyes, mouth, and so on.

Good luck



Source: https://stackoverflow.com/questions/10966772/how-can-i-detect-yawn-using-open-cv
