How can face images collected during authentication be adaptively added and used to improve the performance of face authentication?


To make your classifier robust you need to use condition-independent features. For example, you cannot rely on face color, since it depends on the lighting conditions and on the state of the person themselves. However, you can use the distance between the eyes, since (once normalized for scale) it is largely invariant to such changes.
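For instance, here is a minimal sketch of such a feature, assuming you already have eye-landmark coordinates from some upstream face detector (the landmark format used here is a hypothetical example, not any particular library's output):

```python
import numpy as np

def inter_eye_distance(landmarks):
    """Distance between the eye centers, normalized by face width so the
    feature is scale-invariant (the raw pixel distance would still depend
    on how far the person stands from the camera)."""
    left = np.asarray(landmarks["left_eye"], dtype=float)
    right = np.asarray(landmarks["right_eye"], dtype=float)
    face_width = float(landmarks["face_width"])  # e.g. bounding-box width
    return float(np.linalg.norm(right - left) / face_width)

# Hypothetical output of an upstream landmark detector:
print(inter_eye_distance({"left_eye": (120, 150),
                          "right_eye": (200, 152),
                          "face_width": 260}))
```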

I would suggest building a model of such independent features and retraining the classifier each time a person starts an authentication session. The best model I can think of is the Active Appearance Model (implementations of it are freely available).
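A rough sketch of that per-session retraining loop, using a k-NN classifier from scikit-learn as a stand-in for whatever classifier you actually use; the feature vectors are assumed to come from an AAM or similar upstream model:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

class AdaptiveAuthenticator:
    """Accumulates condition-independent feature vectors and refits the
    classifier after every successful session, so the model tracks how
    each person looked most recently."""

    def __init__(self):
        self.features, self.labels = [], []
        self.clf = KNeighborsClassifier(n_neighbors=1)

    def enroll(self, feature_vec, person_id):
        self.features.append(np.asarray(feature_vec, dtype=float))
        self.labels.append(person_id)
        self.clf.fit(np.vstack(self.features), self.labels)

    def authenticate(self, feature_vec, claimed_id):
        predicted = self.clf.predict(
            np.asarray(feature_vec, dtype=float).reshape(1, -1))[0]
        accepted = bool(predicted == claimed_id)
        if accepted:
            # Only feed back samples that passed verification; otherwise
            # the model would slowly drift toward impostors.
            self.enroll(feature_vec, claimed_id)
        return accepted
```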

I would recommend that you take a close look at SOMs (self-organizing maps). I think they address all the problems and constraints you have mentioned.

You can employ it for the single-image-per-person problem. Also, using the multiple-SOM-face strategy, you can adapt it to cases where additional images become available for training. What's pretty neat about the whole concept is that when a new face is encountered, only the new face, rather than the whole original database, needs to be re-learned.
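Below is a minimal pure-NumPy sketch of the SOM-face idea, loosely following the paper linked further down: each face image is split into local sub-blocks, a SOM is trained on those blocks, and the face is then represented by the best-matching-unit (BMU) index of each block. The grid size, learning schedule, and block size here are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def to_patches(img, block=4):
    """Split a 2-D face image into non-overlapping block x block
    sub-blocks and flatten each one into a vector."""
    h, w = img.shape
    return np.array([img[i:i + block, j:j + block].ravel()
                     for i in range(0, h - h % block, block)
                     for j in range(0, w - w % block, block)], dtype=float)

def train_som(patches, grid=(8, 8), iters=2000, lr0=0.5, sigma0=3.0, seed=0):
    """Train a small SOM on the patch vectors of one (or a few) images."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h * w, patches.shape[1]))
    coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
    for t in range(iters):
        x = patches[rng.integers(len(patches))]
        lr = lr0 * np.exp(-t / iters)          # decaying learning rate
        sigma = sigma0 * np.exp(-t / iters)    # shrinking neighborhood
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
        dist2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        weights += lr * np.exp(-dist2 / (2 * sigma ** 2))[:, None] * (x - weights)
    return weights

def som_face(patches, weights):
    """Represent a face as the sequence of BMU indices of its patches;
    two faces can then be compared by how many BMUs they share."""
    d = ((patches[:, None, :] - weights[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

# Enrolling a new person only involves their own image(s); the SOM-face
# representations already stored for everyone else never need recomputing.
```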

A few links which you might find helpful along the way:

http://en.wikipedia.org/wiki/Self-organizing_map (wiki)

http://cs.nju.edu.cn/zhouzh/zhouzh.files/publication/tnn05.pdf (an interesting research paper which demonstrates the above-mentioned technique)

Good Luck
