Training Dlib object detector with >450K instances

Submitted by 折月煮酒 on 2019-12-10 21:27:42

Question


Is dlib capable of handling large-scale datasets for training an object detector? I have more than 450K face images to train a face detector. Is it possible to use dlib, or do I need to turn to another alternative?


Answer 1:


How much data you can use is a function of how much RAM your computer has. So you may be able to train on that many images, depending on how large each image is and how much RAM you have available.

But more importantly, you are probably asking about the HOG+SVM detector in dlib. And for training a face detector, 450K faces is far beyond the point of diminishing returns for HOG+SVM. For example, the frontal face detector that comes with dlib, which is very accurate, is trained on only a small 62MB dataset (this one http://dlib.net/files/data/dlib_face_detector_training_data.tar.gz). Training this kind of detector with more than a few thousand images is not going to get you any additional accuracy.
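
As a minimal sketch (not part of the original answer), this is roughly what training a single HOG+SVM detector looks like with dlib's Python API. The XML annotation files and output filename below are placeholders; the annotations are assumed to be in dlib's imglab XML format.

    # Hypothetical filenames; replace with your own imglab-format annotation files.
    import dlib

    options = dlib.simple_object_detector_training_options()
    options.add_left_right_image_flips = True   # faces are roughly left/right symmetric
    options.C = 5                               # SVM regularization; tune on a validation set
    options.num_threads = 4
    options.be_verbose = True

    # Trains one HOG+SVM detector and writes it to "face_detector.svm".
    dlib.train_simple_object_detector("faces_train.xml", "face_detector.svm", options)

    # Report precision/recall/average precision on a held-out set.
    print(dlib.test_simple_object_detector("faces_test.xml", "face_detector.svm"))

Because of the diminishing returns mentioned above, a few thousand well-annotated faces in faces_train.xml is typically enough for this kind of detector.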

Now if you have a whole lot of pose variability in your data then HOG+SVM isn't going to be able to capture that. The best thing to do in that case is to train multiple detectors, one for each pose. You can automatically cluster your dataset into different poses using the --cluster option of dlib's imglab tool.
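
As a rough sketch of the multiple-detector approach, assuming each pose cluster from imglab was used to train its own detector (the .svm filenames below are hypothetical), you can run all of them together with dlib's Python API:

    import dlib

    # One detector per pose cluster; filenames are placeholders.
    detector_paths = ["pose_frontal.svm", "pose_left.svm", "pose_right.svm"]
    detectors = [dlib.fhog_object_detector(p) for p in detector_paths]

    img = dlib.load_rgb_image("test_face.jpg")

    # Run every detector in one pass; returns boxes, scores, and which detector fired.
    boxes, confidences, detector_idxs = dlib.fhog_object_detector.run_multiple(
        detectors, img, upsample_num_times=1, adjust_threshold=0.0)

    for box, score, idx in zip(boxes, confidences, detector_idxs):
        print("detector %d found %s with score %.2f" % (idx, box, score))

This mirrors how dlib's own frontal face detector works internally: it is really several HOG filters, one per head pose, evaluated together.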



Source: https://stackoverflow.com/questions/38349615/traning-dlib-object-detector-with-450k-instances
