Image mean subtraction vs BatchNormalization - Caffe

Submitted by China☆狼群 on 2019-12-11 04:18:32

Question


I have a question regarding image preprocessing in Caffe. When I use a BatchNormalization layer in my Caffe model, do I still need the preprocessing step of image mean subtraction on all my training images before the training phase starts? Or is this handled by the BatchNormalization layer?

Thank you very much =)


Answer 1:


Image mean subtraction does something different from BatchNormalization and serves a different purpose.

BatchNormalization normalizes a batch rather than each individual image. It is mainly used to keep the data well distributed throughout the network and to combat large activations, and therefore overfitting. After the layer, not every image has zero mean; the batch as a whole has zero mean. The two would only coincide if the batch size were 1.
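A minimal NumPy sketch (not actual Caffe code) can make this distinction concrete. It applies an inference-style batch normalization, without learned scale and shift, to a hypothetical batch of random images:

```python
import numpy as np

# Hypothetical batch of 4 single-channel 2x2 "images" (N, C, H, W)
rng = np.random.default_rng(0)
batch = rng.normal(loc=5.0, scale=2.0, size=(4, 1, 2, 2))

# Batch normalization: per-channel statistics computed across the
# batch and spatial dimensions, as BatchNorm does for conv layers
mean = batch.mean(axis=(0, 2, 3), keepdims=True)
var = batch.var(axis=(0, 2, 3), keepdims=True)
normalized = (batch - mean) / np.sqrt(var + 1e-5)

print(normalized.mean())     # ~0: the batch as a whole has zero mean
print(normalized[0].mean())  # generally nonzero: a single image does not
```

Only the combined batch statistics are normalized; any individual image in the batch will in general still have a nonzero mean.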

Image mean subtraction, in contrast, is mostly used to combat illumination changes in the input space. See http://ufldl.stanford.edu/wiki/index.php/Data_Preprocessing
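Again as a rough NumPy sketch (Caffe itself computes the mean image with tools like `compute_image_mean` and applies it via the data layer's transform parameters), mean subtraction computes one mean image over the whole training set and subtracts it from every image:

```python
import numpy as np

# Hypothetical training set of 10 RGB 4x4 images (N, C, H, W)
rng = np.random.default_rng(1)
train_images = rng.uniform(0, 255, size=(10, 3, 4, 4))

# Mean image: average over the training set, computed once before
# training and subtracted from every image fed to the network
mean_image = train_images.mean(axis=0)
preprocessed = train_images - mean_image

# Every pixel position now has zero mean across the training set
print(np.abs(preprocessed.mean(axis=0)).max())  # ~0
```

Unlike batch normalization, this is a fixed per-pixel shift applied identically at training and test time; it does not depend on which images happen to share a batch.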

Depending on your specific case you may get good results by applying batch normalization right after the input instead of using mean subtraction, but you will need to test this.



Source: https://stackoverflow.com/questions/41222815/image-mean-subtraction-vs-batchnormalization-caffe
