Different learning rates affect the BatchNorm setting. Why?


Question


I am using a BatchNorm layer. I know the meaning of the use_global_stats setting, which is usually set to false for training and true for testing/deployment. This is my setting in the testing phase.

layer {
  name: "bnorm1"
  type: "BatchNorm"
  bottom: "conv1"
  top: "bnorm1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "scale1"
  type: "Scale"
  bottom: "bnorm1"
  top: "bnorm1"
  scale_param {
    bias_term: true
    filler {
      value: 1
    }
    bias_filler {
      value: 0.0
    }
  }
}
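For comparison, the training-phase counterpart of the same layer would set use_global_stats to false (or simply omit it, since Caffe defaults to per-batch statistics in the TRAIN phase). A minimal sketch, reusing the same layer names as above:

layer {
  name: "bnorm1"
  type: "BatchNorm"
  bottom: "conv1"
  top: "bnorm1"
  batch_norm_param {
    use_global_stats: false   # normalize with the statistics of the current mini-batch
  }
}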

In solver.prototxt, I used the Adam method. I found an interesting problem that happens in my case. If I choose base_lr: 1e-3, then I get good performance when I set use_global_stats: false in the testing phase. However, if I choose base_lr: 1e-4, then I get good performance when I set use_global_stats: true in the testing phase. Does this mean that base_lr affects the BatchNorm setting (even though I used the Adam method)? Could you suggest any reason for that? Thanks all.
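The solver.prototxt itself is not shown in the question; a minimal sketch of the kind of Adam solver being described might look like the following, where only type and base_lr correspond to what the question states and the file names and remaining hyper-parameters are placeholders:

net: "train_val.prototxt"       # placeholder network definition
type: "Adam"
base_lr: 1e-3                   # the question compares 1e-3 vs 1e-4
momentum: 0.9                   # Adam beta1 (placeholder, Caffe default)
momentum2: 0.999                # Adam beta2 (placeholder, Caffe default)
delta: 1e-8
lr_policy: "fixed"
max_iter: 100000
snapshot_prefix: "snapshots/adam"
solver_mode: GPU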


Answer 1:


AFAIK, the learning rate does not directly affect the learned parameters of the "BatchNorm" layer. Indeed, Caffe forces the lr_mult of all internal parameters of this layer to zero, regardless of base_lr or the type of the solver.
However, you might encounter a case where the adjacent layers converge to different points depending on the base_lr you are using, and this indirectly causes "BatchNorm" to behave differently.
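To make that concrete: the three internal blobs of "BatchNorm" (the running mean, the running variance, and the moving-average scale factor) are updated by running averages rather than by the solver. The sketch below spells out the zero learning-rate multipliers that Caffe enforces internally for these blobs; it is an illustration of the constraint, not something you need to add yourself:

layer {
  name: "bnorm1"
  type: "BatchNorm"
  bottom: "conv1"
  top: "bnorm1"
  # the three stored blobs are not touched by base_lr or the solver type,
  # because their learning-rate multipliers are forced to zero
  param { lr_mult: 0 }
  param { lr_mult: 0 }
  param { lr_mult: 0 }
  batch_norm_param {
    use_global_stats: true
  }
}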



Source: https://stackoverflow.com/questions/44242122/different-learning-rate-affect-to-batchnorm-setting-why
