Caffe: What can I do if only a small batch fits into memory?


Question


I am trying to train a very large model, so only a very small batch size fits into GPU memory. Working with small batch sizes results in very noisy gradient estimates.
What can I do to avoid this problem?


Answer 1:


You can change the iter_size parameter in the solver. Caffe accumulates gradients over iter_size x batch_size instances in each stochastic gradient descent step, so increasing iter_size gives you a more stable gradient estimate even when limited memory prevents you from using a large batch_size.
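For example, here is a minimal solver.prototxt sketch (file names and hyperparameter values are illustrative). With batch_size: 8 in the net definition, Caffe runs four forward/backward passes and accumulates the gradients before each weight update, giving an effective batch size of 32 while only 8 samples occupy GPU memory at a time:

    # solver.prototxt -- effective batch = iter_size * batch_size = 4 * 8 = 32
    net: "train_val.prototxt"   # net definition with batch_size: 8
    iter_size: 4                # accumulate gradients over 4 forward/backward passes
    base_lr: 0.01
    momentum: 0.9
    solver_mode: GPU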




Answer 2:


As stated in this post, the batch size is not a problem in theory (stochastic gradient descent has been proven to converge even with a batch size of 1). Just make sure your batches are implemented correctly: samples should be picked at random from your data.
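One way to get that random sampling in Caffe (a sketch; the source path is a placeholder) is to enable shuffle in an ImageData layer, which re-shuffles the file list each epoch. If you use an LMDB-backed Data layer instead, shuffle the examples when building the database, e.g. with convert_imageset --shuffle:

    layer {
      name: "data"
      type: "ImageData"
      top: "data"
      top: "label"
      image_data_param {
        source: "train_list.txt"  # placeholder: lines of "image_path label"
        batch_size: 1             # even batch_size 1 is valid for SGD
        shuffle: true             # randomize sample order every epoch
      }
    }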



Source: https://stackoverflow.com/questions/36526959/caffe-what-can-i-do-if-only-a-small-batch-fits-into-memory
