Question
I am trying to train a very large model, so I can only fit a very small batch size into GPU memory. Working with small batch sizes results in very noisy gradient estimates.
What can I do to avoid this problem?
Answer 1:
You can change the iter_size in the solver parameters. Caffe accumulates gradients over iter_size x batch_size instances in each stochastic gradient descent step, so increasing iter_size gives you a more stable gradient estimate when limited memory keeps you from using a large batch_size.
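As a rough illustration, a solver.prototxt along these lines accumulates gradients over four small batches before each weight update (the net path and hyperparameter values are placeholder assumptions, not from the original post):

    # solver.prototxt (sketch; values are illustrative)
    net: "train_val.prototxt"   # assumed path to the network definition
    base_lr: 0.01
    momentum: 0.9
    lr_policy: "fixed"
    max_iter: 100000
    # with batch_size: 8 in the data layer, the effective batch is 8 x 4 = 32
    iter_size: 4
    solver_mode: GPU

The memory cost stays that of a single small batch; only the update frequency changes.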
Answer 2:
As stated in this post, the batch size is not a problem in theory (stochastic gradient descent has been proven to work even with a batch size of 1). Just make sure your batching is implemented correctly: the samples should be picked at random from your data.
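If you load data with Caffe's ImageData layer, random ordering can be enabled through its shuffle option; a minimal sketch, with the source list and batch size as assumed placeholders:

    layer {
      name: "data"
      type: "ImageData"
      top: "data"
      top: "label"
      image_data_param {
        source: "train_list.txt"  # assumed text file listing image/label pairs
        batch_size: 8             # small batch that fits in GPU memory
        shuffle: true             # reshuffle the sample order each epoch
      }
    }

Combined with iter_size: 4 from the first answer, each update then averages gradients over 32 randomly ordered samples.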
Source: https://stackoverflow.com/questions/36526959/caffe-what-can-i-do-if-only-a-small-batch-fits-into-memory