Caffe: what will happen if two layers backprop gradients to the same bottom blob?

生来不讨喜 2021-01-22 17:18

I'm wondering what happens if I have a layer generating a bottom blob that is further consumed by two subsequent layers, both of which will generate some gradients to fill bottom.diff.

1 Answer
  • 2021-01-22 17:27

    Using more than a single loss layer is not out-of-the-ordinary; see GoogLeNet for example: it has three loss layers "pushing" gradients at different depths of the net.
    In caffe, each loss layer has an associated loss_weight: how much this particular component contributes to the overall loss function of the net. Thus, if your net has two loss layers, Loss1 and Loss2, the overall loss of your net is

    Loss = loss_weight1*Loss1 + loss_weight2*Loss2
    
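    For concreteness, here is a small Python sketch of that combination (the loss values and weights below are made-up numbers, not taken from any real net):

        # Hypothetical per-layer losses (illustrative values only).
        loss1, loss_weight1 = 0.8, 1.0   # e.g., main classifier loss
        loss2, loss_weight2 = 2.5, 0.3   # e.g., down-weighted auxiliary loss

        # The single scalar the solver actually minimizes:
        total_loss = loss_weight1 * loss1 + loss_weight2 * loss2   # 1.55

        # Backprop is seeded with d(total_loss)/d(loss_i) = loss_weight_i,
        # so each branch's gradients are scaled by its loss_weight.
        d_loss1 = loss_weight1   # 1.0
        d_loss2 = loss_weight2   # 0.3
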

    The backpropagation uses the chain rule to propagate the gradient of Loss (the overall loss) through all the layers of the net. The chain rule breaks the derivative of Loss into partial derivatives, i.e., the derivatives of each layer; the overall effect is obtained by propagating the gradients through these partial derivatives. Thus, when a layer uses top.diff and its backward() function to compute bottom.diff, it accounts not only for its own derivative but also for the effect of ALL higher layers expressed in top.diff. In particular, when two layers consume the same bottom blob, the chain rule dictates that the blob's gradient is the sum of both layers' contributions; Caffe handles this by automatically inserting a Split layer, whose backward pass accumulates the diffs from both consumers into the shared blob's diff.
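    To make that accumulation concrete, here is a minimal NumPy check (the two branch functions are arbitrary stand-ins for layers, not Caffe code): the bottom.diff of the shared blob is just the sum of the two consumers' gradient contributions, which a finite-difference estimate of the total loss confirms.

        import numpy as np

        x = np.array([0.5, -1.2, 2.0])       # the shared bottom blob

        # Two consumers of x, each producing a scalar loss
        # (placeholder functions, not actual Caffe layers).
        def loss1(v):
            return np.sum(v ** 2)            # gradient: 2*v

        def loss2(v):
            return np.sum(np.sin(v))         # gradient: cos(v)

        w1, w2 = 1.0, 0.3                    # the two loss_weights

        def total(v):
            return w1 * loss1(v) + w2 * loss2(v)

        # Each consumer writes its own gradient; the shared blob's diff
        # is the SUM of the two contributions.
        bottom_diff = w1 * 2 * x + w2 * np.cos(x)

        # Finite-difference check of the summed gradient.
        eps = 1e-6
        numeric = np.zeros_like(x)
        for i in range(x.size):
            xp, xm = x.copy(), x.copy()
            xp[i] += eps
            xm[i] -= eps
            numeric[i] = (total(xp) - total(xm)) / (2 * eps)

        print(np.allclose(bottom_diff, numeric))  # True: the diffs accumulate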

    TL;DR
    You can have multiple loss layers. Caffe (as well as any other decent deep learning framework) handles it seamlessly for you.
