> I'm wondering what if I have a layer generating a bottom blob that is further consumed by two subsequent layers, both of which will generate some gradients to fill `bottom.diff`
Using more than a single loss layer is not out of the ordinary; see GoogLeNet, for example: it has three loss layers "pushing" gradients at different depths of the net.
In Caffe, each loss layer has an associated `loss_weight`: how much this particular component contributes to the overall loss function of the net. Thus, if your net has two loss layers, `Loss1` and `Loss2`, the overall loss of your net is

Loss = loss_weight1*Loss1 + loss_weight2*Loss2
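To make the weighting concrete, here is a minimal plain-numpy sketch (not Caffe code; the losses, `w1`/`w2` and variable names are hypothetical) showing that the total loss is a weighted sum, and that the gradient w.r.t. a shared blob is simply the sum of the weighted per-loss gradients:

```python
import numpy as np

# Toy sketch: two losses computed from the same activation "x",
# combined with loss weights the way the formula above combines them.
x = np.array([0.5, -1.2, 2.0])

loss1 = np.sum(x ** 2)          # e.g. an L2-style penalty (hypothetical)
loss2 = np.sum(np.abs(x))       # e.g. an L1-style penalty (hypothetical)

w1, w2 = 0.3, 1.0               # the two layers' loss_weight values
total_loss = w1 * loss1 + w2 * loss2

# Gradient of the total loss w.r.t. x: each component's gradient is scaled
# by its loss_weight and the two contributions are simply added together.
grad_x = w1 * 2 * x + w2 * np.sign(x)
print(total_loss, grad_x)
```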
Backpropagation uses the chain rule to propagate the gradient of `Loss` (the overall loss) through all the layers in the net. The chain rule breaks the derivative of `Loss` down into partial derivatives, i.e., the derivatives of the individual layers; the overall effect is obtained by propagating the gradients through these partial derivatives. That is, by using `top.diff` and the layer's `backward()` function to compute `bottom.diff`, one takes into account not only the layer's own derivative, but also the effect of ALL the higher layers, as expressed in `top.diff`. When a single blob is consumed by two layers (as in your case), Caffe inserts a Split layer under the hood, and the gradients arriving from the two consumers are summed into the blob's `bottom.diff`.
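Here is a toy sketch of that mechanism in plain numpy (not the actual Caffe API; the class and variable names are hypothetical): each layer's `backward` applies the chain rule by multiplying the incoming `top.diff` by its local derivative, and when one blob feeds two layers their contributions are accumulated by addition.

```python
import numpy as np

class ReluLayer:
    """Toy layer: backward() multiplies top_diff by the local derivative."""
    def forward(self, bottom):
        self.bottom = bottom
        return np.maximum(bottom, 0.0)

    def backward(self, top_diff):
        # chain rule: dLoss/d(bottom) = dLoss/d(top) * d(top)/d(bottom)
        return top_diff * (self.bottom > 0.0)

class ScaleLayer:
    """Toy layer: multiplies its input by a constant factor s."""
    def __init__(self, s):
        self.s = s
    def forward(self, bottom):
        return self.s * bottom
    def backward(self, top_diff):
        return top_diff * self.s

x = np.array([0.5, -1.2, 2.0])          # the shared bottom blob
layer_a, layer_b = ReluLayer(), ScaleLayer(2.0)
layer_a.forward(x)
layer_b.forward(x)

# Gradients each consumer received from the layers above it (made-up values):
top_diff_a = np.array([1.0, 1.0, 1.0])
top_diff_b = np.array([0.5, 0.5, 0.5])

# The shared blob's diff is the SUM of what each consumer back-propagates:
bottom_diff = layer_a.backward(top_diff_a) + layer_b.backward(top_diff_b)
print(bottom_diff)
```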
TL;DR
You can have multiple loss layers. Caffe (as well as any other decent deep learning framework) handles it seamlessly for you.