Layers within a layer in Caffe
**Question:** I have a custom loss layer that applies softmax and sigmoid activations to parts of the `bottom[0]` blob. For example, `bottom[0]` has shape `[20, 7, 7, 50]` (`NHWC` format). I would like to apply `softmax` to `[20, 7, 7, 25]` (the first 25 channels) and `sigmoid` to `[20, 7, 7, 1]` (just one channel), while the remaining 24 channels are passed through unchanged. How do I effectively allocate memory for the input blobs of these internal softmax and sigmoid layers, and how do I free that memory?

**Answer 1:**
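The forward computation being described can be sketched in plain NumPy, independent of Caffe's internal layer machinery. This is a minimal sketch under the assumption that the softmax runs along the channel (last) axis of the 25-channel slice; the `softmax`, `sigmoid`, and `forward` helper names are hypothetical, not Caffe API:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(bottom0):
    # bottom0: NHWC blob of shape [N, H, W, 50].
    # Split the channel axis into 25 (softmax) + 1 (sigmoid) + 24 (pass-through).
    sm_part = softmax(bottom0[..., :25], axis=-1)   # [N, H, W, 25]
    sg_part = sigmoid(bottom0[..., 25:26])          # [N, H, W, 1]
    rest    = bottom0[..., 26:]                     # [N, H, W, 24], unchanged
    return np.concatenate([sm_part, sg_part, rest], axis=-1)

out = forward(np.random.randn(20, 7, 7, 50).astype(np.float32))
print(out.shape)  # (20, 7, 7, 50)
```

In Caffe's C++ layers the same split would be realized by wrapping internal `SoftmaxLayer` and `SigmoidLayer` instances around intermediate blobs (as `SoftmaxWithLossLayer` does internally), which is what the memory-allocation part of the question is about.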