Question
I have written a custom loss layer that applies softmax and sigmoid activations to parts of the `bottom[0]` blob.
For example, `bottom[0]` has shape `[20, 7, 7, 50]` (`NHWC` format). I would like to apply `softmax` to `[20, 7, 7, 25]` (the first 25 channels) and `sigmoid` to `[20, 7, 7, 1]` (a single channel), while the remaining 24 channels are passed through unchanged.
How do I effectively allocate memory for the input blobs of these internal softmax and sigmoid layers, and how do I free that memory afterwards?
Answer 1:
Instead of allocating data internally, you can simply use a "Slice" layer externally and split the input blob with Caffe's "off-the-shelf" layers:
layer {
  name: "slice"
  type: "Slice"
  bottom: "input_to_loss"
  top: "to_softmax"
  top: "to_sigmoid"
  top: "leftovers"
  slice_param {
    axis: -1  # slice the last axis
    slice_point: 25
    slice_point: 26
  }
}
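With the shapes from the question, this yields `to_softmax` of shape `[20, 7, 7, 25]`, `to_sigmoid` of `[20, 7, 7, 1]`, and `leftovers` of `[20, 7, 7, 24]`. The tops can then be fed to stock `Softmax` and `Sigmoid` layers and reassembled with `Concat`; a minimal sketch of the rest of the wiring (the layer and blob names here are illustrative, and the final loss layer depends on your task):

layer {
  name: "softmax_part"
  type: "Softmax"
  bottom: "to_softmax"
  top: "softmax_out"
  softmax_param { axis: -1 }  # softmax over the sliced channel axis
}
layer {
  name: "sigmoid_part"
  type: "Sigmoid"
  bottom: "to_sigmoid"
  top: "sigmoid_out"
}
layer {
  name: "concat"
  type: "Concat"
  bottom: "softmax_out"
  bottom: "sigmoid_out"
  bottom: "leftovers"
  top: "activated"
  concat_param { axis: -1 }  # reassemble along the same axis
}

Since `Slice`, `Softmax`, `Sigmoid`, and `Concat` all implement `Backward`, gradients flow through the reassembled blob automatically.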
Answer 2:
All the intermediate activations, along with the network's input and output blobs, are managed by the `Net` class and set up in the `Net<Dtype>::Init` function in `src/caffe/net.cpp`.
You don't need to allocate or deallocate the top and bottom blob memory from within the layer itself.
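The same holds for any scratch blobs a layer keeps for internal computations: Caffe's own `SoftmaxWithLossLayer`, for instance, hosts an internal `Softmax` layer whose output lives in a member `Blob` that is simply resized in `Reshape`. A minimal sketch of that pattern, assuming a hypothetical `MyLossLayer` (not a stock Caffe layer; `Forward`/`Backward` are omitted):

#include <vector>
#include "caffe/blob.hpp"
#include "caffe/layers/loss_layer.hpp"

namespace caffe {

// "MyLossLayer" is an illustrative name, not a stock Caffe layer.
template <typename Dtype>
class MyLossLayer : public LossLayer<Dtype> {
 public:
  explicit MyLossLayer(const LayerParameter& param)
      : LossLayer<Dtype>(param) {}
  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,
                       const vector<Blob<Dtype>*>& top) {
    // Declare shapes only: Blob/SyncedMemory allocates lazily on first
    // access and frees everything when the Net is destroyed.
    LossLayer<Dtype>::Reshape(bottom, top);  // top[0] becomes the scalar loss
    prob_.ReshapeLike(*bottom[0]);  // internal scratch, never malloc'd/freed by hand
  }
  // Forward_cpu / Backward_cpu omitted for brevity.

 protected:
  Blob<Dtype> prob_;  // buffer for intermediate activations
};

}  // namespace caffe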
Source: https://stackoverflow.com/questions/44955498/layers-within-a-layer-in-caffe