Question
This follows from my previous question, "How to apply convolution on the last three dimensions of a 5D tensor using the Conv2D in Keras?"

I would like to apply a 2D convolution to a layer of shape batch_size * N * n * n * channel_size, once for each of the N slices along the second axis, with different weights for each i. The expected output shape is batch_size * N * m * m * channel_size2. Following the answer to the previous question, I did this:
```python
from tensorflow.keras.layers import Concatenate, Conv2D, Lambda, Reshape

slices = []  # renamed from 'set', which shadows the Python builtin
for i in range(N):
    # Slice out the i-th n*n*channel_size block; binding i through a
    # default argument gives every Lambda its own copy of the index.
    sliced = Lambda(lambda x, i=i: x[:, i, :, :, :])(input_layer)
    conv = Conv2D(2, (4, 4), strides=(4, 4), activation='relu')(sliced)
    resh = Reshape((1, 4, 4, 2))(conv)  # re-insert the N axis for concatenation
    slices.append(resh)
conv_layer = Concatenate(axis=1)(slices)
```
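For reference, a minimal driver for the snippet above (n = 16 follows from the Reshape((1, 4, 4, 2)) together with the 4x4/stride-4 convolution; channel_size = 3 is just a placeholder):

```python
from tensorflow.keras import Input, Model

N = 320
input_layer = Input(shape=(N, 16, 16, 3))  # batch * N * n * n * channel_size
# ... run the loop above to build conv_layer ...
model = Model(input_layer, conv_layer)     # output shape: (None, 320, 4, 4, 2)
model.summary()                            # one Lambda/Conv2D/Reshape triple per i
```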
The code seems to be correct, but it has the following drawbacks:
- The summary report of the model becomes rather involved: it lists the layers for every i.
- Training becomes very slow (for N = 320), even though the number of weights is not extremely large. I'm not sure whether this is due to the code in the loop or to the Concatenate layer (a possible loop-free reformulation is sketched after this list).
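A possible reformulation that might avoid the per-i loop entirely is to fold N into the channel axis and use a grouped convolution, which still keeps separate weights per i. The sketch below assumes TF >= 2.3, where Conv2D accepts a groups argument; n = 16 and channel_size = 3 are placeholder sizes:

```python
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Conv2D, Permute, Reshape

N, n, C = 320, 16, 3                 # n and C are assumed example sizes
inp = Input(shape=(N, n, n, C))      # batch * N * n * n * channel_size
x = Permute((2, 3, 1, 4))(inp)       # (batch, n, n, N, C)
x = Reshape((n, n, N * C))(x)        # fold N into the channel axis
# groups=N gives each slice its own 4x4 filters (2 output channels per slice)
x = Conv2D(N * 2, (4, 4), strides=(4, 4), groups=N, activation='relu')(x)
x = Reshape((n // 4, n // 4, N, 2))(x)
out = Permute((3, 1, 2, 4))(x)       # back to (batch, N, m, m, 2)
model = Model(inp, out)
```

Because the channel blocks stay in N-major order through both Reshape calls, group i of the convolution sees exactly the channels of slice i, and the model summary collapses to a handful of layers.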
Any suggestions would be much appreciated.
Source: https://stackoverflow.com/questions/54093755/why-does-the-concatenate-layer-in-keras-make-the-training-very-slow