Why does the Concatenate Layer in Keras make the training very slow?

Submitted by 两盒软妹~` on 2020-01-25 10:20:12

Question


This follows on from my previous question, How to apply convolution on the last three dimensions of a 5D tensor using the Conv2D in Keras?

I would like to apply a 2D convolution to a layer of shape batch_size * N * n * n * channel_size, once for each index i along the N axis, with different weights for each i. The expected output shape is batch_size * N * m * m * channel_size2. Following the answer to the previous question, I did the following:

from tensorflow.keras.layers import Concatenate, Conv2D, Lambda, Reshape

slices = []
for i in range(N):
    # split off slice i along the N axis and apply a 2D convolution to it;
    # binding i=i as a default argument freezes its value in the lambda
    conv = Conv2D(2, (4, 4), strides=(4, 4), activation='relu')(
        Lambda(lambda x, i=i: x[:, i, :, :, :])(input_layer))
    resh = Reshape((1, 4, 4, 2))(conv)  # re-expand the split axis for concatenation
    slices.append(resh)

conv_layer = Concatenate(axis=1)(slices)
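
For completeness, the snippet above assumes an input_layer built roughly as follows. This is a minimal sketch, not part of the original question: the concrete values of n and channel_size are assumptions (n = 16 is implied by the 4x4 kernel at stride 4 producing the 4x4 spatial output that Reshape((1, 4, 4, 2)) expects; channel_size = 3 is arbitrary).

from tensorflow.keras import Input, Model

N, n, channel_size = 320, 16, 3  # assumed values, see note above
input_layer = Input(shape=(N, n, n, channel_size))

# ... the loop above builds conv_layer from input_layer ...
model = Model(input_layer, conv_layer)
model.summary()  # prints one Lambda/Conv2D/Reshape triple per i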

The code seems to work correctly, but it has the following drawbacks:

  1. The summary report of the model becomes rather involved, since it lists the split, convolution, and reshape layers separately for every i.
  2. Training becomes very slow (for N = 320), even though the number of weights is not extremely large. I'm not sure whether this is due to the per-i layers built in the loop or due to the Concatenate layer. (See the sketch after this list for one possible workaround.)
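
One direction that might address both drawbacks, offered as a hedged sketch rather than part of the original question: fold the N axis into the channel axis and apply a single grouped convolution, which keeps separate weights per slice while building only four layers instead of roughly 3N. This assumes TensorFlow 2.3 or later, where tf.keras.layers.Conv2D accepts a groups argument, and the same assumed shapes as above (n = 16, so the spatial output is 4x4).

from tensorflow.keras.layers import Conv2D, Permute, Reshape

# (batch, N, n, n, C) -> (batch, n, n, N, C) -> (batch, n, n, N*C)
x = Permute((2, 3, 1, 4))(input_layer)
x = Reshape((n, n, N * channel_size))(x)

# groups=N splits the N*C input channels into N groups of C and gives each
# group its own 4x4 filter bank, i.e. different weights per slice, in one layer
x = Conv2D(N * 2, (4, 4), strides=(4, 4), activation='relu', groups=N)(x)

# (batch, 4, 4, N*2) -> (batch, 4, 4, N, 2) -> (batch, N, 4, 4, 2)
x = Reshape((4, 4, N, 2))(x)
conv_layer = Permute((3, 1, 2, 4))(x)

Whether this actually trains faster would need profiling, but it removes both the per-i layers and the 320-way Concatenate from the graph.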

Any suggestions would be very much appreciated.

Source: https://stackoverflow.com/questions/54093755/why-does-the-concatenate-layer-in-keras-make-the-training-very-slow
