Keras TimeDistributed layer with multiple inputs

久未见 submitted on 2019-12-11 11:05:40

Question


I'm trying to make the following line of code work:

low_encoder_out = TimeDistributed( AutoregressiveDecoder(...) )([X_tf, embeddings])

Where AutoregressiveDecoder is a custom layer that takes two inputs. After a bit of googling, the problem seems to be that the TimeDistributed wrapper doesn't accept multiple inputs. There are solutions that propose merging the two inputs before feeding them to the layer, but since their shapes are

X_tf.shape: (?, 16, 16, 128, 5)
embeddings.shape: (?, 16, 1024)

I really don't know how to merge them. Is there a way to make the TimeDistributed layer work with more than one input? Or, alternatively, is there a nice way to merge the two inputs?


Answer 1:


As you mentioned, the TimeDistributed layer does not support multiple inputs. One (not-very-nice) workaround, considering the fact that the number of timesteps (i.e. the second axis) must be the same for all the inputs, is to reshape all of them to (None, n_timesteps, n_featsN), concatenate them, and then feed the result as the input of the TimeDistributed layer:

from keras.layers import Reshape, TimeDistributed, concatenate

X_tf_r = Reshape((n_timesteps, -1))(X_tf)
embeddings_r = Reshape((n_timesteps, -1))(embeddings)

concat = concatenate([X_tf_r, embeddings_r])
low_encoder_out = TimeDistributed(AutoregressiveDecoder(...))(concat)

Of course, you might need to modify the definition of your custom layer and separate the inputs back if necessary.
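To see how the split would work, here is a minimal NumPy sketch of the shape arithmetic, using the shapes from the question (the batch size of 2 and the slicing index are illustrative assumptions): each timestep of X_tf flattens to 16 * 128 * 5 = 10240 features and each timestep of embeddings contributes 1024, so after concatenation the custom layer receives 11264 features per timestep and can slice the two parts back apart along the last axis.

```python
import numpy as np

# Illustrative batch size; n_timesteps matches the question's second axis.
batch, n_timesteps = 2, 16
X_tf = np.zeros((batch, n_timesteps, 16, 128, 5))
embeddings = np.zeros((batch, n_timesteps, 1024))

# Reshape both to (batch, n_timesteps, n_feats) and concatenate on the last axis,
# mirroring what Reshape + concatenate do in the Keras graph above.
X_tf_r = X_tf.reshape(batch, n_timesteps, -1)              # (2, 16, 10240)
embeddings_r = embeddings.reshape(batch, n_timesteps, -1)  # (2, 16, 1024)
concat = np.concatenate([X_tf_r, embeddings_r], axis=-1)   # (2, 16, 11264)

# Inside the custom layer, the two inputs can be recovered by slicing:
x_part = concat[..., :10240]    # restore with .reshape(..., 16, 128, 5) if needed
emb_part = concat[..., 10240:]  # the original 1024-dim embedding per timestep
```

Since TimeDistributed applies the wrapped layer to each timestep independently, the layer's call would see a (batch, 11264) slice and perform this split on its last axis.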



Source: https://stackoverflow.com/questions/52966175/keras-timedistributed-layer-with-multiple-inputs
