How to use Bidirectional RNN and Conv1D in keras when shapes are not matching?

Submitted by 放肆的年华 on 2019-12-02 08:22:05

You don't need to restructure anything at all to get the output of a Conv1D layer into an LSTM layer.

So, the problem is simply the presence of the Flatten layer, which destroys the shape.

These are the shapes used by Conv1D and LSTM:

  • Conv1D: (batch, length, channels)
  • LSTM: (batch, timeSteps, features)

Length is the same as timeSteps, and channels is the same as features.
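As a minimal sketch of this (layer sizes and input dimensions are arbitrary placeholders, assuming the TensorFlow Keras functional API), the Conv1D output can be passed straight to an LSTM with no Flatten or reshape in between:

```python
from tensorflow.keras.layers import Input, Conv1D, LSTM, Dense
from tensorflow.keras.models import Model

# Hypothetical input: sequences of length 100 with 8 channels
inp = Input(shape=(100, 8))                          # (batch, length, channels)
x = Conv1D(32, kernel_size=3, padding='same')(inp)   # -> (batch, 100, 32)
x = LSTM(16)(x)                                      # length -> timeSteps, channels -> features
out = Dense(1, activation='sigmoid')(x)
model = Model(inp, out)
```

The `padding='same'` here just keeps the sequence length unchanged; without it the length shrinks by `kernel_size - 1`, which the LSTM also accepts without complaint.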

Using the Bidirectional wrapper won't change a thing either. It only doubles the number of output features (one set from the forward pass, one from the backward pass).
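A quick sketch of the feature doubling (sizes are arbitrary, assuming the default `merge_mode='concat'`): wrapping an LSTM with 16 units produces 32 output features.

```python
from tensorflow.keras.layers import Input, LSTM, Bidirectional
from tensorflow.keras.models import Model

inp = Input(shape=(100, 32))                              # (batch, timeSteps, features)
x = Bidirectional(LSTM(16, return_sequences=True))(inp)   # 16 forward + 16 backward units
model = Model(inp, x)                                     # output: (batch, 100, 32)
```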


Classifying.

If you're going to classify the entire sequence as a whole, your last LSTM must use return_sequences=False. (Alternatively, you may use a Flatten + Dense combination after it instead.)
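A sketch of the whole-sequence case (the 3-class output and all sizes are placeholders): `return_sequences=False` collapses the time axis, leaving one vector per sequence for the classifier.

```python
from tensorflow.keras.layers import Input, Conv1D, LSTM, Dense
from tensorflow.keras.models import Model

inp = Input(shape=(100, 8))
x = Conv1D(32, kernel_size=3, padding='same')(inp)
x = LSTM(16, return_sequences=False)(x)       # -> (batch, 16): time axis collapsed
out = Dense(3, activation='softmax')(x)       # one label per sequence
seq_model = Model(inp, out)
```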

If you're going to classify each step of the sequence, all your LSTMs should have return_sequences=True. You should not flatten the data after them.
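And the per-step case (again with placeholder sizes): with `return_sequences=True` the time axis is preserved, and a Dense layer applied to the 3D output classifies each step independently.

```python
from tensorflow.keras.layers import Input, Conv1D, LSTM, Dense
from tensorflow.keras.models import Model

inp = Input(shape=(100, 8))
x = Conv1D(32, kernel_size=3, padding='same')(inp)
x = LSTM(16, return_sequences=True)(x)        # -> (batch, 100, 16): no Flatten afterwards
out = Dense(3, activation='softmax')(x)       # Dense acts on the last axis: one label per step
step_model = Model(inp, out)
```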
