How to lay out training data with stateful LSTMs and batch_size > 1
Background

I would like to do mini-batch training of "stateful" LSTMs in Keras. My input training data is a large matrix "X" of dimensions m x n, where

m = number of subsequences
n = number of time steps per subsequence

Each row of X contains a subsequence that picks up where the subsequence on the preceding row leaves off. So given a long sequence of data,

Data = ( t01, t02, t03, ... )

where "tK" denotes the token at position K in the original data, the sequence is laid out in X like