How to correctly give inputs to Embedding, LSTM and Linear layers in PyTorch?
Question: I need some clarity on how to correctly prepare inputs for batch training using different components of the torch.nn module. Specifically, I'm looking to create an encoder-decoder network for a seq2seq model. Suppose I have a module with these three layers, in order:

1. nn.Embedding
2. nn.LSTM
3. nn.Linear

nn.Embedding

Input: batch_size * seq_length
Output: batch_size * seq_length * embedding_dimension

I don't have any problems here; I just want to be explicit about the expected shape of the input and output.
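To make the nn.Embedding shapes concrete, here is a minimal sketch; the batch, sequence, vocabulary, and embedding sizes below are placeholder values chosen for illustration:

```python
import torch
import torch.nn as nn

# Placeholder sizes for illustration only
batch_size, seq_length = 4, 10
vocab_size, embedding_dim = 100, 32

embedding = nn.Embedding(vocab_size, embedding_dim)

# Input: integer token indices, shape (batch_size, seq_length)
tokens = torch.randint(0, vocab_size, (batch_size, seq_length))

# Output: shape (batch_size, seq_length, embedding_dim)
embedded = embedding(tokens)
print(embedded.shape)  # torch.Size([4, 10, 32])
```

Note that nn.Embedding expects a LongTensor of indices (not one-hot vectors), and the embedding dimension is simply appended as a trailing axis.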