How to use a masking layer to mask input/output in LSTM autoencoders?
Question: I am trying to use an LSTM autoencoder to do sequence-to-sequence learning with variable-length sequences as inputs, using the following code:

```python
inputs = Input(shape=(None, input_dim))
masked_input = Masking(mask_value=0.0, input_shape=(None, input_dim))(inputs)
encoded = LSTM(latent_dim)(masked_input)
decoded = RepeatVector(timesteps)(encoded)
decoded = LSTM(input_dim, return_sequences=True)(decoded)
sequence_autoencoder = Model(inputs, decoded)
encoder = Model(inputs, encoded)
```

where inputs are
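For context, a minimal self-contained sketch of this architecture, assuming `tf.keras` and arbitrary illustrative values for `input_dim`, `latent_dim`, and `timesteps` (the question does not give them):

```python
import numpy as np
from tensorflow.keras.layers import Input, Masking, LSTM, RepeatVector
from tensorflow.keras.models import Model

input_dim = 3    # features per timestep (illustrative)
latent_dim = 8   # size of the encoded vector (illustrative)
timesteps = 5    # maximum sequence length after zero-padding (illustrative)

# Encoder: mask zero-padded timesteps, then compress to a single vector.
inputs = Input(shape=(None, input_dim))
masked_input = Masking(mask_value=0.0)(inputs)
encoded = LSTM(latent_dim)(masked_input)

# Decoder: repeat the encoding and unroll it back into a sequence.
decoded = RepeatVector(timesteps)(encoded)
decoded = LSTM(input_dim, return_sequences=True)(decoded)

sequence_autoencoder = Model(inputs, decoded)
encoder = Model(inputs, encoded)
sequence_autoencoder.compile(optimizer="adam", loss="mse")

# Two sequences of true lengths 3 and 5, zero-padded to `timesteps`.
x = np.zeros((2, timesteps, input_dim))
x[0, :3] = np.random.rand(3, input_dim)
x[1, :5] = np.random.rand(5, input_dim)

reconstruction = sequence_autoencoder.predict(x, verbose=0)
print(reconstruction.shape)  # (2, 5, 3)
```

Note one subtlety with this layout: the mask produced by `Masking` is consumed by the encoder `LSTM` (which returns only its final state), and `RepeatVector` does not emit a mask, so the decoder side and the reconstruction loss still cover the padded timesteps.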