recurrent-neural-network

TensorFlow: Remember LSTM state for next batch (stateful LSTM)

删除回忆录丶 submitted on 2019-11-26 09:07:26
Question: Given a trained LSTM model, I want to perform inference for single timesteps, i.e. seq_length = 1 in the example below. After each timestep the internal LSTM states (memory and hidden) need to be remembered for the next 'batch'. At the very beginning of inference, the internal LSTM states init_c, init_h are computed from the input. These are then stored in an LSTMStateTuple object which is passed to the LSTM. During training this state is updated every timestep. However, for inference I …
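The mechanism being asked about can be illustrated without the asker's actual TensorFlow graph: the key idea is that the cell's (c, h) state is handed back to the caller after every single-timestep call and fed in again as the next initial state. Below is a minimal NumPy sketch of that loop; the `LSTMCell` class, its sizes, and its random initialisation are illustrative assumptions, not the asker's code, and the `(c, h)` tuple plays the role of TensorFlow's `LSTMStateTuple`.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Illustrative LSTM cell; weights are randomly initialised for the sketch."""
    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        # One stacked weight matrix for the input, forget, cell and output gates.
        self.W = rng.standard_normal((input_size + hidden_size, 4 * hidden_size)) * 0.1
        self.b = np.zeros(4 * hidden_size)

    def step(self, x, state):
        """One timestep. `state` is a (c, h) tuple, analogous to LSTMStateTuple."""
        c, h = state
        z = np.concatenate([x, h]) @ self.W + self.b
        i, f, g, o = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
        return h, (c, h)

cell = LSTMCell(input_size=3, hidden_size=5)
state = (np.zeros(5), np.zeros(5))   # plays the role of init_c, init_h

# Inference with seq_length = 1: feed single timesteps, and pass the returned
# state back in as the next "initial" state so nothing is forgotten between
# batches.
outputs = []
for t in range(4):
    out, state = cell.step(np.ones(3), state)
    outputs.append(out)
```

Because the state round-trips through the caller, stopping after any timestep and later resuming with the saved `(c, h)` produces exactly the same outputs as running the sequence in one go, which is the property stateful inference needs.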

Many to one and many to many LSTM examples in Keras

和自甴很熟 submitted on 2019-11-26 04:58:57
Question: I am trying to understand LSTMs and how to build them with Keras. I found out that there are principally four modes to run an RNN (the four right-most ones in the picture). Image source: Andrej Karpathy. Now I wonder what a minimalistic code snippet for each of them would look like in Keras, so something like:

model = Sequential()
model.add(LSTM(128, input_shape=(timesteps, data_dim)))
model.add(Dense(1))

for each of the four tasks, maybe with a little explanation.

Answer 1: So: One-to-one: you could use a …
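The shape difference behind these modes can be shown without Keras itself. In Keras the switch between many-to-one and many-to-many is the LSTM layer's `return_sequences` flag: `LSTM(128)` returns only the last hidden state (one output per sequence), while `LSTM(128, return_sequences=True)` returns the hidden state at every timestep. The NumPy sketch below implements a plain RNN (simpler than an LSTM, but with the same output-shape semantics) to make that difference concrete; the sizes and random weights are illustrative assumptions.

```python
import numpy as np

def simple_rnn(x, W_in, W_h):
    """Run a plain RNN over x of shape (T, input_dim); return all T hidden states."""
    h = np.zeros(W_h.shape[0])
    states = []
    for t in range(x.shape[0]):
        h = np.tanh(x[t] @ W_in + h @ W_h)
        states.append(h)
    return np.stack(states)              # shape (T, hidden)

rng = np.random.default_rng(0)
timesteps, data_dim, hidden = 6, 3, 4
x = rng.standard_normal((timesteps, data_dim))
W_in = rng.standard_normal((data_dim, hidden)) * 0.1
W_h = rng.standard_normal((hidden, hidden)) * 0.1

all_states = simple_rnn(x, W_in, W_h)

# many-to-many: keep every timestep's output
#   (Keras: LSTM(128, return_sequences=True), often followed by
#    TimeDistributed(Dense(...)) to get one prediction per timestep)
many_to_many = all_states                # shape (timesteps, hidden)

# many-to-one: keep only the final state
#   (Keras: LSTM(128) with the default return_sequences=False, then Dense(1))
many_to_one = all_states[-1]             # shape (hidden,)
```

For completeness, the remaining two modes in Keras terms: one-to-one is just a Dense layer with no recurrence at all, and one-to-many is commonly built by repeating a single input across timesteps with `RepeatVector(timesteps)` before an LSTM with `return_sequences=True`.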