Effect of setting sequence_length on the returned state in dynamic_rnn


Question


Suppose I have an LSTM network to classify time series of length 10. The standard way to feed the time series to the LSTM is to form a [batch size X 10 X vector size] array and feed it to the LSTM:

self.rnn_t, self.new_state = tf.nn.dynamic_rnn(
        inputs=self.X, cell=self.lstm_cell, dtype=tf.float32, initial_state=self.state_in)

When using the sequence_length parameter, I can specify the length of each time series.
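(For reference, sequence_length is just an extra argument to the same call; a minimal sketch reusing self.X and self.lstm_cell from above, where seq_len is a hypothetical placeholder holding the valid length of each series:)

# Hypothetical placeholder: seq_len[i] = number of valid steps in example i
seq_len = tf.placeholder(tf.int32, shape=[None])

self.rnn_t, self.new_state = tf.nn.dynamic_rnn(
        inputs=self.X, cell=self.lstm_cell, sequence_length=seq_len,
        dtype=tf.float32, initial_state=self.state_in)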

My question: for the scenario defined above, if I call dynamic_rnn 10 times with a tensor of shape [batch size X 1 X vector size], taking the matching index in the time series and passing the returned state as the initial_state of the next call, would I end up with the same results, both outputs and state, or not?


Answer 1:


You should get the same output in both cases. I'll illustrate this with a toy example below:

> 1. Setting up the inputs and the parameters of the network:

import tensorflow as tf
import numpy.testing as npt

# Set RNN params
batch_size = 2
time_steps = 10
vector_size = 5

# Create a random input (seeded so it is reproducible)
dataset = tf.random_normal((batch_size, time_steps, vector_size), dtype=tf.float32, seed=42)

# Input tensor to the RNN
X = tf.Variable(dataset, dtype=tf.float32)

> 2. RNN over the full time series, with input: [batch_size, time_steps, vector_size]

# Weights must be initialized to the same fixed value in both graphs
# (random initialization would differ between the two variable scopes).
with tf.variable_scope('rnn_full', initializer=tf.initializers.ones()):
   basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=10)
   # output_f: [batch_size, time_steps, num_units], state_f: [batch_size, num_units]
   output_f, state_f = tf.nn.dynamic_rnn(basic_cell, X, dtype=tf.float32)

> 3. The same RNN cell called in a loop of time_steps iterations, where each call is fed one time step of shape [batch_size, 1, vector_size] and the returned state is passed as the next call's initial state

# Unstack the inputs across time_steps
unstack_X = tf.unstack(X, axis=1)

outputs = []
with tf.variable_scope('rnn_unstacked', initializer=tf.initializers.ones()):
   basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=10)

   # init_state has to start at zero, just like the single-call version
   init_state = basic_cell.zero_state(batch_size, dtype=tf.float32)

   # Run the same cell time_steps times; the cell builds its variables on the
   # first call and reuses them on every subsequent call.
   for i in range(len(unstack_X)):
      output, state = tf.nn.dynamic_rnn(basic_cell, tf.expand_dims(unstack_X[i], 1),
                                        dtype=tf.float32, initial_state=init_state)
      # Feed the returned state into the next step
      init_state = state
      outputs.append(output)
   # Stack the per-step outputs back to [batch_size, time_steps, num_units]
   output_r = tf.transpose(tf.squeeze(tf.stack(outputs)), [1, 0, 2])

> 4. Checking the outputs

with tf.Session() as sess:
   sess.run(tf.global_variables_initializer())
   out_f, st_f = sess.run([output_f, state_f])
   out_r, st_r = sess.run([output_r, state])

   npt.assert_almost_equal(out_f, out_r)
   npt.assert_almost_equal(st_f, st_r)

Both the states and the outputs match.
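As for the sequence_length part of the question: when per-example lengths are passed, dynamic_rnn copies the state through the padded steps and zeroes the corresponding outputs, so the returned state is the one from each example's last valid step. A minimal sketch reusing X from above (the lengths are illustrative):

# Illustrative lengths: example 0 has 4 valid steps, example 1 uses all 10
seq_len = tf.constant([4, 10])

with tf.variable_scope('rnn_seqlen', initializer=tf.initializers.ones()):
   cell = tf.contrib.rnn.BasicRNNCell(num_units=10)
   output_s, state_s = tf.nn.dynamic_rnn(cell, X, sequence_length=seq_len, dtype=tf.float32)

with tf.Session() as sess:
   sess.run(tf.global_variables_initializer())
   out_s, st_s = sess.run([output_s, state_s])
   # Outputs past step 4 for example 0 are all zero, and st_s[0] equals
   # out_s[0, 3] (a BasicRNNCell's state is its output at the last valid step).
   print(out_s[0, 4:], st_s[0])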



Source: https://stackoverflow.com/questions/50262174/effect-of-setting-sequence-length-on-the-returned-state-in-dynamic-rnn
