In the beginning of my code (outside the scope of a Session), I've set my random seeds:
np.random.seed(1)
tf.set_random_seed(1)
The answer was already provided in the comments, but no one has written it out explicitly yet, so here it is:
dynamic_rnn internally uses tf.while_loop, which can actually evaluate multiple iterations in parallel (see the documentation on parallel_iterations). In practice, if everything inside the loop body or loop condition depends on the previous values, nothing can run in parallel; but there can be computations which do not depend on the previous values, and those will be evaluated in parallel. In your case, inside the DropoutWrapper, you have at some point something like this:
random_ops.random_uniform(noise_shape, ...)
This operation is independent of the previous values of the loop, so it can be calculated in parallel for all time-steps. With such parallel execution, it becomes non-deterministic which time-step gets which dropout mask.
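To see why seeding alone is not enough, here is a small NumPy sketch (NumPy stands in for TF's stateful random ops; the function names and the simulated completion order are my own illustration, not TF internals). The seed fixes the *stream* of random numbers, but if independent draws complete in a different order across runs, each time-step can end up with a different mask:

```python
import numpy as np

def masks_in_order(seed, steps, size):
    """Draw one dropout mask per time-step, strictly in time order."""
    rng = np.random.RandomState(seed)
    return [rng.uniform(size=size) for _ in range(steps)]

def masks_parallel_order(seed, steps, size, completion_order):
    """Same seed, same pool of draws, but the k-th draw goes to whichever
    time-step happened to request it k-th (simulating parallel evaluation
    of independent random ops inside the while_loop)."""
    rng = np.random.RandomState(seed)
    masks = [None] * steps
    for t in completion_order:
        masks[t] = rng.uniform(size=size)
    return masks

seq = masks_in_order(seed=1, steps=3, size=4)
par = masks_parallel_order(seed=1, steps=3, size=4, completion_order=[2, 0, 1])

# The pool of random numbers is identical (the seed fixes it), but
# time-step 0 now holds what was time-step 1's mask in the sequential
# run, so the per-time-step results differ even with the seed set.
```

If you need per-time-step determinism, forcing sequential execution should remove this source of non-determinism: dynamic_rnn accepts a parallel_iterations argument that it forwards to tf.while_loop, so passing parallel_iterations=1 serializes the iterations, at some cost in speed.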