I was playing around with TensorFlow and ran into a problem with this code:
def process_tree_tf(matrix, weights, idxs, name=None):
    with tf.name_scope(name, "process_tree", [matrix, weights, idxs]) as scope:
        ...
This is actually a subtle issue with tf.Variable objects and tf.while_loop() in TensorFlow. TensorFlow gets confused because the tf.constant() with which you initialize the variable appears to be a value created inside the loop (even though it is clearly loop-invariant), while all variables are hoisted outside the loop, so the hoisted variable ends up depending on a tensor from inside the loop body. The easiest resolution is to move the creation of the variable outside the loop:
import tensorflow as tf

def process_tree_tf(matrix, weights, idxs, name=None):
    with tf.name_scope(name, "process_tree", [matrix, weights, idxs]) as scope:
        loop_index = tf.subtract(tf.shape(matrix)[0], 1)
        loop_vars = loop_index, matrix, idxs, weights

        # Define the bias variable outside the loop to avoid the problem.
        bias = tf.Variable(tf.constant(0.1, shape=[2], dtype=tf.float64))

        def loop_condition(loop_idx, *_):
            return tf.greater(loop_idx, 0)

        def loop_body(loop_idx, mat, idxs, weights):
            x = mat[loop_idx]
            w = weights
            # You can still refer to `bias` in here, and the loop body
            # will capture it appropriately.
            ...
            return loop_idx - 1, mat, idxs, weights

        return tf.while_loop(loop_condition, loop_body, loop_vars, name=scope)[1]
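For completeness, here is a minimal usage sketch (the shapes and values below are arbitrary assumptions, and it assumes TF 1.x graph mode); because bias is a tf.Variable, it must be initialized before the loop runs:

import numpy as np

matrix = tf.constant(np.random.rand(4, 2), dtype=tf.float64)
weights = tf.constant(np.random.rand(2, 2), dtype=tf.float64)
idxs = tf.constant([0, 1, 2, 3])

result = process_tree_tf(matrix, weights, idxs)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # creates the initial value of `bias`
    print(sess.run(result))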
(Another possible resolution would be to use a tf.constant_initializer() rather than a tf.constant() when creating the variable.)
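That alternative might look like the following sketch, using the TF 1.x variable-scope API (the variable name "bias" is just illustrative). tf.constant_initializer() defers constructing the initial value until the variable itself is created, so no tensor is captured from inside the loop body:

bias = tf.get_variable("bias", shape=[2], dtype=tf.float64,
                       initializer=tf.constant_initializer(0.1))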