TensorFlow inference graph performance optimization
Question

I am trying to understand some surprising results I see when implementing a TF graph. The graph I am working with is just a forest (a bunch of trees). It is a plain forward-inference graph; nothing related to training. I am sharing the snippets for two implementations.

Code snippet 1:

    import tensorflow as tf

    with tf.name_scope("main"):
        def get_tree_output(offset):
            # cond and body (defined elsewhere) perform the per-tree traversal
            loop_vars = (offset,)
            leaf_indice = tf.while_loop(cond, body, loop_vars, back_prop=False,
                                        parallel_iterations=1, name="while_loop")
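To make the traversal concrete, here is a minimal, self-contained sketch of the kind of loop each tree runs, using the same back_prop=False and parallel_iterations=1 flags as snippet 1. The flat node arrays (feature_idx, threshold, left_child, right_child), the leaf convention (negative feature index), and the helper traverse_one_tree are illustrative assumptions only, not the actual cond/body from my graph; the sketch targets TF 2.x eager execution.

    import tensorflow as tf

    # Hypothetical flat encoding of one tree (illustration only): node i stores a
    # feature index, a split threshold, and left/right child pointers;
    # feature_idx == -1 marks a leaf.
    feature_idx = tf.constant([0, 1, -1, -1, -1], dtype=tf.int32)
    threshold   = tf.constant([0.5, 0.3, 0.0, 0.0, 0.0], dtype=tf.float32)
    left_child  = tf.constant([1, 3, 0, 0, 0], dtype=tf.int32)
    right_child = tf.constant([2, 4, 0, 0, 0], dtype=tf.int32)

    def traverse_one_tree(sample):
        """Walk one sample down the tree with tf.while_loop; return the leaf index."""
        def cond(node):
            # keep looping while the current node is an internal (split) node
            return feature_idx[node] >= 0

        def body(node):
            go_left = sample[feature_idx[node]] < threshold[node]
            # the return value must have the same structure as loop_vars
            return (tf.where(go_left, left_child[node], right_child[node]),)

        result = tf.while_loop(cond, body, (tf.constant(0, dtype=tf.int32),),
                               back_prop=False, parallel_iterations=1)
        return result[0]

    print(traverse_one_tree(tf.constant([0.2, 0.7])).numpy())  # -> 4 (a leaf index)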