tf.constant and tf.placeholder behave differently

温柔的废话 2021-01-25 08:24

I want to wrap tf.metrics around a Sonnet module to measure the performance of each batch, and the following is the work I have done so far:

import tensorflow as tf
1 answer
  • 2021-01-25 09:00

    Are you sure it is not the tf.constant version that fails? I find that tf.metrics behaves oddly in combination with tf.constant:

    import tensorflow as tf
    
    a = tf.constant(1.)
    mean_a, mean_a_uop = tf.metrics.mean(a)
    with tf.control_dependencies([mean_a_uop]):
      mean_a = tf.identity(mean_a)
    
    sess = tf.InteractiveSession()
    tf.global_variables_initializer().run()
    tf.local_variables_initializer().run()
    
    for _ in range(10):
      print(sess.run(mean_a))
    

    returns, when run on the GPU,

    0.0
    2.0
    1.5
    1.3333334
    1.25
    1.2
    1.1666666
    1.1428572
    1.125
    1.1111112
    

    instead of all 1s. It looks as if the count is lagging one step behind the total. (I assume the first value would otherwise be inf, but is reported as zero because of how a zero count is handled.) A placeholder version of this code, on the other hand, runs as expected.
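    The lag hypothesis above can be checked with a minimal sketch in plain Python (no TensorFlow): suppose each `sess.run` updates `total` first, the fetch then divides the fresh `total` by the stale `count`, and a zero count is reported as 0.0. This reproduces the GPU sequence exactly.

    ```python
    def lagged_means(value=1.0, steps=10):
        """Simulate a fetch that sees the updated total but the stale count."""
        total, count = 0.0, 0
        out = []
        for _ in range(steps):
            total += value  # the update increments total first
            # the fetch reads the new total but the old count;
            # a zero count is reported as 0.0 rather than inf
            out.append(total / count if count else 0.0)
            count += 1      # count is incremented after the read
        return out

    # first values: 0.0, 2.0, 1.5, 1.333..., 1.25 -- matching the GPU output
    print(lagged_means())
    ```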

    On the CPU, the behavior is even weirder, as the output is non-deterministic. Example output:

    0.0
    1.0
    1.0
    0.75
    1.0
    1.0
    0.85714287
    0.875
    1.0
    0.9
    

    Looks like a bug you could log on TensorFlow's GitHub repo. (Note that computing running metrics on constants is not very useful -- but it is still a bug.)
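    The CPU output is consistent with a sketch (an assumption, not the actual runtime behavior) in which the fetch reads `total` and `count` independently, and each read races separately with that step's update. Every observed CPU value falls in the set of four possible stale/fresh combinations:

    ```python
    def possible_means(step):
        """Possible fetched values at step t (1-based) if each of the two
        variable reads independently sees the stale or the fresh value."""
        vals = set()
        for total in (step - 1, step):        # stale or fresh total
            for count in (step - 1, step):    # stale or fresh count
                vals.add(total / count if count else 0.0)
        return vals

    # every value in the CPU output above is explained by some interleaving
    cpu_output = [0.0, 1.0, 1.0, 0.75, 1.0, 1.0, 0.85714287, 0.875, 1.0, 0.9]
    for step, observed in enumerate(cpu_output, start=1):
        assert any(abs(observed - v) < 1e-6 for v in possible_means(step))
    ```

    When both reads see fresh (or both see stale) values the fetch returns 1.0; a fresh total with a stale count gives the lagging sequence from the GPU run, and a stale total with a fresh count gives values like 0.75 and 0.875.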

    EDIT I have now also stumbled on weird examples with tf.placeholder; it seems tf.metrics has a bug that is unfortunately not limited to its use with tf.constant.
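    One common way to sidestep the read/update race is to fetch the update op's return value instead of the metric tensor: tf.metrics.mean documents the update op's value as matching the freshly updated mean, so the read cannot race with the update. A sketch, using the tf.compat.v1 API (an assumption here, so it runs under TF 2.x):

    ```python
    import tensorflow.compat.v1 as tf

    tf.disable_eager_execution()

    a = tf.constant(1.)
    mean_a, mean_a_uop = tf.metrics.mean(a)

    sess = tf.InteractiveSession()
    tf.global_variables_initializer().run()
    tf.local_variables_initializer().run()

    # fetching the update op returns the mean computed after the update,
    # so each iteration prints the correct running mean of 1.0
    for _ in range(10):
        print(sess.run(mean_a_uop))
    ```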
