Simple TensorFlow computation not reproducible on different systems (macOS, Colab, Azure)


Question


I am investigating the reproducibility of code in TensorFlow on my macOS machine, on Google Colab, and on Azure with Docker. I understand that I can set a graph-level seed and an operation-level seed. I am using eager mode (so no parallelism optimization) and no GPUs. I use 100x100 random draws from the unit normal and calculate their mean and standard deviation.

The test code below verifies that I am not using the GPU, that I am running TensorFlow 1.12.0 or the preview of TensorFlow 2, and that the tensor is float32; it then checks the first element of the random tensor (which has a different value depending on whether I set only the graph-level seed or also an operation-level seed), as well as the mean and standard deviation. I also set the random seed of NumPy, although I do not use it here:

import numpy as np
import tensorflow as tf


def tf_1():
    """Returns True if TensorFlow is version 1"""
    return tf.__version__.startswith("1.")


def format_number(n):
    """Returns the number string-formatted with 12 number after comma."""
    return "%1.12f" % n


def set_top_level_seeds():
    """Sets TensorFlow graph-level seed and Numpy seed."""
    if tf_1():
        tf.set_random_seed(0)
    else:
        tf.random.set_seed(0)
    np.random.seed(0)

def generate_random_numbers(op_seed=None):
    """Returns random normal draws"""

    if op_seed:
        t = tf.random.normal([100, 100], seed=op_seed)
    else:
        t = tf.random.normal([100, 100])  

    return t    

def generate_random_number_stats_str(op_seed=None):
    """Returns mean and standard deviation from random normal draws"""

    t = generate_random_numbers(op_seed=op_seed)

    mean = tf.reduce_mean(t)
    sdev = tf.sqrt(tf.reduce_mean(tf.square(t - mean)))

    return [format_number(n) for n in (mean, sdev)]


def generate_random_number_1_seed():
    """Returns a single random number with graph-level seed only."""
    set_top_level_seeds()
    num = generate_random_numbers()[0, 0]
    return num


def generate_random_number_2_seeds():
    """Returns a single random number with graph-level seed only."""
    set_top_level_seeds()
    num = generate_random_numbers(op_seed=1)[0, 0]
    return num


def generate_stats_1_seed():
    """Returns mean and standard deviation wtih graph-level seed only."""
    set_top_level_seeds()
    return generate_random_number_stats_str()


def generate_stats_2_seeds():
    """Returns mean and standard deviation with graph and operation seeds."""
    set_top_level_seeds()
    return generate_random_number_stats_str(op_seed=1)


class Tests(tf.test.TestCase):
    """Run tests for reproducibility of TensorFlow."""

    def test_gpu(self):
        self.assertEqual(False, tf.test.is_gpu_available())

    def test_version(self):
        self.assertTrue(tf.__version__ == "1.12.0" or
                        tf.__version__.startswith("2.0.0-dev2019"))

    def test_type(self):
        num_type = generate_random_number_1_seed().dtype
        self.assertEqual(num_type, tf.float32)

    def test_eager_execution(self):
        self.assertEqual(True, tf.executing_eagerly())

    def test_random_number_1_seed(self):
        num_str = format_number(generate_random_number_1_seed())
        self.assertEqual(num_str, "1.511062622070")

    def test_random_number_2_seeds(self):
        num_str = format_number(generate_random_number_2_seeds())
        self.assertEqual(num_str, "0.680345416069")

    def test_arithmetic_1_seed(self):
        m, s = generate_stats_1_seed()

        if tf_1():
            self.assertEqual(m, "-0.008264393546")
            self.assertEqual(s, "0.995371103287")
        else:
            self.assertEqual(m, "-0.008264398202")
            self.assertEqual(s, "0.995371103287")

    def test_arithmetic_2_seeds(self):
        m, s = generate_stats_2_seeds()

        if tf_1():
            self.assertEqual(m, "0.000620653736")
            self.assertEqual(s, "0.997191190720")
        else:
            self.assertEqual(m, "0.000620646286")
            self.assertEqual(s, "0.997191071510")


if __name__ == '__main__':

    tf.reset_default_graph()

    if tf_1():
        tf.enable_eager_execution()
        tf.logging.set_verbosity(tf.logging.ERROR)

    tf.test.main()

On my local machine, the tests pass with TensorFlow 1.12.0 or the preview version of TensorFlow 2, in a virtual environment where I installed TensorFlow with pip install tensorflow==1.12.0 or pip install tf-nightly-2.0-preview. Note that the first random draw is the same in both versions, so I presume that all the random numbers are the same, yet the mean and standard deviation differ after nine decimal places. So TensorFlow implements the computations slightly differently in different versions.
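As an aside, one way to make such tests tolerant of these last-digit differences is to compare the numbers themselves rather than formatted strings, for instance with assertAllClose from tf.test.TestCase. A minimal sketch of a hypothetical extra method inside the Tests class (the method name, the reference values copied from the TF 1 branch above, and the tolerance of 1e-6 are my own choices, not part of the original tests):

    def test_arithmetic_2_seeds_approx(self):
        """Hypothetical tolerance-based variant of test_arithmetic_2_seeds."""
        set_top_level_seeds()
        t = generate_random_numbers(op_seed=1)
        mean = tf.reduce_mean(t)
        sdev = tf.sqrt(tf.reduce_mean(tf.square(t - mean)))
        # A loose absolute tolerance absorbs the differences beyond roughly
        # the seventh decimal place observed between versions and systems.
        self.assertAllClose(float(mean), 0.000620653736, atol=1e-6)
        self.assertAllClose(float(sdev), 0.997191190720, atol=1e-6)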

On Google Colab, I replace the last command with import unittest; unittest.main(argv=['first-arg-is-ignored'], exit=False) (see this issue). All tests but one pass: same random numbers, and the same mean and standard deviation with the graph-level seed only. The test that fails is the arithmetic of the mean with both the graph-level and the operation-level seed, with a difference starting at the ninth decimal place:

.F.......
======================================================================
FAIL: test_arithmetic_2_seeds (__main__.Tests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "<ipython-input-7-16d0afebf95f>", line 109, in test_arithmetic_2_seeds
    self.assertEqual(m, "0.000620653736")
AssertionError: '0.000620654086' != '0.000620653736'
- 0.000620654086
?           ^^^
+ 0.000620653736
?           ^^^


----------------------------------------------------------------------
Ran 9 tests in 0.023s

FAILED (failures=1)

On Azure, with a Standard_NV6 machine running the NVIDIA GPU Cloud Image and the following Dockerfile,

FROM tensorflow/tensorflow:latest-py3
ADD tests.py .
CMD python tests.py

the arithmetic tests fail both with the graph-level seed only and with graph-level and operation-level seeds:

FF.......
======================================================================
FAIL: test_arithmetic_1_seed (__main__.Tests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "tests.py", line 99, in test_arithmetic_1_seed
    self.assertEqual(m, "-0.008264393546")
AssertionError: '-0.008264395408' != '-0.008264393546'
- -0.008264395408
?              ^^
+ -0.008264393546
?            +  ^


======================================================================
FAIL: test_arithmetic_2_seeds (__main__.Tests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "tests.py", line 109, in test_arithmetic_2_seeds
    self.assertEqual(m, "0.000620653736")
AssertionError: '0.000620655250' != '0.000620653736'
- 0.000620655250
+ 0.000620653736


----------------------------------------------------------------------
Ran 9 tests in 0.016s

FAILED (failures=2)

When the tests fail on Google Colab or Azure, they fail consistently with the same actual values for the mean, so I believe that the problem is not some other random seed that I could set.

To see whether the problem is the TensorFlow build on different systems, I test on Azure with a different TensorFlow image, tensorflow/tensorflow:latest (without the -py3 tag).
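Only the base-image tag changes, so the Dockerfile presumably reads:

FROM tensorflow/tensorflow:latest
ADD tests.py .
CMD python tests.py

With this image, even the random number drawn with only the graph-level seed is different: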

FF..F....
======================================================================
FAIL: test_arithmetic_1_seed (__main__.Tests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "tests.py", line 99, in test_arithmetic_1_seed
    self.assertEqual(m, "-0.008264393546")
AssertionError: '0.001101632486' != '-0.008264393546'

======================================================================
FAIL: test_arithmetic_2_seeds (__main__.Tests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "tests.py", line 109, in test_arithmetic_2_seeds
    self.assertEqual(m, "0.000620653736")
AssertionError: '0.000620655250' != '0.000620653736'

======================================================================
FAIL: test_random_number_1_seed (__main__.Tests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "tests.py", line 89, in test_random_number_1_seed
    self.assertEqual(num_str, "1.511062622070")
AssertionError: '-1.398459434509' != '1.511062622070'

----------------------------------------------------------------------
Ran 9 tests in 0.015s

How can I ensure reproducibility of TensorFlow computations on different systems?


Answer 1:


Precision in floating-point calculations will depend on library compilation options and on details of the system architecture.

There are quite a few articles written on the difficulties of reliably comparing floating point numbers for equality. A search for 'floating point equality' will turn them up. One example is https://randomascii.wordpress.com/2012/02/25/comparing-floating-point-numbers-2012-edition/
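To illustrate the point with the values from the question: a comparison with an explicit tolerance treats the means observed on the different systems as equal, whereas exact equality does not. The tolerance of 1e-8 below is an arbitrary choice for this sketch:

import math

expected = 0.000620653736  # mean on macOS, TensorFlow 1.12.0
colab = 0.000620654086     # mean reported in the Colab failure
azure = 0.000620655250     # mean reported in the Azure failure

print(colab == expected)                            # False
print(math.isclose(colab, expected, abs_tol=1e-8))  # True
print(math.isclose(azure, expected, abs_tol=1e-8))  # True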



Source: https://stackoverflow.com/questions/54478877/simple-tensorflow-computation-not-reproducible-on-different-systems-macos-cola
