Floating point math in python / numpy not reproducible across machines

Submitted by 百般思念 on 2019-12-20 19:44:12

Question


Comparing the results of a floating point computation across a couple of different machines, they are consistently producing different results. Here is a stripped down example that reproduces the behavior:

import numpy as np
from numpy.random import randn as rand

M = 1024
N = 2048
np.random.seed(0)

a = rand(M,N).astype(dtype=np.float32)
w = rand(N,M).astype(dtype=np.float32)

b = np.dot(a, w)
for i in range(10):
    b = b + np.dot(b, a)[:, :1024]
    np.divide(b, 100., out=b)

print(b[0, :3])

Different machines produce different results like

  • [ -2.85753540e-05 -5.94204867e-05 -2.62337649e-04]
  • [ -2.85751412e-05 -5.94208468e-05 -2.62336689e-04]
  • [ -2.85754559e-05 -5.94202756e-05 -2.62337562e-04]

but I can also get identical results, e.g. by running on two MacBooks of the same vintage. This happens with machines that have the same version of Python and numpy, but that are not necessarily linked against the same BLAS libraries (e.g. the Accelerate framework on Mac, OpenBLAS on Ubuntu). However, shouldn't different numerical libraries all conform to the same IEEE floating point standard and give exactly the same results?
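One practical first step when two machines disagree is to check which BLAS each numpy build is actually linked against; numpy's standard `show_config` helper prints this at runtime:

```python
import numpy as np

# Print the BLAS/LAPACK libraries this numpy build was compiled against.
# Two machines whose output differs here can legitimately produce
# different low-order bits from np.dot.
np.show_config()
```

If the two machines report different BLAS backends here, differing last digits in `np.dot` results are expected rather than a bug.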


Answer 1:


Floating point calculations are not always reproducible.

You may get reproducible results for floating point calculations across different machines if you use the same executable image, the same inputs, and libraries built with the same compiler and identical compiler settings (switches).

However, if you use a dynamically linked library you may get different results, for numerous reasons. First of all, as Veedrac pointed out in the comments, a library may use different algorithms for its routines on different architectures. Second, a compiler may produce different code depending on its switches (various optimizations, control settings). Even a+b+c can yield different results across machines and compilers, because the order of evaluation and the precision of intermediate calculations are not guaranteed.
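The non-associativity of floating point addition is easy to demonstrate with plain Python doubles, no BLAS required:

```python
# Floating point addition is not associative: regrouping the same three
# terms changes how the intermediate results round.
left = (0.1 + 0.2) + 0.3   # 0.1 + 0.2 rounds to 0.30000000000000004
right = 0.1 + (0.2 + 0.3)  # 0.2 + 0.3 rounds to exactly 0.5

print(left == right)  # False
```

This is why an evaluation-order difference between two builds of the same routine is enough to change the last bits of the result.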

Read here why identical results are not guaranteed on different IEEE 754-1985 implementations. The newer standard (IEEE 754-2008) tries to go further, but it still doesn't guarantee identical results among different implementations, because, for example, it allows implementers to choose when tinyness (the underflow exception) is detected.
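Each individual IEEE 754 operation is exactly rounded, but the standard says nothing about the order in which a library accumulates a long sum, so two conforming implementations can still disagree. A small sketch of that effect, summing the same float32 values in two different orders:

```python
import numpy as np

rng = np.random.RandomState(0)
x = rng.randn(100000).astype(np.float32)

# Same values, different accumulation order: each partial sum rounds
# differently, so the two totals typically disagree in the last few bits.
s_forward = np.sum(x)
s_reversed = np.sum(x[::-1])

print(s_forward, s_reversed, s_forward == s_reversed)
```

Whether the two sums match on any particular run depends on numpy's internal summation strategy, but neither answer is "wrong"; both are valid IEEE results for the same mathematical sum.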

More information about floating point determinism can be found in this article.



Source: https://stackoverflow.com/questions/30065437/floating-point-math-in-python-numpy-not-reproducible-across-machines
