Different math rounding behaviour between Linux, Mac OS X and Windows

佛祖请我去吃肉 2020-12-19 07:25

Hi,

I developed some mixed C/C++ code with some intensive numerical calculations. When compiled on Linux and Mac OS X I get very similar results after the simulation, but the Windows build produces noticeably different results.

4 Answers
  •  感情败类
    2020-12-19 08:26

    The IEEE and C/C++ standards leave some aspects of floating-point math unspecified. Yes, the precise result of adding two floats is determined, but any more complicated calculation is not. For instance, if you add three floats then the compiler can do the evaluation at float precision, double precision, or higher. Similarly, if you add three doubles then the compiler may do the evaluation at double precision or higher.
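    A minimal sketch of this (the values are hypothetical, chosen so that the rounding of each partial sum matters): summing the same three floats gives two different answers depending on the precision of the intermediates.

    ```cpp
    #include <cstdio>

    int main() {
        // 2^24 is the last range where float has unit resolution:
        // 2^24 + 1 is not representable as a float.
        float a = 16777216.0f;  // 2^24
        float b = 1.0f;
        float c = 1.0f;

        // Intermediates rounded to float after every addition:
        // (2^24 + 1) rounds back down to 2^24, so both +1s are lost.
        float sum_float = (float)((float)(a + b) + c);

        // Intermediates kept at double precision, rounded to float once:
        // 2^24 + 2 = 16777218 is exactly representable as a float.
        float sum_double = (float)(((double)a + (double)b) + (double)c);

        printf("float intermediates:  %.1f\n", sum_float);   // 16777216.0
        printf("double intermediates: %.1f\n", sum_double);  // 16777218.0
    }
    ```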

    VC++ defaults to setting the x87 FPU's precision to double. I believe that gcc leaves it at 80-bit precision. Neither is clearly better, but they can easily give different results, especially if there is any instability in your calculations. In particular, 'tiny + large - large' may give very different results if you have extra bits of precision, or if the order of evaluation changes; a sketch of this case follows after the link. The implications of varying intermediate precision are discussed here:

    http://randomascii.wordpress.com/2012/03/21/intermediate-floating-point-precision/
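    As promised above, a small sketch of the 'tiny + large - large' case (hypothetical values; 1.0 is far below the float ULP of 1e10, which is about 1024, but well within the double ULP):

    ```cpp
    #include <cstdio>

    int main() {
        float tiny  = 1.0f;
        float large = 1.0e10f;

        // At float precision tiny is absorbed by large, so the
        // subtraction yields 0.
        float at_float = (float)((float)(tiny + large) - large);

        // With double (or x87 80-bit) intermediates, tiny survives the
        // addition and comes back out of the subtraction.
        float at_double = (float)(((double)tiny + (double)large) - (double)large);

        printf("float intermediates:  %g\n", at_float);   // 0
        printf("double intermediates: %g\n", at_double);  // 1
    }
    ```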

    The challenges of deterministic floating-point are discussed here:

    http://randomascii.wordpress.com/2013/07/16/floating-point-determinism/

    Floating-point math is tricky. You need to find out when your calculations diverge and examine the generated code to understand why. Only then can you decide what actions to take.
