Different math rounding behaviour between Linux, Mac OS X and Windows

Asked by 佛祖请我去吃肉 · 2020-12-19 07:25

Hi,

I developed some mixed C/C++ code with some intensive numerical calculations. When compiled on Linux and Mac OS X I get very similar results after the simulation, but on Windows the results differ.

4 Answers
  • 2020-12-19 08:06

    I can't speak to the implementation on Windows, but Intel chips contain 80-bit floating-point registers and can give greater precision than the IEEE-754 double format specifies. You can try calling this routine at the start of main() in your application (on Intel-chip platforms):

    #include <fpu_control.h>  // glibc: _FPU_GETCW/_FPU_SETCW and precision masks

    inline void fpu_round_to_IEEE_double()
    {
       fpu_control_t cw = 0;
       _FPU_GETCW(cw);        // get the current FPU control word
       cw &= ~_FPU_EXTENDED;  // mask out 80-bit (extended) register precision
       cw |= _FPU_DOUBLE;     // mask in 64-bit (double) register precision
       _FPU_SETCW(cw);        // set the FPU control word
    }
    

    I think this is distinct from the rounding modes discussed by @Alok.
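
    If you need the Windows counterpart, MSVC's CRT exposes the same control word through _controlfp_s; a minimal sketch (note that precision control applies only to 32-bit x86 builds, since x64 math uses SSE2 and ignores _MCW_PC):

    #include <float.h>   // MSVC CRT: _controlfp_s, _PC_53, _MCW_PC

    // Force the x87 FPU to 53-bit (double) precision on 32-bit Windows builds.
    void fpu_round_to_IEEE_double_win(void)
    {
       unsigned int current = 0;
       _controlfp_s(&current, _PC_53, _MCW_PC);  // set precision control to double
    }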

  • 2020-12-19 08:13

    In addition to the runtime rounding settings that others have mentioned, you can control the Visual Studio compiler settings under Properties > C++ > Code Generation > Floating Point Model. I've seen cases where setting this to "Fast" caused bad numerical behaviour (e.g. iterative methods failing to converge).

    The settings are explained here: http://msdn.microsoft.com/en-us/library/e7s85ffb%28VS.80%29.aspx
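
    These project settings correspond to the /fp:precise, /fp:fast, and /fp:strict compiler switches. If only part of your code is sensitive, MSVC also lets you scope the model with a pragma; a minimal sketch (the function is just for illustration):

    #pragma float_control(precise, on, push)  // force precise semantics locally

    // Naive summation that /fp:fast would otherwise be free to reorder.
    double accumulate(const double* v, int n)
    {
       double s = 0.0;
       for (int i = 0; i < n; ++i)
          s += v[i];
       return s;
    }

    #pragma float_control(pop)                // restore the command-line model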

  • 2020-12-19 08:26

    The IEEE and C/C++ standards leave some aspects of floating-point math unspecified. Yes, the precise result of adding two floats is determined, but any more complicated calculation is not. For instance, if you add three floats the compiler can do the evaluation at float precision, double precision, or higher. Similarly, if you add three doubles the compiler may do the evaluation at double precision or higher.

    VC++ defaults to setting the x87 FPU's precision to double. I believe that gcc leaves it at 80-bit precision. Neither is clearly better, but they can easily give different results, especially if there is any instability in your calculations. In particular 'tiny + large - large' may give very different results if you have extra bits of precision (or if the order of evaluation changes). The implications of varying intermediate precision are discussed here:

    http://randomascii.wordpress.com/2012/03/21/intermediate-floating-point-precision/
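
    As a concrete illustration of the 'tiny + large - large' case, here is a small self-contained C program (my sketch, not from the original answer). On an SSE2 build the float expression prints 0; on an x87 build with 80-bit intermediates, a remnant of tiny can survive:

    #include <stdio.h>

    int main(void)
    {
       volatile float tiny  = 1.0e-8f;  /* volatile blocks constant folding */
       volatile float large = 1.0e8f;

       /* At pure float precision, tiny is absorbed by large and the result
          is 0. With double or 80-bit intermediates, part of tiny survives
          the cancellation. */
       float  f = tiny + large - large;
       double d = (double)tiny + (double)large - (double)large;

       printf("float intermediates:  %g\n", f);  /* 0 under SSE2 */
       printf("double intermediates: %g\n", d);  /* about 1.49e-08 */
       return 0;
    }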

    The challenges of deterministic floating-point are discussed here:

    http://randomascii.wordpress.com/2013/07/16/floating-point-determinism/

    Floating-point math is tricky. You need to find out when your calculations diverge and examine the generated code to understand why. Only then can you decide what actions to take.

  • 2020-12-19 08:27

    There are four rounding modes for floating-point numbers: round toward zero, round up, round down, and round to nearest. The default may differ between compilers and operating systems. To change the rounding mode programmatically, see fesetround, shown below. It is specified by the C99 standard, but your compiler may provide it even outside strict C99 mode.
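
    For example, a minimal sketch using <fenv.h> (on glibc, link with -lm; gcc may also need -frounding-math so it doesn't constant-fold across the mode change):

    #include <fenv.h>
    #include <stdio.h>

    int main(void)
    {
       volatile double one = 1.0, three = 3.0;  /* volatile blocks constant folding */

       fesetround(FE_DOWNWARD);                 /* round toward -infinity */
       printf("down: %.20f\n", one / three);

       fesetround(FE_UPWARD);                   /* round toward +infinity */
       printf("up:   %.20f\n", one / three);

       fesetround(FE_TONEAREST);                /* restore the default mode */
       return 0;
    }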

    You can also try gcc's -ffloat-store option, which tries to prevent gcc from keeping 80-bit floating-point values in registers by spilling variables to memory.

    Also, if your results change depending upon the rounding mode, and the differences are significant, your calculations are probably not numerically stable. Consider doing interval analysis, or using some other method to find the problem. For more information, see How Futile are Mindless Assessments of Roundoff in Floating-Point Computation? (pdf) and The pitfalls of verifying floating-point computations (ACM link; a PDF is available elsewhere if that doesn't work for you).
