Why does GDB evaluate floating-point arithmetic differently from C++?

情深已故 · 2021-02-13 01:46

I've encountered something a little confusing while trying to deal with a floating-point arithmetic problem.

First, the code. I've distilled the essence of my problem:

3 Answers
  •  轮回少年
    2021-02-13 02:25

    It's not GDB vs. the processor, it's memory vs. the processor. The x87 FPU on x86 hardware works with more bits of precision than a `double` in memory actually holds: 80 bits in a register vs. 64 bits in memory. As long as a value stays in the FPU registers it retains that 80-bit extended precision; it only gets rounded to 64 bits when it is written out to memory, so *when* a value gets sent to memory determines *how* it gets rounded. If GDB writes every intermediate calculation result out of the CPU (I have no idea whether that is the case, or anywhere close), it does the rounding at every step, which leads to slightly different results than code that keeps intermediates in registers.
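    To make the register-vs-memory precision gap concrete, here is a minimal sketch. It assumes a platform where `long double` maps to the x87 80-bit extended format (typical on x86-64 Linux with GCC/Clang; note MSVC makes `long double` identical to `double`, so the difference vanishes there). The same mathematical value, 1/3, is rounded once to 64-bit `double` precision and once to extended precision, mimicking a value that was spilled to memory vs. one kept in an FPU register:

    ```cpp
    #include <cstdio>

    int main() {
        // Rounded to a 53-bit significand, as when a result is stored
        // to a double in memory.
        double d = 1.0 / 3.0;

        // On x86 with an 80-bit long double, this keeps a 64-bit
        // significand, as when a result stays in an x87 register.
        long double ld = 1.0L / 3.0L;

        std::printf("double:      %.20f\n", d);
        std::printf("long double: %.20Lf\n", ld);

        // Where long double is wider than double, the two printed
        // values diverge after roughly 16-17 significant digits.
        std::printf("equal after widening: %s\n",
                    (static_cast<long double>(d) == ld) ? "yes" : "no");
        return 0;
    }
    ```

    The same effect is why compiler flags such as GCC's `-ffloat-store` and `-fexcess-precision=standard` exist: they force intermediates out to memory (or to their declared precision) at every step, trading speed for reproducible rounding.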
