I've encountered something a little confusing while trying to deal with a floating-point arithmetic problem.
First, the code. I've distilled the essence of my problem into the code below.
It's not GDB vs. the processor, it's memory vs. the processor. The x87 floating-point unit on x86/x64 processors works with more bits of precision than a double in memory holds (80-bit extended precision vs. 64 bits). As long as a value stays in the FPU registers it keeps that extended precision, but the moment it's stored to memory it gets rounded to 64 bits, so when a value leaves the registers determines when, and therefore how, it gets rounded. If GDB forces every intermediate result out of the CPU and back through memory (I have no idea if this is the case, or anywhere close), it will do the rounding at every step, which leads to slightly different results.
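Here's a small C sketch that can make the effect visible. It's only an illustration under a specific assumption: a build that actually uses the x87 unit at low optimization (e.g. `gcc -m32 -mfpmath=387 -O0`). With SSE2 math (the x86-64 default) both paths round identically and the two lines print the same value.

```c
#include <stdio.h>

int main(void)
{
    double a = 1.0 / 3.0;   /* nearest double to 1/3 */
    double b = 3.0;

    /* Whole expression may be evaluated in an 80-bit x87 register:
       the product 1 - 2^-54 is representable there, so the subtraction
       sees the unrounded value and yields a tiny nonzero number. */
    double kept_in_register = a * b - 1.0;

    /* volatile forces the product out to a 64-bit double in memory first;
       1 - 2^-54 rounds (to nearest even) to exactly 1.0, so the
       subtraction then yields exactly 0.0. */
    volatile double spilled = a * b;
    double rounded_in_memory = spilled - 1.0;

    printf("register path: %.20g\n", kept_in_register);
    printf("memory path:   %.20g\n", rounded_in_memory);
    return 0;
}
```

If a debugger (or a flag like `-ffloat-store`) makes everything take the "memory path", you get the second behavior everywhere, which is one plausible source of the discrepancy you're seeing.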