I've encountered something a little confusing while trying to deal with a floating-point arithmetic problem.
First, the code. I've distilled my problem down to its essence:
Could be because the x86 FPU works in registers at 80-bit precision, but rounds to 64 bits when a value is stored to memory. GDB stores every intermediate result to memory on each step of its (interpreted) computation, so it only ever sees the 64-bit rounded values.
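
Here is a minimal sketch of that effect in C (illustrative only, not your original code; the values are chosen specifically to expose double rounding). The product `(1.0/3.0) * 3.0` lands exactly halfway between two doubles, so whether the intermediate stays in an 80-bit x87 register or is rounded through a 64-bit memory slot changes the final result:

```c
#include <stdio.h>

int main(void) {
    double a = 1.0 / 3.0;      /* nearest double to 1/3 */

    /* The whole expression may stay on the x87 stack at 80-bit
       precision: a*3.0 == 1 - 2^-54 exactly there, so b == -2^-54. */
    double b = a * 3.0 - 1.0;

    /* volatile forces a store, rounding 1 - 2^-54 to 64 bits.
       The tie rounds to even, giving exactly 1.0, so c == 0.0. */
    volatile double t = a * 3.0;
    double c = t - 1.0;

    printf("kept in register:   %.20g\n", b);
    printf("rounded via memory: %.20g\n", c);
    return 0;
}
```

Whether `b` actually stays in a register depends on your compiler and flags; with `gcc -m32 -mfpmath=387` you should see the two lines differ, while on x86-64 (SSE math by default) every operation is already rounded to 64 bits and both print 0. GDB's interpreter behaves like the second version: it spills after every step, which is why its answer can disagree with the compiled code's.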