An error trap trips after running a physics simulation for about 20 minutes. Realising this would be a pain to debug, I duplicated the relevant subroutine in a new project and compiled your sample with Visual C++. I can confirm the output is slightly different from what you see in the debugger; here's mine:
CAV: 4594681439063077250, 4603161398996347097, 4605548671330989714
T1: -4626277815076045984, -4637257536736295424, 4589609575355367200
CP: 4589838838395290724, -4627337114727508684, 4592984408164162561
I don’t know for sure what might cause the difference, but here’s an idea.
Since you’ve already looked at the machine code: what are you compiling to, legacy x87 or SSE? I presume it’s SSE; most compilers have targeted it by default for years. And if you pass `-march=native` to GCC, your CPU very likely supports some flavour of FMA (AMD since late 2011, Intel since 2013). In that case GCC may have emitted fused multiply-add instructions (the ones behind the `_mm_fmadd_pd` / `_mm_fmsub_pd` intrinsics), which round once instead of twice, causing your 1-bit difference.
However, that’s all theory. My advice: instead of trying to find out what caused the difference, fix your outer code.
It’s a bad idea to trap to the debugger on a condition like this. The numerical difference is tiny: it’s the least significant bit of the 52-bit mantissa, i.e. a relative error of just 2^(-52).
Even if you find out what caused it and disable FMA (or whatever else was responsible), that fix is fragile: you’ll be supporting the hack for the lifetime of the project. Upgrade your compiler, let the compiler decide to optimize the code differently, or upgrade the CPU, and your code may break the same way again.
A better approach: just stop comparing floating-point numbers for exact equality. Instead, compute e.g. the absolute difference and compare it against a small enough constant.