I am having a bit of trouble understanding how the precision of these doubles affects the outcome of arithmetic operations in MATLAB. I thought that since both a & b are 64-bit IEEE-754 floating-point numbers, they would have enough precision (with a 53-bit mantissa) to represent about 16 significant decimal digits. But it takes more like 45 significant decimal digits to tell the difference between (1+a) = 1.00...000122 and 1.000 in your example.
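
For concreteness, here is a minimal MATLAB sketch of that effect. The value `a = 1.22e-45` is an assumption read off the 1.00...000122 figure above, not taken from your actual code, so substitute your own a:

```matlab
% Assumed example value: a on the order of 1.22e-45 (inferred, not from the original code).
a = 1.22e-45;

eps(1)                     % spacing of doubles near 1: about 2.2204e-16
(1 + a) == 1               % true: a is far below eps(1), so the sum rounds back to exactly 1
fprintf('%.50f\n', 1 + a)  % prints 1.0000...0; the 1.22e-45 tail cannot be stored in a double
```

Any addend smaller than about eps(1)/2 (roughly 1.1e-16) is rounded away when added to 1, which is why the comparison above comes out true.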