I understand that floating point arithmetic as performed in modern computer systems is not always consistent with real arithmetic. I am trying to contrive a small C# program that demonstrates this behavior.
Try performing repeated operations on an irrational number (such as a square root) or a fraction with a long repeating decimal expansion. You'll quickly see errors accumulate. For instance, compare 1000000 * Sqrt(2) with Sqrt(2) + Sqrt(2) + ... + Sqrt(2) (one million terms): the multiplication rounds once, while each addition rounds again, so the repeated sum drifts away from the product.
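
Here's a minimal C# sketch of that comparison (the class name and loop count are just illustrative; `Math.Sqrt` and the `R` round-trip format are standard .NET):

```csharp
using System;

class FloatingPointDrift
{
    static void Main()
    {
        const int n = 1_000_000;
        double sqrt2 = Math.Sqrt(2);

        // Single multiplication: one rounding step total.
        double product = n * sqrt2;

        // Repeated addition: one rounding step per iteration,
        // so the error can grow with n.
        double sum = 0.0;
        for (int i = 0; i < n; i++)
            sum += sqrt2;

        Console.WriteLine($"n * Sqrt(2)          = {product:R}");
        Console.WriteLine($"Sqrt(2) summed n times = {sum:R}");
        Console.WriteLine($"difference             = {sum - product:R}");
    }
}
```

On typical hardware the difference comes out nonzero, which is exactly the point: both values would be identical in real arithmetic. Using `float` instead of `double` makes the drift far more dramatic, since there are fewer bits available to absorb each rounding step.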