I understand that floating point arithmetic as performed in modern computer systems is not always consistent with real arithmetic. I am trying to contrive a small C# program that demonstrates this.
I suggest that, if you're truly interested, you take a look at any one of a number of pages that discuss floating point numbers, some in gory detail. You will soon realize that, in a computer, they're a compromise, trading off accuracy for range. If you are going to be writing programs that use them, you do need to understand their limitations and the problems that can arise if you don't take care. It will be worth your time.
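For instance, here is a minimal C# sketch (just an illustration, not the program the asker has in mind) showing two classic ways `double` arithmetic disagrees with real arithmetic: an equality that "should" hold fails, and addition stops being associative.

```csharp
using System;

class FloatingPointDemo
{
    static void Main()
    {
        // 0.1 and 0.2 have no exact binary representation, so their sum
        // is not exactly 0.3.
        double sum = 0.1 + 0.2;
        Console.WriteLine(sum == 0.3);          // False
        Console.WriteLine(sum.ToString("R"));   // 0.30000000000000004

        // Addition of doubles is not associative: the 1.0 is lost when it
        // is added to the huge term first, because it is smaller than the
        // spacing between representable doubles near 1e16.
        double x = 1e16, y = -1e16, z = 1.0;
        Console.WriteLine((x + y) + z);         // 1
        Console.WriteLine(x + (y + z));         // 0
    }
}
```

This is also why the usual advice is to compare floating point values with a tolerance rather than with `==`.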