I know a little bit about how floating-point numbers are represented, but not enough, I'm afraid.
The general question is:
For a given precision…
The precision quoted from Peter R's link to the MSDN reference is probably a good rule of thumb, but of course reality is more complicated.
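If you want to see where that rule of thumb comes from, here's a quick Python sketch of my own (not from the MSDN page), assuming the platform float is an ordinary 64-bit IEEE-754 double, which it is on virtually all current hardware:

    import math
    import sys

    # A 64-bit IEEE-754 double carries a 53-bit significand.
    bits = sys.float_info.mant_dig    # 53 on a typical platform
    print(bits * math.log10(2))       # ~15.95 -> "roughly 15-16 decimal digits"
    print(sys.float_info.dig)         # 15: decimal digits guaranteed to round-trip

That fractional 15.95 is why quoted figures hover between 15 and 16 digits: 15 decimal digits are guaranteed to survive a round trip through a double, while it takes 17 digits to write a double out and read the exact same bits back in.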
The fact that the "point" in "floating point" is a binary point and not a decimal point has a way of defeating our intuition. The classic example is 0.1, which needs only one digit of precision in decimal but has no exact binary representation at all, however many bits you give it.
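To make the 0.1 example concrete, here's a short Python sketch (again my own, not from the references above) that prints the binary fraction the literal 0.1 actually gets stored as:

    from decimal import Decimal

    # The literal 0.1 is silently rounded to the nearest 53-bit binary fraction;
    # constructing a Decimal from the float shows the value that is really stored.
    print(Decimal(0.1))
    # 0.1000000000000000055511151231257827021181583404541015625

    # The rounding error surfaces as soon as you do arithmetic with it:
    print(0.1 + 0.2)         # 0.30000000000000004
    print(0.1 + 0.2 == 0.3)  # False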
If you have a weekend to kill, have a look at What Every Computer Scientist Should Know About Floating-Point Arithmetic. You'll probably be particularly interested in the sections on Precision and Binary to Decimal Conversion.