I've read a lot about floats, but it's all unnecessarily involved. I think I've got it pretty much understood, but there's just one thing I'd like to know for sure.
One point not yet mentioned is that, semantically, a floating-point number is best regarded as representing a range of values. That range has a precisely defined center point, and the IEEE spec generally requires that the result of a floating-point computation be the number whose range contains the point one would get by operating on the center points of the original numbers. But consider the sequence:
double N1 = 0.1; float N2 = (float)N1; double N3 = N2;
N2 is the unambiguously correct single-precision representation of the value that had been represented in N1, despite the language's silly requirement to use an explicit cast. N3 will represent one of the many values that N2 could represent (the language spec happens to choose the double whose range is centered upon the middle of the float's range). Note that while N2 represents the value of its type whose range contains the correct value, N3 does not.
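To make that concrete, here is a minimal sketch in Java (whose float/double are the same IEEE-754 types, with the same explicit-cast requirement for double-to-float). Printing the exact decimal expansions shows that N3 is the center of N2's range, not the value N1 held:

```java
import java.math.BigDecimal;

public class RoundTrip {
    public static void main(String[] args) {
        double n1 = 0.1;        // nearest double to the decimal 0.1
        float n2 = (float) n1;  // nearest float to n1's value; cast is mandatory
        double n3 = n2;         // widening conversion: exact, picks the center of n2's range

        // Exact values of the two doubles:
        System.out.println(new BigDecimal(n1)); // 0.1000000000000000055511151231257827021181583404541015625
        System.out.println(new BigDecimal(n3)); // 0.100000001490116119384765625

        System.out.println(n1 == n3); // false: the round trip did not return the original value
    }
}
```

The round trip double → float → double loses information, even though each individual conversion is the best one available.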
Incidentally, conversion of a number from a string to a float in .NET and .NET languages seems to go through an intermediate conversion to double, which can alter the value. For example, even though the value 13571357 is representable as a single-precision float, the literal 13571357.499999999069f gets rounded to 13571358 (even though the decimal value is obviously closer to 13571357): the string first rounds to the double 13571357.5, which sits exactly halfway between the two candidate floats, and the tie then rounds to the even one, 13571358.
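This double-rounding effect can be reproduced in Java, which offers both paths: parsing to double and then casting (mimicking the two-step conversion .NET reportedly performs) versus the correctly rounded one-step `Float.parseFloat`:

```java
public class DoubleRounding {
    public static void main(String[] args) {
        String s = "13571357.499999999069";

        // Two-step: string -> double -> float (the behavior described above).
        double d = Double.parseDouble(s);   // nearest double is exactly 13571357.5
        float viaDouble = (float) d;        // 13571357.5 is a tie; rounds to even: 13571358

        // One-step: string -> float, correctly rounded.
        float direct = Float.parseFloat(s); // 13571357, the float nearest the decimal value

        System.out.println(viaDouble); // 1.3571358E7
        System.out.println(direct);    // 1.3571357E7
    }
}
```

Rounding twice can land on a different float than rounding once, which is exactly why a correctly rounded decimal-to-float conversion must not use an intermediate double.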