I just learned from Peter Lawrey's post that this is a valid expression, and it evaluates to true:
333333333333333.33d == 333333333333333.3d
My question is: why are double literals that can't be represented exactly in a double allowed, while integer literals that can't be represented are disallowed? What is the rationale for this decision?
As a side note, I can actually trigger an out-of-range compile error for double literals :-)
99999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999d
So as long as the literal is within the (min, max) range, it gets approximated, but outside that range the compiler refuses to approximate it.
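A minimal sketch (my own, not from the post) of what this looks like in practice, assuming a standard javac/JVM: both literals round to the same nearest double, and a literal beyond Double.MAX_VALUE is rejected at compile time.

```java
public class LiteralRounding {
    public static void main(String[] args) {
        // Both literals are rounded to the nearest representable double;
        // that nearest double happens to be the same for both, so == is true.
        System.out.println(333333333333333.33d == 333333333333333.3d);  // true

        // Double.toHexString shows the value actually stored:
        // the same hex string is printed twice.
        System.out.println(Double.toHexString(333333333333333.33d));
        System.out.println(Double.toHexString(333333333333333.3d));

        // A literal that would round to infinity (beyond Double.MAX_VALUE,
        // about 1.8E308) is a compile-time error, e.g.:
        // double tooBig = 1e309;  // error: floating-point number too large
    }
}
```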
The problem is that very few of the decimals you might type can be represented exactly as an IEEE floating-point value. So if you disallowed all non-exact constants, double literals would become very unwieldy to use. Most of the time the "pretend we can represent it" behaviour is far more useful.
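A short illustration of that point, using the familiar 0.1 + 0.2 case (my own example, not from the answer):

```java
import java.math.BigDecimal;

public class InexactDecimals {
    public static void main(String[] args) {
        // Ordinary-looking decimals are silently rounded to the nearest double...
        System.out.println(0.1 + 0.2 == 0.3);  // false
        System.out.println(0.1 + 0.2);         // 0.30000000000000004

        // ...because what is actually stored is the closest representable value:
        System.out.println(new BigDecimal(0.1));
        System.out.println(new BigDecimal(0.3));
    }
}
```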
The main reason is probably that Java simply can't tell when you're running out of precision, because there is no CPU opcode for that.
Why is there no CPU flag or similar? Because the representation of the number simply doesn't allow for it. For example, even a simple number like "0.1" has no exact representation: 0.1 gives you "00111111 10111001 10011001 10011001 10011001 10011001 10011001 10011010" (see http://www.binaryconvert.com/result_double.html?decimal=048046049).
That value isn't precisely 0.1 but 1.00000000000000005551115123126E-1. So even for these "simple" cases, the code would have to throw an exception.
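For the curious, here is a small sketch (my addition) that reproduces the bit pattern and the exact stored value quoted above:

```java
import java.math.BigDecimal;

public class ShowBits {
    public static void main(String[] args) {
        // Prints the 64-bit pattern quoted above (without the byte spacing).
        long bits = Double.doubleToLongBits(0.1);
        String pattern = String.format("%64s", Long.toBinaryString(bits)).replace(' ', '0');
        System.out.println(pattern);

        // The exact decimal expansion of the double closest to 0.1:
        // 0.1000000000000000055511151231257827021181583404541015625
        System.out.println(new BigDecimal(0.1));
    }
}
```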
Source: https://stackoverflow.com/questions/7076653/why-is-arbitrary-precision-in-double-literals-allowed-in-java