int i = 0, j = 0;
double nan1 = (double)0/0;
double nan2 = (double)0/0;
double nan3 = (double)i/j;
System.out.println(Double.doubleToRawLongBits(nan1) == Double.doubleToRawLongBits(nan2));
System.out.println(Double.doubleToRawLongBits(nan1) == Double.doubleToRawLongBits(nan3));
The IEEE 754 standard allows different bit patterns for NaN. For computation and comparison purposes they all behave the same (i.e. NaN compares not equal to anything, including itself, is unordered, and every computation involving NaN results in NaN). With doubleToRawLongBits you get the exact bit pattern that is stored. This is also detailed in the JLS:
For the most part, the Java platform treats NaN values of a given type as though collapsed into a single canonical value (and hence this specification normally refers to an arbitrary NaN as though to a canonical value). However, version 1.3 of the Java platform introduced methods enabling the programmer to distinguish between NaN values: the Float.floatToRawIntBits and Double.doubleToRawLongBits methods. The interested reader is referred to the specifications for the Float and Double classes for more information.
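To make this concrete, here is a small sketch (variable names mirror your snippet; assume it sits in a main method): Double.doubleToLongBits collapses every NaN to the canonical pattern, while doubleToRawLongBits hands back whatever bits are actually stored, so only the latter can tell your NaNs apart.

double nan1 = (double) 0 / 0;   // folded to a NaN constant by the compiler
int i = 0, j = 0;
double nan3 = (double) i / j;   // NaN produced at run time

System.out.println(nan1 == nan3);                             // false: NaN is never == to anything, not even itself
System.out.println(Double.isNaN(nan1) && Double.isNaN(nan3)); // true: both are NaN as far as arithmetic is concerned

// doubleToLongBits collapses every NaN to the canonical 0x7ff8000000000000L ...
System.out.println(Double.doubleToLongBits(nan1) == Double.doubleToLongBits(nan3));        // true
// ... whereas doubleToRawLongBits returns the stored bits as-is, so this may print false
System.out.println(Double.doubleToRawLongBits(nan1) == Double.doubleToRawLongBits(nan3));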
In your case the sign bit is different. Here I can direct you to Wikipedia, which summarises this concisely:
In IEEE 754 standard-conforming floating point storage formats, NaNs are identified by specific, pre-defined bit patterns unique to NaNs. The sign bit does not matter.
Both your values are NaN, they just use different bits to represent that. This is allowed by IEEE 754 and in this case probably stems from the compiler substituting Double.NaN for a constant computation that results in NaN, while the actual hardware produces a different bit pattern, as Mysticial already suspected in a comment to the question.
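If you want to see the difference directly, printing the raw bits in hex makes the sign bit visible. A sketch (names constantNaN/runtimeNaN are mine; the run-time pattern shown in the comments is what typical x86 hardware produces and is not guaranteed by the JLS):

int i = 0, j = 0;
double constantNaN = (double) 0 / 0;   // javac folds this to the canonical Double.NaN
double runtimeNaN  = (double) i / j;   // computed by the hardware at run time

System.out.println(Long.toHexString(Double.doubleToRawLongBits(constantNaN)));
// typically 7ff8000000000000 -- sign bit clear
System.out.println(Long.toHexString(Double.doubleToRawLongBits(runtimeNaN)));
// often    fff8000000000000 -- sign bit set, still a perfectly valid NaN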