I just read a book about JavaScript. The author mentioned a rounding error that occurs in IEEE 754 floating point arithmetic.
For example, adding 0.1 and 0.2 yields 0.30000000000000004 rather than the expected 0.3.
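Here is a quick way to reproduce it in a browser console or Node.js (the variable name sum below is just for illustration):

    // JavaScript numbers are IEEE 754 double precision values,
    // so this reproduces the rounding behaviour the book describes.
    const sum = 0.1 + 0.2;

    console.log(sum);          // prints 0.30000000000000004
    console.log(sum === 0.3);  // prints false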
The closest representations of those three numbers in double precision floating point are (shown here to 17 significant digits):

    0.1  ->  0.10000000000000001
    0.2  ->  0.20000000000000001
    0.3  ->  0.29999999999999999

The next larger representable number beyond 0.29999999999999999 is 0.30000000000000004. The exact sum of the stored values for 0.1 and 0.2 lies between those two numbers, and rounding to nearest gives 0.30000000000000004.
So you are comparing 0.29999999999999999 and 0.30000000000000004. Does this give you more insight as to what is happening?
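You can inspect the stored values directly in JavaScript by asking toFixed for more digits than the default formatting shows; on an IEEE 754 compliant engine (which the mainstream JavaScript engines are) this prints the values in the comments:

    // toFixed(20) exposes more of the digits that are actually stored.
    console.log((0.1).toFixed(20));        // 0.10000000000000000555
    console.log((0.2).toFixed(20));        // 0.20000000000000001110
    console.log((0.3).toFixed(20));        // 0.29999999999999998890
    console.log((0.1 + 0.2).toFixed(20));  // 0.30000000000000004441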
As for using decimal instead of binary representation, that doesn't eliminate the problem either. Take one third, for example:

    1/3 = 0.333333333333333...

which cannot be written exactly with any finite number of decimal digits. Any computation should always take representation error into account.
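In practice, taking representation error into account usually means comparing results against a small tolerance instead of with ===. A minimal sketch, assuming a relative tolerance of a few machine epsilons is acceptable for your use case (the helper name nearlyEqual is just illustrative, not a standard API):

    // Treat two numbers as equal if they differ by no more than a small
    // tolerance scaled to their magnitude. Number.EPSILON is the gap
    // between 1 and the next representable double.
    function nearlyEqual(a, b, tolerance = 4 * Number.EPSILON) {
      return Math.abs(a - b) <= tolerance * Math.max(1, Math.abs(a), Math.abs(b));
    }

    console.log(0.1 + 0.2 === 0.3);           // false
    console.log(nearlyEqual(0.1 + 0.2, 0.3)); // true

The right tolerance depends on the computation being done; there is no single value that works everywhere.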