As for the difference in hashes, it does indeed look wrong (same value, different hash), but that is already answered by LukeH in his comment.
As for the cast to double, though, I see it this way:
42000000000000000000000 has a different (and less 'precise') binary representation than 420000000000000000000000, and therefore you pay a higher price when trying to round it.
Why does it matter? Apparently decimal keeps track of its 'precision': for example, it stores 1m as 1*10^0, but its equivalent 1.000m as 1000*10^-3, most likely so that it can be printed later as "1.000". Therefore, when converting your decimal to double, it is not 42 that needs to be represented but, for example, 420000000000000000, and that is far from optimal (the mantissa and exponent are converted separately).
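You can see this scale bookkeeping directly with the Decimal.GetBits method linked at the bottom of this answer. A minimal sketch of my own (not from the question), dumping the mantissa and scale of 1m and 1.000m:

    using System;

    class DecimalScaleDemo
    {
        static void Main()
        {
            // decimal.GetBits returns four ints: the 96-bit integer mantissa
            // (lo, mid, hi) and a flags word whose bits 16-23 hold the scale,
            // i.e. the power of ten the mantissa is divided by.
            ShowBits(1m);      // mantissa 1,    scale 0  ->  1 * 10^0
            ShowBits(1.000m);  // mantissa 1000, scale 3  ->  1000 * 10^-3
        }

        static void ShowBits(decimal d)
        {
            int[] bits = decimal.GetBits(d);
            int scale = (bits[3] >> 16) & 0xFF;
            Console.WriteLine($"{d,-8} lo={bits[0]} mid={bits[1]} hi={bits[2]} scale={scale}");
        }
    }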
According to a simulator I found (a JavaScript one for Java, so not exactly what we have in C# and therefore with somewhat different results, but still meaningful):
42000000000000000000 ~ 1.1384122371673584 * 2^65 ~ 4.1999998e+19
420000000000000000000 = 1.4230153560638428 * 2^68 = 4.2e+20 (nice one)
4200000000000000000000 ~ 1.7787691354751587 * 2^71 ~ 4.1999999e+21
42000000000000000000000 ~ 1.111730694770813 * 2^75 ~ 4.1999998e+22
As you can see, the value for 4.2E19 is less precise than the one for 4.2E20 and may end up rounded to something starting with 4.19 rather than exactly 4.2. If this is how the conversion to double happens, then the result is not surprising. And since multiplying by 10 usually produces a number that is not well represented in binary, we should expect such issues often.
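To check the actual C# conversion rather than the simulator, here is a quick sketch of my own that casts the same values to double and prints them with 17 significant digits. Whether a given value converts exactly depends on whether its integer value fits into double's 53-bit mantissa, so the digits you see will differ from the simulator's single-precision figures.

    using System;

    class DecimalToDoubleDemo
    {
        static void Main()
        {
            // Values too large for double's 53-bit mantissa may show a small
            // deviation from 4.2eNN after the cast.
            decimal[] values =
            {
                42000000000000000000m,
                420000000000000000000m,
                4200000000000000000000m,
                42000000000000000000000m
            };

            foreach (decimal d in values)
            {
                double converted = (double)d;
                // G17 prints enough significant digits for any rounding in
                // the conversion to become visible.
                Console.WriteLine($"{d,30} -> {converted:G17}");
            }
        }
    }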
Now, to my mind, this is all the price of keeping track of significant digits in decimal. If that were not important, we could always normalize, e.g., 4200*10^-2 to 4.2*10^1 (as double does), and the conversion to double would not be so error-prone in the context of hash codes. Is it worth it? Not for me to judge.
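To make that trade-off concrete, a tiny sketch (again my own illustration) showing that double normalizes the value while decimal keeps the scale it was written with:

    using System;

    class NormalizationDemo
    {
        static void Main()
        {
            decimal a = 42m;     // stored as 42   * 10^0
            decimal b = 42.00m;  // stored as 4200 * 10^-2 (scale is kept)
            double  c = 42.00;   // double normalizes; trailing zeros are gone

            Console.WriteLine(a == b);        // True  -- equal values
            Console.WriteLine(b.ToString());  // "42.00" -- the tracked scale shows up
            Console.WriteLine(c.ToString());  // "42"    -- no scale to track
        }
    }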
BTW, these two links provide nice reading about decimal's binary representation:
https://msdn.microsoft.com/en-us/library/system.decimal.getbits.aspx
https://msdn.microsoft.com/en-us/library/system.decimal.aspx