Can someone please explain what's happening below: (I use Python 3.3)
1. >>> Decimal("0.1") + Decimal("0.1") + Decimal("0.1") - Decimal("0.3")
Decimal('0.0')
2. >>> Decimal(0.1) + Decimal(0.1) + Decimal(0.1) - Decimal(0.3)
Decimal('2.775557561565156540423631668E-17')
3. >>> Decimal(0.1 + 0.1 + 0.1 - 0.3)
Decimal('5.5511151231257827021181583404541015625E-17')
In a nutshell, neither 0.1 nor 0.3 can be represented exactly as a binary float:
In [3]: '%.20f' % 0.1
Out[3]: '0.10000000000000000555'
In [4]: '%.20f' % 0.3
Out[4]: '0.29999999999999998890'
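
The representation error also compounds as you add these approximations together (continuing the same session):

In [5]: '%.20f' % (0.1 + 0.1 + 0.1)
Out[5]: '0.30000000000000004441'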
Consequently, when you use the float 0.1 or 0.3 to initialize Decimal(), the resulting value is the exact decimal equivalent of that binary approximation, i.e. it is only approximately 0.1 or 0.3.
Using strings ("0.1" or "0.3") does not have this problem, because a string is parsed as an exact decimal value.
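
You can see the difference directly: Decimal(float) converts the binary value losslessly, digits and all, whereas Decimal(str) parses the literal decimal value (the In [n] numbering below just continues the session for illustration):

In [6]: from decimal import Decimal

In [7]: Decimal(0.1)        # the float's exact binary value, written out in decimal
Out[7]: Decimal('0.1000000000000000055511151231257827021181583404541015625')

In [8]: Decimal("0.1")      # the string is parsed as an exact decimal value
Out[8]: Decimal('0.1')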
Finally, your second example produces a different result from your third example because, even though both involve implicit rounding, they round in different places and to different precisions: in the second, each Decimal operation rounds its result to the context's 28 significant decimal digits, while in the third, the arithmetic happens entirely in binary floats (each step rounded to 53 significant bits) and Decimal() only converts the final float, exactly.
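
You can check both halves of that claim in the same session (again, the In [n] numbers are just for illustration):

In [9]: from decimal import getcontext

In [10]: getcontext().prec     # Decimal results are rounded to this many significant digits
Out[10]: 28

In [11]: 0.1 + 0.1 + 0.1 - 0.3   # pure float arithmetic; the residue exists before Decimal is involved
Out[11]: 5.551115123125783e-17

In the third example, Decimal() merely converts this float residue exactly, which is why its digits match the float's binary value rather than a 28-digit rounded result.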