Using “Decimal” in Python

Question


Can someone please explain what's happening below? (I'm using Python 3.3.)

1. >>> Decimal("0.1") + Decimal("0.1") + Decimal("0.1") - Decimal("0.3")
       Decimal('0.0')

2. >>> Decimal(0.1) + Decimal(0.1) + Decimal(0.1) - Decimal(0.3)
       Decimal('2.775557561565156540423631668E-17')

3. >>> Decimal(0.1 + 0.1 + 0.1 - 0.3)
       Decimal('5.5511151231257827021181583404541015625E-17')

I know it has to do with floating-point limitations; I'd be glad if someone could explain why:

  • What do the quotation marks (" ") have to do with the difference between examples 1 and 2 above?
  • Why does example 2 produce a different answer from example 3, given that neither uses " "?

Answer 1:


In a nutshell, neither 0.1 nor 0.3 can be represented exactly as a float:

In [3]: '%.20f' % 0.1
Out[3]: '0.10000000000000000555'

In [4]: '%.20f' % 0.3
Out[4]: '0.29999999999999998890'

Consequently, when you use the float 0.1 or 0.3 to initialize Decimal(), the resulting Decimal holds the float's exact binary value, which is only approximately 0.1 or 0.3.

Using strings ("0.1" or "0.3") does not have this problem: the string is parsed as an exact decimal value.
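To illustrate (a quick check in a CPython 3.x interpreter; Decimal has accepted floats directly since Python 3.2):

>>> from decimal import Decimal
>>> Decimal(0.1)    # captures the float's exact binary value
Decimal('0.1000000000000000055511151231257827021181583404541015625')
>>> Decimal("0.1")  # the string is parsed as an exact decimal value
Decimal('0.1')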

Finally, your second example produces a different result from your third because the rounding happens in different number systems. In example 2, each float is first converted to Decimal exactly, and the additions and subtraction are then performed in decimal, with each intermediate result rounded to the context's default 28 significant digits. In example 3, the additions and subtraction are performed entirely in binary floating point (53 bits of precision), and only the final float is converted, exactly, to Decimal.
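A minimal sketch of the two rounding regimes, assuming the default decimal context (whose precision is 28 significant digits):

>>> from decimal import Decimal, getcontext
>>> getcontext().prec        # precision used for the Decimal arithmetic in example 2
28
>>> Decimal(0.1) + Decimal(0.1) + Decimal(0.1) - Decimal(0.3)  # rounded in decimal
Decimal('2.775557561565156540423631668E-17')
>>> Decimal(0.1 + 0.1 + 0.1 - 0.3)  # rounded in binary, then converted exactly
Decimal('5.5511151231257827021181583404541015625E-17')
>>> Decimal(0.1 + 0.1 + 0.1 - 0.3) == Decimal(2**-54)  # exactly one ulp at 0.3
True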



Source: https://stackoverflow.com/questions/14572101/using-decimal-in-python
