Python Decimal vs C# decimal precision [duplicate]

Submitted by 梦想的初衷 on 2019-12-02 06:51:44

The main reason it fails in Python is that 4321.90 is interpreted as a float (you lose precision at that point) and only then converted to Decimal at runtime. In C#, 4321.90m is interpreted as a decimal from the start. Python simply has no built-in decimal literal.
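A quick sketch of the failure mode: by the time Decimal receives the value, the float literal has already been rounded to the nearest binary double, and Decimal converts that double exactly, extra digits and all:

```python
from decimal import Decimal

# 4321.90 is parsed as a binary float first; Decimal then converts
# the nearest double exactly, so the rounding error is baked in:
print(Decimal(4321.90) * Decimal(100))   # a long tail of digits, not exactly 432190

# Initializing from the original text avoids the float step entirely:
print(Decimal('4321.90') * Decimal('100'))   # Decimal('432190.00')
```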

But there's an easy way to fix that with Python. Simply use strings:

>>> Decimal('4321.90') * Decimal('100')
Decimal('432190.00')

I'm about to convert the initial value to string

Yes! (but don't do it by calling str - use a string literal)
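One way to see why str is not a reliable substitute for a string literal: if any arithmetic has already happened on the float, str faithfully reports the rounded result, and Decimal cannot recover the value you meant. A small sketch:

```python
from decimal import Decimal

x = 0.1 + 0.2            # binary rounding already happened during this addition
print(Decimal(str(x)))   # Decimal('0.30000000000000004') - str can't undo it
print(Decimal('0.3'))    # the value you actually meant, written as a literal
```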

and do the math with string manipulations

No!

When hardcoding a decimal value into your source code, you should initialize it from a string literal, not a float literal. With 4321.90, floating-point rounding has already occurred, and building a Decimal won't undo that. With "4321.90", Decimal has the original text you wrote available to perform an exact initialization:

Decimal('4321.90')

Floating-point inaccuracy again.

Wrapping the float in Decimal(number) doesn't change anything: the value was already rounded before it reached Decimal.

You can avoid that by passing strings to Decimal, though:

Decimal("4321.90") * Decimal("100")

result:

Decimal('432190.00')

(so Decimal performs the arithmetic in base 10 entirely in software, without ever using the hardware floating-point registers and operations)
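A simple way to observe that software decimal arithmetic, the classic repeated-addition test: summing 0.1 ten times is exact with Decimal but not with binary floats:

```python
from decimal import Decimal

# Binary floats can't represent 0.1 exactly, so the error accumulates:
float_sum = sum(0.1 for _ in range(10))
print(float_sum)   # 0.9999999999999999

# Decimal stores base-10 digits, so the same sum is exact:
dec_sum = sum(Decimal('0.1') for _ in range(10))
print(dec_sum)     # 1.0
```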
