Python's Decimal doesn't support being constructed from a float; it expects that you convert the float to a string first. (Later versions, Python 2.7/3.2 and up, do accept a float directly, but that captures the float's exact binary value rather than the digits that were typed.) This is very inconvenient, since standard string formatting of floats makes you specify a number of decimal places rather than significant digits.
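For example, on a Python recent enough to construct Decimal from a float (a sketch of mine, not part of the original answer), the difference is visible immediately:

>>> from decimal import Decimal
>>> Decimal(0.1)        # the float's exact binary value
Decimal('0.1000000000000000055511151231257827021181583404541015625')
>>> Decimal(str(0.1))   # going through the string keeps the typed digits
Decimal('0.1')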
You said in your question:
"Can someone suggest a good way to convert from float to Decimal preserving value as the user has entered"
But every time the user enters a value, it is entered as a string, not as a float. You are converting it to a float somewhere. Convert it to a Decimal directly instead and no precision will be lost.
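A minimal sketch of that idea (the variable names are mine):

from decimal import Decimal

user_text = input("Enter an amount: ")  # the value arrives as a string, e.g. "1.20"
amount = Decimal(user_text)             # no float in between, so nothing is lost
print(amount)                           # prints exactly what was typed: 1.20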
The "right" way to do this was documented in 1990 by Steele and White's and Clinger's PLDI 1990 papers.
You might also look at this SO discussion about Python Decimal, including my suggestion to try using something like frap to rationalize a float.
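frap is an external tool, but if the goal is simply to recover a simple fraction from a float, Python's standard fractions module does something similar (an illustration of mine, not what the linked discussion used):

from fractions import Fraction

f = 0.1
print(Fraction(f))                      # the float's exact value: 3602879701896397/36028797018963968
print(Fraction(f).limit_denominator())  # nearest "nice" fraction: 1/10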
The main answer is slightly misleading. The g format ignores any leading zeroes after the decimal point, so format(0.012345, ".2g") returns '0.012' - three decimal places. If you need a hard limit on the number of decimal places, use the f formatter: format(0.012345, ".2f") == '0.01'.
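To see the two formatters side by side:

>>> format(0.012345, ".2g")   # two significant digits
'0.012'
>>> format(0.012345, ".2f")   # two decimal places
'0.01'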
Python does support Decimal creation from a float: you just cast it to a string first. And no precision is lost in the string conversion, because the float you are converting never had that extra precision in the first place (otherwise you wouldn't need Decimal).
I think the confusion here is that we can write float literals in decimal notation, but as soon as the interpreter consumes the literal, the internal representation becomes a binary floating-point number.
I suggest this:
>>> import decimal
>>> a = 2.111111
>>> a
2.1111110000000002
>>> str(a)
'2.111111'
>>> decimal.Decimal(str(a))
Decimal('2.111111')
You can use the json module to accomplish it:

import json
from decimal import Decimal

float_value = 123456.2365
# Serialise the float (json.dumps uses the float's shortest repr), then parse it
# back with parse_float=Decimal so the decimal string goes straight to Decimal.
decimal_value = json.loads(json.dumps(float_value), parse_float=Decimal)
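Checking the result (the output below assumes CPython 3, where json.dumps renders the float with its shortest repr):

print(decimal_value)        # 123456.2365
print(type(decimal_value))  # <class 'decimal.Decimal'>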