Question
I need to perform computations at the highest possible precision, regardless of whether the arguments passed are integers, floats, or other numeric types. One way I can think of doing this is:
import numpy as np
def foo(x, y, z):
    a = np.float64(0)
    a = x + y * z
I can see a couple of problems with this: 1) I think I need to convert the inputs, not the result, for this to work; 2) it looks ugly (the first assignment is a superfluous C-style declaration).
How can I pythonically perform all calculations in the highest available precision, and then store the results in the highest available precision (which is IMO numpy.float64)?
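A quick sketch of point 1 (my own illustration, not code from the question): widening only the result cannot restore precision that was already lost in narrower intermediate arithmetic, whereas widening the inputs first preserves it.

```python
import numpy as np

x, y, z = np.float32(1.0), np.float32(1e-8), np.float32(1.0)

# Computed entirely in float32, then widened: the 1e-8 term is already
# gone, because it falls below float32's rounding threshold at 1.0.
late = np.float64(x + y * z)

# Inputs widened first, so the addition itself happens in float64.
early = np.float64(x) + np.float64(y) * np.float64(z)

print(late)   # 1.0
print(early)  # slightly above 1.0 -- the small term survives
```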
Answer 1:
To me the obvious answer is Decimal, unless the function needs to be very fast.
import decimal
# Set the precision to roughly double that of float64, or whatever you want.
decimal.setcontext(decimal.Context(prec=34))

def foo(x, y, z):
    x, y, z = [decimal.Decimal(v) for v in (x, y, z)]
    a = x + y * z
    return a  # you forgot this line in the original code
If you want a conventional 64-bit float, you can just convert the return value: return float(a)
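For instance (my own quick illustration of the trade-off), at 34 significant digits Decimal keeps a term that the final float64 conversion then discards:

```python
from decimal import Decimal, Context, setcontext

# 34 significant digits is roughly double float64's ~16.
setcontext(Context(prec=34))

# The 1e-20 term fits comfortably within 34 digits...
a = Decimal(1) + Decimal("1e-20")
print(a)         # 1.00000000000000000001

# ...but is far below float64's resolution, so it vanishes here.
print(float(a))  # 1.0
```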
Answer 2:
You can't declare a variable's type in Python, but you can coerce the arguments to the type you expect:
import numpy as np
def foo(*args):
    x, y, z = map(np.longdouble, args)
    return x + y * z
foo(0.000001,0.000001, 0.00000000001)
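One caveat worth adding (my note, not part of the original answer): how much precision np.longdouble actually provides is platform-dependent — typically 80-bit extended precision on x86 Linux builds of NumPy, but identical to float64 on some other platforms — so it is worth checking np.finfo before relying on the extra digits:

```python
import numpy as np

def foo(*args):
    x, y, z = map(np.longdouble, args)
    return x + y * z

# np.finfo reports the machine epsilon for this build's longdouble;
# if it equals float64's eps, longdouble buys you nothing here.
print(np.finfo(np.longdouble).eps)
print(np.finfo(np.float64).eps)

print(foo(0.000001, 0.000001, 0.00000000001))
```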
Source: https://stackoverflow.com/questions/16892669/how-to-perform-precise-calculations-in-python-regardless-of-input-types