Everybody knows, or at least every programmer should know, that using the float
type can lead to precision errors. However, in some cases an exact solution is needed.
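For example, a classic illustration in Python (the trailing digits are an artifact of binary floating point):

    >>> 0.1 + 0.2
    0.30000000000000004
    >>> 0.1 + 0.2 == 0.3
    False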
So, my question is: is there a way to have a Decimal type with infinite precision?
No, since storing an irrational number would require infinite memory.
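As a quick sketch with Python's decimal module (the precision value chosen here is arbitrary): you can raise the context precision as high as you like, but an irrational value such as sqrt(2) is still cut off at that many digits.

    from decimal import Decimal, getcontext

    getcontext().prec = 50              # 50 significant digits: large, but still finite
    root = Decimal(2).sqrt()            # only the first 50 digits of sqrt(2)
    print(root)
    print(root * root)                  # 1.999...9, not exactly 2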
Where Decimal is useful is in representing things like monetary amounts, where the values need to be exact and the precision is known a priori. From the question, it is not entirely clear that Decimal is more appropriate for your use case than float.
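As a sketch of that monetary use case (the price and tax rate are made up for illustration): you fix the precision up front, build the values from strings rather than floats, and round explicitly when you need to.

    from decimal import Decimal, ROUND_HALF_UP

    CENTS = Decimal("0.01")           # precision chosen a priori: two decimal places

    price = Decimal("19.99")          # construct from strings so nothing is lost
    quantity = 3
    tax_rate = Decimal("0.21")        # hypothetical 21% tax, purely for illustration

    net = price * quantity                                         # 59.97, exact
    gross = (net * (1 + tax_rate)).quantize(CENTS, ROUND_HALF_UP)  # 72.5637 -> 72.56
    print(net, gross)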