Java Double value = 0.01 changes to 0.009999999999999787 [duplicate]

◇◆丶佛笑我妖孽 submitted on 2019-11-28 13:24:37
Amir Raminfar

Using double for currency is a bad idea; see Why not use Double or Float to represent currency?. I recommend using BigDecimal or doing every calculation in cents.
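For illustration, here is a minimal sketch (class and variable names are my own) showing the drift when 0.01 is accumulated as a double, next to the exact result you get from BigDecimal built with the String constructor:

    import java.math.BigDecimal;

    public class CurrencyDrift {
        public static void main(String[] args) {
            double d = 0.0;
            for (int i = 0; i < 10; i++) {
                d += 0.01;                             // each step rounds to the nearest double
            }
            System.out.println(d);                     // prints something like 0.09999999999999999

            BigDecimal b = BigDecimal.ZERO;
            BigDecimal cent = new BigDecimal("0.01");  // String constructor keeps 0.01 exact
            for (int i = 0; i < 10; i++) {
                b = b.add(cent);
            }
            System.out.println(b);                     // 0.10
        }
    }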

0.01 does not have an exact representation in binary floating point (and neither do 0.1 or 0.2, for that matter).

You should probably do all your maths with integer types, representing the number of pennies.

doubles aren't kept in decimal internally, but in binary. Their storage format is equivalent to something like "binary 100101 multiplied by a power of two such as binary 10000" (I'm simplifying, but that's the basic idea). Unfortunately, no combination of binary significand and power of two works out to exactly the decimal value 0.01, which is what the other answers mean when they say that floating-point numbers aren't 100% accurate, or that 0.01 doesn't have an exact representation in floating point.
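You can see the stored value directly. This little sketch (my own, not from the question) prints the exact decimal expansion of the double nearest to 0.01 and its raw bit pattern:

    import java.math.BigDecimal;

    public class ShowBits {
        public static void main(String[] args) {
            // new BigDecimal(double) converts the exact binary value to decimal
            System.out.println(new BigDecimal(0.01));
            // prints 0.01000000000000000020816681711721685132943093776702880859375

            // the raw sign/exponent/significand bits of that double
            System.out.println(Long.toBinaryString(Double.doubleToLongBits(0.01)));
        }
    }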

There are various ways of dealing with this problem, some more complicated than others. The best solution in your case is probably to use ints everywhere and keep the values in cents.
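A minimal sketch of that approach (the amounts and the 7% tax rate are just illustrative): keep amounts as whole cents in a long, do exact integer arithmetic, and only format as dollars for display.

    public class CentsExample {
        public static void main(String[] args) {
            long priceCents = 1999;                         // $19.99
            long taxCents = Math.round(priceCents * 0.07);  // 7% tax, rounded once
            long totalCents = priceCents + taxCents;
            System.out.printf("Total: $%d.%02d%n", totalCents / 100, totalCents % 100);
            // Total: $21.39
        }
    }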

As the others already said, do not use doubles for financial calculations.

This paper http://download.oracle.com/docs/cd/E19957-01/806-3568/ncg_goldberg.html (What Every Computer Scientist Should Know About Floating-Point Arithmetic) is a must-read to understand floating point math in computers.

Floating point numbers are never 100% accurate (not quite true; see the comments below). You should never compare them directly. Watch out for integer rounding as well. The best way to do this would probably be to work in cents and convert to dollars later (1 dollar == 100 cents). Note that by simply converting a double to an integer you are losing precision.
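If you do have to compare doubles, compare within a tolerance instead of using ==. The epsilon below is an illustrative choice, not a universal constant:

    public class NearlyEqual {
        static final double EPSILON = 1e-9;

        static boolean nearlyEqual(double a, double b) {
            return Math.abs(a - b) < EPSILON;
        }

        public static void main(String[] args) {
            double sum = 0.1 + 0.2;
            System.out.println(sum == 0.3);            // false
            System.out.println(nearlyEqual(sum, 0.3)); // true
        }
    }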

It's a float (double).

You should not use it to compute money.

I recommend using int values and operating on pennies.

This is a problem that's arisen many times over. The bottom line is that on a computer that uses binary floating point (which Java requires), only fractions in which the denominator is a power of 2 can be represented precisely.

The same problem arises in decimal. 1/3, for example, turns into 0.3333333..., because 3 isn't a factor of 10 (the base we're using in decimal). Likewise 1/17, 1/19, etc.

In binary floating point, the same basic problem arises. The main difference is that in decimal, since 5 is a factor of 10, 1/5 can be represented precisely (and so can multiples of 1/5). Since 5 is not a factor of 2, 1/5 cannot be represented precisely in binary floating point.

Contrary to popular belief, however, some fractions can be represented precisely, specifically those fractions whose denominators have only 2 as a prime factor (e.g., 1/8 or 1/256 can be represented precisely).
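A quick check of that claim (a throwaway sketch of my own): sums of eighths compare exactly, while sums of fifths drift by one unit in the last place.

    public class ExactFractions {
        public static void main(String[] args) {
            // 1/8 has a power-of-two denominator, so 0.125 and 0.375 are both exact
            System.out.println(0.125 + 0.125 + 0.125 == 0.375);  // true

            // 1/5 does not, so rounding error shows up
            System.out.println(0.2 + 0.2 + 0.2);                 // 0.6000000000000001
        }
    }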

I'm sure you know that some fractions' decimal representations terminate (e.g., 0.01) while some don't (e.g., 2/3 = 0.66666...). The thing is that which fractions terminate depends on what base you're in; in particular, 0.01 doesn't terminate in binary, so even though double provides a lot of precision it can't represent 0.01 exactly. As others said, using BigDecimal or fixed-point integer computations (converting everything to cents) is probably best for currency. To learn more about floating point, you could start at The Floating-Point Guide - What Every Programmer Should Know About Floating-Point Arithmetic.
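To round out the BigDecimal suggestion, here is a hedged sketch of a typical money calculation (the values and the HALF_UP choice are only for illustration): build amounts from Strings and round explicitly to two decimal places where a money result is needed.

    import java.math.BigDecimal;
    import java.math.RoundingMode;

    public class BigDecimalMoney {
        public static void main(String[] args) {
            BigDecimal price = new BigDecimal("19.99");
            BigDecimal taxRate = new BigDecimal("0.07");
            // multiply gives 1.3993; round it to a money amount explicitly
            BigDecimal tax = price.multiply(taxRate).setScale(2, RoundingMode.HALF_UP);
            System.out.println(price.add(tax));  // 21.39
        }
    }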
