new BigDecimal(double) vs new BigDecimal(String) [duplicate]

Submitted by 对着背影说爱祢 on 2019-12-17 16:48:06

Question


When a BigDecimal is constructed from a double and when it is constructed from a String, the results differ.

BigDecimal a = new BigDecimal(0.333333333);
BigDecimal b = new BigDecimal(0.666666666);

BigDecimal c = new BigDecimal("0.333333333");
BigDecimal d = new BigDecimal("0.666666666");

BigDecimal x = a.multiply(b);
BigDecimal y = c.multiply(d);

System.out.println(x);
System.out.println(y);

x outputs as

0.222222221777777790569747304508155316795087227497352441864147715340493949298661391367204487323760986328125

while y is

0.222222221777777778

Am I wrong in saying that this is because of double imprecision? But since this is a BigDecimal, shouldn't it be the same?


Answer 1:


Am I wrong in saying that this is because of double imprecision?

You are absolutely right, this is exactly because of double's imprecision.

But since this is a BigDecimal, shouldn't it be the same?

No, it shouldn't. The error is introduced the moment you create new BigDecimal(0.333333333), because the constant 0.333333333 already has an error embedded in it. At that point there is nothing you can do to fix the representation error: the proverbial horse has already left the barn, and it's too late to close the door.

When you pass a String, on the other hand, the decimal representation matches the string exactly, so you get a different result.
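To illustrate, here is a minimal sketch comparing the two constructors, plus BigDecimal.valueOf(double), which converts via Double.toString and therefore yields the short decimal form most people expect (class name is arbitrary):

```java
import java.math.BigDecimal;

public class CtorComparison {
    public static void main(String[] args) {
        // String constructor: the decimal digits are preserved exactly.
        BigDecimal c = new BigDecimal("0.333333333");
        BigDecimal d = new BigDecimal("0.666666666");
        System.out.println(c.multiply(d)); // 0.222222221777777778

        // If you already hold a double, BigDecimal.valueOf(double) goes
        // through Double.toString, which returns the shortest decimal
        // string that round-trips to the same double:
        System.out.println(BigDecimal.valueOf(0.333333333)); // 0.333333333

        // The double constructor preserves the binary approximation instead,
        // producing a long tail of unexpected digits:
        System.out.println(new BigDecimal(0.333333333));
    }
}
```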




Answer 2:


Yes, this is floating-point error. The problem is that the literals 0.333333333 and 0.666666666 are represented as doubles before they reach BigDecimal: because the argument is a double, the compiler selects the BigDecimal(double) constructor overload.

This is backed by the Java Language Specification, which says that a floating-point literal is of type double unless it is suffixed with f or F.




Answer 3:


The Java docs have the answer. According to the Javadoc for BigDecimal(double val):

The results of this constructor can be somewhat unpredictable. One might assume that writing new BigDecimal(0.1) in Java creates a BigDecimal which is exactly equal to 0.1 (an unscaled value of 1, with a scale of 1), but it is actually equal to 0.1000000000000000055511151231257827021181583404541015625. This is because 0.1 cannot be represented exactly as a double.
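The Javadoc's claim can be verified directly: constructing from the double 0.1 and from the string "0.1" produces two BigDecimals that are not equal, and printing the first one reveals the full binary approximation quoted above (class name is arbitrary):

```java
import java.math.BigDecimal;

public class JavadocCheck {
    public static void main(String[] args) {
        BigDecimal fromDouble = new BigDecimal(0.1);
        BigDecimal fromString = new BigDecimal("0.1");

        // Not equal: the double constructor captured the binary
        // approximation of 0.1, not the decimal literal itself.
        System.out.println(fromDouble.equals(fromString)); // false

        // The exact value of the double nearest to 0.1:
        System.out.println(fromDouble);
        // 0.1000000000000000055511151231257827021181583404541015625
    }
}
```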




Answer 4:


When you assign a decimal literal to a double, in most cases the stored value is not exactly what you wrote, but the closest representable binary value. By passing a double to the constructor, you have already introduced that small imprecision.
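A quick way to see this: printing a double normally hides the error, because Double.toString rounds to the shortest decimal string that round-trips to the same bits, while converting the same double to a BigDecimal exposes the full stored value (class name is arbitrary):

```java
import java.math.BigDecimal;

public class HiddenError {
    public static void main(String[] args) {
        double d = 0.333333333;

        // Double.toString rounds back to the short form, hiding the error:
        System.out.println(d); // 0.333333333

        // new BigDecimal(double) preserves the exact binary value,
        // revealing that the stored double is not 0.333333333 at all:
        System.out.println(new BigDecimal(d));
    }
}
```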



Source: https://stackoverflow.com/questions/29632454/new-bigdecimaldouble-vs-new-bigdecimalstring
