Double vs. BigDecimal?

梦谈多话 2020-11-22 02:19

I have to calculate some floating-point values, and my colleague suggested that I use BigDecimal instead of double, since it will be more precise. Bu

6 Answers
  •  一整个雨季
    2020-11-22 02:40

BigDecimal is Java's arbitrary-precision decimal class, part of the java.math package in the standard library. It is useful for a variety of applications ranging from the financial to the scientific (which is roughly where I am).

There's nothing wrong with using doubles for certain calculations. Suppose, however, you wanted to calculate Math.PI * Math.PI / 6, that is, the value of the Riemann zeta function for a real argument of two (a project I'm currently working on). Floating-point arithmetic presents you with a painful problem of rounding error.
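As a small illustration of the point above (not from the original answer), here is what zeta(2) = pi^2 / 6 looks like in plain double arithmetic, compared against summing the defining series 1/n^2. The residual gap is dominated by the omitted tail of the series, with floating-point rounding contributing underneath:

```java
public class ZetaTwo {
    public static void main(String[] args) {
        // zeta(2) = pi^2 / 6; each double operation rounds to 53 bits
        double zeta2 = Math.PI * Math.PI / 6; // approximately 1.6449340668...

        // Partial sum of 1/n^2 up to one million terms
        double sum = 0.0;
        for (int n = 1; n <= 1_000_000; n++) {
            sum += 1.0 / ((double) n * n);
        }

        // The gap is roughly 1/N from series truncation, plus the
        // accumulated rounding error of a million double additions
        System.out.println(zeta2 - sum);
    }
}
```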

BigDecimal, on the other hand, includes many options for calculating expressions to arbitrary precision. The add, multiply, and divide methods described in the Oracle documentation below take the place of +, *, and / in the BigDecimal world:

    http://docs.oracle.com/javase/7/docs/api/java/math/BigDecimal.html
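A minimal sketch of those three methods (my example, not the answerer's). Note that divide needs an explicit MathContext (or scale and rounding mode) when the quotient does not terminate, such as 2/3; without one it throws ArithmeticException:

```java
import java.math.BigDecimal;
import java.math.MathContext;

public class BigDecimalArithmetic {
    public static void main(String[] args) {
        MathContext mc = new MathContext(50); // 50 significant digits
        BigDecimal a = new BigDecimal("2");
        BigDecimal b = new BigDecimal("3");

        System.out.println(a.add(b));        // 5
        System.out.println(a.multiply(b));   // 6
        // 2/3 is a non-terminating decimal, so a precision must be given:
        System.out.println(a.divide(b, mc)); // 0.6666...7 to 50 digits
    }
}
```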

    The compareTo method is especially useful in while and for loops.
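A sketch of why compareTo matters in loops (my example, under the usual caveat that compareTo ignores scale while equals does not): accumulating an exact decimal step with BigDecimal terminates exactly where you expect, whereas the same loop over doubles can run an extra iteration because 0.1 has no exact binary representation:

```java
import java.math.BigDecimal;

public class CompareToLoop {
    public static void main(String[] args) {
        BigDecimal step = new BigDecimal("0.1");
        BigDecimal limit = new BigDecimal("1.0");
        BigDecimal x = BigDecimal.ZERO;
        int iterations = 0;
        // compareTo returns a negative, zero, or positive int;
        // it compares values, ignoring scale ("0.10" equals "0.1" here)
        while (x.compareTo(limit) < 0) {
            x = x.add(step);
            iterations++;
        }
        System.out.println(iterations); // exactly 10
        // The equivalent double loop (x < 1.0, x += 0.1) runs 11 times,
        // because ten double additions of 0.1 land just below 1.0
    }
}
```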

    Be careful, however, in your use of constructors for BigDecimal. The string constructor is very useful in many cases. For instance, the code

    BigDecimal onethird = new BigDecimal("0.33333333333");

    stores a truncation of the infinitely repeating decimal expansion of 1/3, exact to the number of digits you wrote. The remaining truncation error is usually small enough not to disturb practical calculations, but I have, from personal experience, seen round-off creep up. The setScale method is important in this regard, as can be seen from the Oracle documentation.
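To make the constructor caveat concrete (my sketch, not part of the answer): the double constructor captures the binary approximation of the literal, while the string constructor captures exactly the digits you wrote, and setScale rounds to a fixed number of decimal places under an explicit policy:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class ConstructorPitfall {
    public static void main(String[] args) {
        // Double constructor: the exact binary value of the double 0.1
        System.out.println(new BigDecimal(0.1));
        // prints 0.1000000000000000055511151231257827021181583404541015625

        // String constructor: exactly the decimal you wrote
        System.out.println(new BigDecimal("0.1")); // 0.1

        // setScale trims (or pads) to a fixed scale with explicit rounding
        BigDecimal third =
            BigDecimal.ONE.divide(new BigDecimal(3), 20, RoundingMode.HALF_UP);
        System.out.println(third.setScale(5, RoundingMode.HALF_UP)); // 0.33333
    }
}
```

For this reason the String constructor (or BigDecimal.valueOf(double), which goes through the double's canonical string form) is generally preferred over new BigDecimal(double).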
