We know that using double for currency is error-prone and not recommended. However, I'm yet to see a realistic example where double actually fails and BigDecimal is needed.
Suppose you have an amount of 1000000000001.5 (it is in the 1e12 range), and you have to calculate 117% of it.
In double, it becomes 1170000000001.7549 (this number is already imprecise). Then apply your rounding algorithm, and it becomes 1170000000001.75.
In precise arithmetic, it becomes 1170000000001.7550, which is rounded to 1170000000001.76. Ouch, you lost 1 cent.
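Here is a minimal sketch of that calculation in Java (the class name and the `Math.round`-to-cents step are my own choices for illustration; `BigDecimal` and `RoundingMode` are the standard `java.math` classes):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class CentLossDemo {
    public static void main(String[] args) {
        // double path: the stored product is 1170000000001.7548828125
        // (printed as ...7549 at 17 significant digits), already below the true 1170000000001.755
        double d = 1000000000001.5 * 1.17;
        System.out.println(new BigDecimal(d));          // exact value of the double product
        long doubleCents = Math.round(d * 100);         // a common "round to cents" step
        System.out.println(doubleCents);                // 117000000000175, i.e. ...001.75

        // exact decimal path: the product keeps its true value and rounds up
        BigDecimal exact = new BigDecimal("1000000000001.5")
                .multiply(new BigDecimal("1.17"));
        System.out.println(exact);                      // 1170000000001.755
        System.out.println(exact.setScale(2, RoundingMode.HALF_UP)); // 1170000000001.76
    }
}
```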
I think this is a realistic example where double is inferior to precise arithmetic.
Sure, you can fix this somehow (you could even implement BigDecimal-style arithmetic on top of doubles, so in a way double can be used for everything and still be precise), but what's the point?
You can use double for integer arithmetic as long as the numbers stay below 2^53. If you can express your math within these constraints, the calculation will be precise (division needs special care, of course). As soon as you leave this territory, your calculations can become imprecise.
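A small sketch of that boundary (the class name is hypothetical; 2^53 = 9007199254740992):

```java
public class ExactIntegerLimit {
    public static void main(String[] args) {
        double limit = 9007199254740992.0;          // 2^53: up to here every integer is exact

        System.out.println(limit - 1 + 1 == limit); // true: below 2^53, integer arithmetic is exact
        System.out.println(limit + 1 == limit);     // true: 2^53 + 1 is not representable and rounds back
    }
}
```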
As you can see, 53 bits is not enough; double is not enough. But if you store money as a decimal fixed-point number (I mean, store money*100 if you need cent precision), then 64 bits might be enough, so a 64-bit integer can be used instead of BigDecimal.
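For example, a rough sketch of that fixed-point idea (the variable names and the half-up rounding step are my own; this only handles non-negative amounts):

```java
public class LongCentsSketch {
    public static void main(String[] args) {
        // the example amount, stored as cents in a 64-bit long
        long amountCents = 100_000_000_000_150L;    // 1000000000001.50

        // 117% with half-up rounding, entirely in integer arithmetic
        long scaled = amountCents * 117;            // 11700000000017550, far below 2^63
        long resultCents = (scaled + 50) / 100;     // half-up rounding for non-negative values

        System.out.println(resultCents);            // 117000000000176, i.e. 1170000000001.76
    }
}
```

With negative amounts or other rounding modes the last step needs a bit more care, but the arithmetic itself stays exact as long as the intermediate values fit in 64 bits.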