I came to know about the accuracy issues when I executed the following program:
public static void main(String[] args)
{
    double[] table = new double[10];
    for (int i = 0; i < table.length; i++)
        System.out.println(table[i] = i * 0.1);   // prints 0.30000000000000004 for i == 3, not 0.3
}
This is inherent in using floating-point numbers, in any language. Actually, it's inherent in using any representation with a fixed maximum precision.
There are several solutions. One is to use an extended-precision math package -- BigDecimal is often suggested for Java. BigDecimal can handle many more digits of precision, and -- because it's a decimal representation rather than a binary one -- it tends to round off in ways that are less surprising to humans who are used to working in base 10. (That doesn't necessarily make the results more correct, please note. Binary can't represent 1/3 exactly, but neither can decimal.)
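As a quick sketch of the difference (the class name, scale, and rounding mode here are just illustrative choices):

import java.math.BigDecimal;
import java.math.RoundingMode;

public class DecimalDemo
{
    public static void main(String[] args)
    {
        System.out.println(0.1 + 0.2);                                          // 0.30000000000000004 -- binary roundoff
        System.out.println(new BigDecimal("0.1").add(new BigDecimal("0.2")));   // 0.3 exactly

        // Division still needs an explicit scale and rounding mode,
        // because 1/3 has no finite decimal expansion either.
        System.out.println(BigDecimal.ONE.divide(new BigDecimal("3"), 10, RoundingMode.HALF_UP));  // 0.3333333333
    }
}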
There are also extended-precision binary floating-point representations. Java directly supports float and double (which are usually also supported by the hardware), but it's possible to write versions which support more digits of accuracy.
Of course any of the extended-precision packages will slow down your computations. So you shouldn't resort to them unless you actually need them.
Another approach is to use fixed point rather than floating point. For example, the standard solution for most financial calculations is simply to compute in terms of the smallest unit of currency -- pennies, in the US -- as integers, converting to and from the display format (e.g. dollars and cents) only for I/O. That's also the approach used for time in Java -- the internal clock reports an integer number of milliseconds (or nanoseconds, if you use the nanoTime call), which gives both more than sufficient precision and a more than sufficient range of values for most practical purposes. Again, this means that roundoff tends to happen in a way that matches human expectations... and again, that's less about accuracy than about not surprising the users. And these representations, because they're processed as ints or longs, allow fast computation -- faster than floating point, in fact.
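A minimal sketch of the pennies idea (the 8.25% tax rate and the names are just made up for illustration):

public class Pennies
{
    // Format a whole number of cents for display, e.g. 2164 -> "$21.64".
    static String format(long cents)
    {
        return String.format("$%d.%02d", cents / 100, Math.abs(cents % 100));
    }

    public static void main(String[] args)
    {
        long price = 1999;                          // $19.99, held as 1999 cents
        long tax   = (price * 825 + 5000) / 10000;  // 8.25% tax, rounded to the nearest cent
        System.out.println(format(price + tax));    // $21.64 -- no floating point anywhere
    }
}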
There are yet other solutions which involve computing in rational numbers, or other variations, in an attempt to compromise between computational cost and precision.
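A bare-bones sketch of the rational-number idea (illustrative only; a real library would also handle signs, zero denominators, and overflow):

import java.math.BigInteger;

final class Rational
{
    final BigInteger num, den;

    Rational(long n, long d) { this(BigInteger.valueOf(n), BigInteger.valueOf(d)); }

    Rational(BigInteger n, BigInteger d)
    {
        BigInteger g = n.gcd(d);          // reduce to lowest terms on every construction
        num = n.divide(g);
        den = d.divide(g);
    }

    Rational add(Rational o)
    {
        return new Rational(num.multiply(o.den).add(o.num.multiply(den)), den.multiply(o.den));
    }

    public String toString() { return num + "/" + den; }
}

With that, new Rational(1, 3).add(new Rational(1, 6)) prints 1/2 exactly, at the cost of a gcd on every operation and numerators and denominators that can grow without bound -- which is exactly the cost/precision trade-off mentioned above.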
But I also have to ask... Do you really NEED more precision than float is giving you? I know the roundoff is surprising, but in many cases it's perfectly acceptable to just let it happen, possibly rounding off to a less surprising number of fractional digits when you display the results to the user. In many cases, float or double are Just Fine for real-world use. That's why the hardware supports them, and that's why they're in the language.
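For that last case, rounding only at display time is usually all it takes; for example:

double total = 0;
for (int i = 0; i < 10; i++)
    total += 0.1;
System.out.println(total);            // 0.9999999999999999
System.out.printf("%.2f%n", total);   // 1.00 -- round only when showing it to the user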