In a loop I am adding 0.10 until I reach the desired number, and getting the index. This is my code:
private static int getIndexOfUnits(float units) {
    int index = -1;
    float addup = 0.10f;
    for (float i = 1.00f; i < units; i = i + addup) {
        index++;
        System.out.println("I = " + i + " Index = " + index);
    }
    return index;
}
If the units passed is 5.7, the output I see is:
I = 1.0 Index = 0
I = 1.1 Index = 1
I = 1.2 Index = 2
I = 1.3000001 Index = 3
I = 1.4000001 Index = 4
I = 1.5000001 Index = 5
I = 1.6000001 Index = 6
I = 1.7000002 Index = 7
I = 1.8000002 Index = 8
I = 1.9000002 Index = 9
I = 2.0000002 Index = 10
I = 2.1000001 Index = 11
I = 2.2 Index = 12
I = 2.3 Index = 13
I = 2.3999999 Index = 14
I = 2.4999998 Index = 15
I = 2.5999997 Index = 16
I = 2.6999996 Index = 17
I = 2.7999995 Index = 18
I = 2.8999994 Index = 19
I = 2.9999993 Index = 20
I = 3.0999992 Index = 21
I = 3.199999 Index = 22
I = 3.299999 Index = 23
I = 3.399999 Index = 24
I = 3.4999988 Index = 25
I = 3.5999987 Index = 26
I = 3.6999986 Index = 27
I = 3.7999985 Index = 28
I = 3.8999984 Index = 29
I = 3.9999983 Index = 30
I = 4.0999985 Index = 31
I = 4.1999984 Index = 32
I = 4.2999983 Index = 33
I = 4.399998 Index = 34
I = 4.499998 Index = 35
I = 4.599998 Index = 36
I = 4.699998 Index = 37
I = 4.799998 Index = 38
I = 4.8999977 Index = 39
I = 4.9999976 Index = 40
I = 5.0999975 Index = 41
I = 5.1999974 Index = 42
I = 5.2999973 Index = 43
I = 5.399997 Index = 44
I = 5.499997 Index = 45
I = 5.599997 Index = 46
I = 5.699997 Index = 47
If units is a bigger number like 18.90 or 29.90, it gives the wrong index. The index is normally 1 less than it should be. Initially exactly 0.10 was added each time, but after 2.3 the value becomes 2.39999... when 0.10 is added to it. I believe this is a matter of precision. How do I handle it and make sure I get the right index for big numbers as well, regardless of whether I use float or double?
Any ideas?
From the Floating-Point Guide:
Why don’t my numbers, like 0.1 + 0.2 add up to a nice round 0.3, and instead I get a weird result like 0.30000000000000004?
Because internally, computers use a format (binary floating-point) that cannot accurately represent a number like 0.1, 0.2 or 0.3 at all.
When the code is compiled or interpreted, your “0.1” is already rounded to the nearest number in that format, which results in a small rounding error even before the calculation happens.
You cannot use float or double if you need numbers to add up exactly. Use BigDecimal instead.
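As a sketch of that advice applied to the question's method (with two assumptions labeled in the comments: the String constructor is used, and the boundary step is counted, since the question's own output for 5.7 treats index 47 as correct):

```java
import java.math.BigDecimal;

public class ExactIndex {
    // The question's loop rewritten with BigDecimal. Two details matter:
    // 1) construct from the String "0.10", not the double 0.10 -- the double
    //    constructor would copy the binary rounding error into the BigDecimal;
    // 2) the original float loop effectively counts the step where i equals
    //    units (that is how 5.7 yields index 47), so <= is used here.
    static int getIndexOfUnits(BigDecimal units) {
        int index = -1;
        BigDecimal addup = new BigDecimal("0.10");
        for (BigDecimal i = new BigDecimal("1.00");
             i.compareTo(units) <= 0;
             i = i.add(addup)) {
            index++;
        }
        return index;
    }

    public static void main(String[] args) {
        System.out.println(getIndexOfUnits(new BigDecimal("5.7")));   // 47
        System.out.println(getIndexOfUnits(new BigDecimal("18.90"))); // 179
    }
}
```

Note that `compareTo` is used rather than `equals`, because `equals` on BigDecimal also compares scale (5.7 vs. 5.70).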
The problem is that the format float uses can't represent all decimal numbers, so precision is sometimes lost.
Use BigDecimal instead.
Exact representation is not available with the float datatype.
BigDecimal will help you. The example below might be helpful:
BigDecimal decimal = new BigDecimal("10.0");
for (int i = 0; i < 10; i++) {
    // Use the String constructor: new BigDecimal(.1) would copy the
    // binary rounding error of the double literal into the BigDecimal.
    decimal = decimal.add(new BigDecimal("0.1"));
    System.out.println(decimal.floatValue());
}
You shouldn't use floats for this kind of thing (indexing/iterating).
Try calculating I fresh on every iteration:
I = 1.0f + (i * addup);
and you won't accumulate floating-point error.
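Taking that idea one step further (a sketch, not part of the answer above): since the k-th value is just 1.0f + k * addup, the index can be computed in one step, with a single round() absorbing the one-off representation error. This assumes, as in the question's examples, that units always lands on a 0.1 boundary:

```java
public class DirectIndex {
    // Derive the index arithmetically instead of adding 0.1f repeatedly.
    // (units - 1.0f) / 0.1f is within rounding error of the true step
    // count, so one round() recovers the exact integer.
    static int getIndexOfUnits(float units) {
        return Math.round((units - 1.0f) / 0.1f);
    }

    public static void main(String[] args) {
        System.out.println(getIndexOfUnits(5.7f));   // 47
        System.out.println(getIndexOfUnits(18.9f));  // 179
        System.out.println(getIndexOfUnits(29.9f));  // 289
    }
}
```

Because no value is ever accumulated, the error stays bounded by one representation error no matter how large units gets.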
I think this may be what you're looking for:
Java provides a class called DecimalFormat (imported with import java.text.DecimalFormat). You construct it like this:
DecimalFormat myFormat = new DecimalFormat("0.0");
It takes a String argument where you specify how you want the formatting to be displayed.
Here's how you can apply it to your code:
private static DecimalFormat myFormat;

private static int getIndexOfUnits(float units) {
    myFormat = new DecimalFormat("0.0");
    int index = -1;
    float addup = 0.10f;
    for (float i = 1.00f; i < units; i = i + addup) {
        index++;
        System.out.println("I = " + myFormat.format(i) + " Index = " + index);
    }
    return index;
}
In the println you can see that DecimalFormat's format() method is called on the float i through the myFormat reference; this is where the formatting takes place.
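Keep in mind that DecimalFormat only changes how the float is printed; the stored value keeps its rounding error, so the loop still runs the same number of times. A minimal demonstration (the Locale.US symbols are pinned here only so the decimal separator is always a dot):

```java
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.util.Locale;

public class FormatDemo {
    public static void main(String[] args) {
        DecimalFormat myFormat =
            new DecimalFormat("0.0", new DecimalFormatSymbols(Locale.US));
        // The underlying floats still carry the rounding error;
        // only their printed representation is rounded to one decimal.
        System.out.println(myFormat.format(1.3000001f)); // 1.3
        System.out.println(myFormat.format(2.3999999f)); // 2.4
    }
}
```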
Source: https://stackoverflow.com/questions/10481156/double-or-float-datatype-doesnt-addup-properly-in-a-loop