I am trying to convert a decimal number into a fraction. The decimal numbers will have at most 4 digits after the decimal point. Example: 12.34 = 1234/100, 12.3456 = 123456/10000.
    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double a = 12.34;
        int c = 10000;                  /* 4 digits after the decimal point */
        double b = (a - floor(a)) * c;  /* fractional part scaled up to an integer */
        int d = (int)floor(a) * c + (int)(b + 0.5); /* round instead of truncating */
        printf("%f %d\n", b, d);
        /* strip common trailing zeros from numerator and denominator */
        while (d % 10 == 0 && c > 1) {
            d = d / 10;
            c = c / 10;
        }
        printf("%d/%d\n", d, c);
        return 0;
    }
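On a typical IEEE 754 system this is expected to print 3400.000000 123400 on the first line and 1234/100 on the second.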
The problem is that b was printed as 3400.00, but (int)b gave you 3399, so you need to add 0.5 before truncating so the result comes out as 3400.

Seeing 3400.00 in the output is different from b actually holding 3400: printf rounds the value for display, so the stored number can be slightly below 3400 (something like 3399.9999...). A cast to int truncates toward zero, which is why (int)b produces 3399. Adding 0.5 first pushes the value past 3400, and truncation then yields 3400.
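Here is a minimal sketch that makes the hidden difference visible; the exact digits depend on your platform's double format, so the values in the comments are illustrative:

    #include <stdio.h>

    int main(void) {
        double a = 12.34;                /* not exactly representable in binary */
        double b = (a - 12.0) * 10000.0; /* same computation as in the question */
        printf("%f\n", b);               /* 3400.000000: printf rounds for display */
        printf("%.10f\n", b);            /* roughly 3399.9999999999: the stored value */
        printf("%d\n", (int)b);          /* 3399: the cast truncates toward zero */
        printf("%d\n", (int)(b + 0.5));  /* 3400: adding 0.5 makes truncation round */
        return 0;
    }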
If you want to acquire a deeper understanding of floating-point arithmetic, read What Every Computer Scientist Should Know About Floating-Point Arithmetic.