I am trying to figure out how to print floating point numbers without using library functions. Printing the decimal part of a floating point number turned out to be quite easy.
Let's explain this one more time. After the integer part has been printed exactly, with no rounding other than truncation toward zero, it's time for the fraction bits.
Start with a string of bytes (say 100 for starters) containing binary zeros. If the first bit to the right of the binary point in the fp value is set, that means 0.5 (2^-1, or 1/(2^1)) is a component of the fraction, so add 5 to the first byte. If the next bit is set, 0.25 (2^-2, or 1/(2^2)) is part of the fraction: add 5 to the second byte and add 2 to the first (and don't forget the carries, they happen - lower school math). The next bit set means 0.125, so add 5 to the third byte, 2 to the second and 1 to the first. And so on:
        bit value    accumulated digit string (every bit set so far)
start   0            0000000000000000000 ...
bit 1   0.5          5000000000000000000 ...
bit 2   0.25         7500000000000000000 ...
bit 3   0.125        8750000000000000000 ...
bit 4   0.0625       9375000000000000000 ...
bit 5   0.03125      9687500000000000000 ...
bit 6   0.015625     9843750000000000000 ...
bit 7   0.0078125    9921875000000000000 ...
bit 8   0.00390625   9960937500000000000 ...
bit 9   0.001953125  9980468750000000000 ...
...
I did this by hand, so I may have missed something, but implementing it in code is trivial - a sketch follows below.
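Here is a minimal C sketch of the byte-array scheme. The function name print_fraction, the 100-byte buffer and the way the bits are read out (repeated doubling, which is exact for a binary float) are my own choices, not anything prescribed above: it keeps one digit array for the running sum and one for 2^-k, and whenever bit k is set it adds the latter into the former with carries, exactly as in the table.

#include <stdio.h>

#define NDIGITS 100   /* the "say 100 for starters" buffer; very small
                         fractions would need a bigger one */

/* Print the fractional part of x (0 <= x < 1) exactly, using only
 * byte arithmetic on decimal digit arrays.
 * digits[] accumulates the decimal expansion; power[] holds the exact
 * decimal digits of 2^-k for the bit currently being examined. */
static void print_fraction(double x)
{
    unsigned char digits[NDIGITS] = {0};
    unsigned char power[NDIGITS]  = {0};
    power[0] = 5;                       /* 2^-1 = 0.5 */

    for (int bit = 0; bit < NDIGITS && x != 0.0; ++bit) {
        /* Pull out the next fraction bit.  Doubling and subtracting 1
         * are exact operations on a binary float, so no error creeps in. */
        x *= 2.0;
        if (x >= 1.0) {
            x -= 1.0;
            int carry = 0;              /* add power[] into digits[] with carry */
            for (int i = NDIGITS - 1; i >= 0; --i) {
                int d = digits[i] + power[i] + carry;
                digits[i] = (unsigned char)(d % 10);
                carry     = d / 10;
            }
        }
        /* Halve power[] (decimal long division by 2) to get 2^-(k+1). */
        int rem = 0;
        for (int i = 0; i < NDIGITS; ++i) {
            int d = rem * 10 + power[i];
            power[i] = (unsigned char)(d / 2);
            rem      = d % 2;
        }
    }

    /* Print the digits, trimming trailing zeros. */
    int last = NDIGITS - 1;
    while (last > 0 && digits[last] == 0)
        --last;
    putchar('.');
    for (int i = 0; i <= last; ++i)
        putchar('0' + digits[i]);
    putchar('\n');
}

int main(void)
{
    print_fraction(0.5625);  /* binary 0.1001 -> prints .5625 exactly        */
    print_fraction(0.1);     /* prints the exact value the double 0.1 stores */
    return 0;
}

Feed it 0.1 and it prints the 55-digit decimal value the double actually stores, which is the whole point.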
So, for all those SO "can't get an exact result using float" people who don't know what they're talking about, here is proof that floating point fraction values are perfectly exact. Excruciatingly exact. But binary.
For those who take the time to get their heads around how this works, better precision is well within reach. As for the others ... well, I guess they'll keep on not browsing the fora for the answer to a question that has been answered numerous times before, honestly believing they have discovered "broken floating point" (or whatever they call it) and posting a new variant of the same question every day.
"Close to magic," "dark incantation" - that's hilarious!