(I apologize if this is the wrong place to ask this. I think it's definitely programming related, though if this belongs on some other site please let me know)
I grew u
I can think of an algorithm (although I feel sorry for whoever might have written it):
Assume the input is a 32-bit value holding the score as packed decimal (BCD) digits, stored little endian (e.g. 123456 becomes the bytes 0x56 0x34 0x12 0x00 in memory).
Now loop through the bytes until you reach a zero byte. (Failing to find one should never happen, if 0x999999 is indeed guaranteed to be the max... but alas, it's not.)
On every iteration, accumulate that byte's contribution to the actual value and write the result back into the integer (or into some other buffer), driving the loop with a do-while ("loop while") rather than something like "for i = 0 to 3".
You can see how you get a glitch if your value doesn't have a 0x00 byte at the end (i.e. the 32-bit "decimal" integer is larger than 0x999999): the loop keeps reading past the end of the value into whatever memory follows it.
Of course, this is a rather obscure way of calculating the value, but I think it's quite possible that someone used a while/do-while loop rather than a bounded for loop for this.
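To make that concrete, here is a minimal C sketch of the kind of decode loop I have in mind. To be clear, the function name, the packed-BCD layout, and the exact loop shape are all my own guesses for illustration (and it assumes a little-endian host, as above); this is not the game's actual code.

```c
#include <stdint.h>
#include <stdio.h>

/* Decode a score stored as packed BCD in a little-endian 32-bit value.
   The loop's only stopping condition is having read a 0x00 byte. */
static uint32_t decode_bcd_score(const uint32_t *score)
{
    const uint8_t *p = (const uint8_t *)score;  /* walks LSB to MSB on little endian */
    uint32_t value = 0;
    uint32_t place = 1;
    uint8_t b;

    do {
        b = *p++;
        value += (uint32_t)(b & 0x0F) * place;     /* low decimal digit  */
        value += (uint32_t)(b >> 4) * place * 10;  /* high decimal digit */
        place *= 100;
    } while (b != 0x00);   /* no bound on the index: if no byte is 0x00, p runs off the end */

    return value;
}

int main(void)
{
    uint32_t ok = 0x00123456u;  /* "123456": the top byte is the 0x00 terminator */
    printf("%u\n", (unsigned)decode_bcd_score(&ok));  /* prints 123456 */

    /* A score with no zero byte anywhere (say 0x99999999) would make the
       do-while above keep consuming whatever bytes happen to follow the
       score in memory: the glitch scenario described here. */
    return 0;
}
```

The bug is entirely in the loop condition: a bounded for over the four bytes would at worst decode some garbage digits, while the do-while happily walks into neighbouring memory.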
At first I thought this would have the "advantage" of allowing the string to be shown directly to the user (since it would be null-terminated), but of course that doesn't work with little endian. They could've done something similar with big endian, but that would require a backwards loop to overflow, which I find to be a less likely mistake for someone to make.
Perhaps it was a compiler optimization triggered by undefined behavior the programmer was unaware of (like an invalid pointer cast or an aliasing issue)?
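For what it's worth, here is a tiny, entirely made-up example of the sort of aliasing issue I mean; nothing in it comes from the game, it just shows how an optimizer is allowed to assume that a write through the "wrong" pointer type can't touch the score.

```c
#include <stdint.h>
#include <stdio.h>

uint32_t score = 0x00999999u;

/* Undefined behaviour: `half` actually points into `score`, but uint16_t is
   not a type that may alias uint32_t, so the compiler may assume the write
   below leaves `score` unchanged and fold the return value to 0. */
uint32_t bump_half(uint16_t *half)
{
    uint32_t before = score;
    *half += 1;
    return score - before;   /* 1 if the write is seen, possibly 0 when optimized */
}

int main(void)
{
    /* The invalid pointer cast: reinterpreting &score as a uint16_t*. */
    printf("%u\n", (unsigned)bump_half((uint16_t *)&score));
    return 0;
}
```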