I'm trying to convert an int into a custom float, in which the user specifies the number of bits reserved for the exp and mantissa, but I don't understand how the "normalization process" converts the inputs into a select range.
binary32 expects the significand (the preferred term over "mantissa") to be in the range 1.0 <= s < 2.0, unless the number has the minimum exponent. Normalization is just repeated halving or doubling of the value, with matching exponent adjustments, until it falls in that range, as in the example and the sketch below.
Example:
value = 12, exp = 4 is the same as
value = 12/(2*2*2), exp = 4 + 3
value = 1.5, exp = 7
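A minimal sketch of that normalization loop in C; the function name normalize and the double-plus-int representation are illustrative assumptions rather than a fixed API, and it assumes the value being normalized is positive:

    #include <stdio.h>

    /* Repeatedly halve (or double) the significand, adjusting the exponent
       to compensate, until it lands in the range [1.0, 2.0).
       Assumes significand > 0, otherwise the loops would not terminate. */
    static void normalize(double significand, int exponent,
                          double *out_significand, int *out_exponent)
    {
        while (significand >= 2.0) {
            significand /= 2.0;
            exponent += 1;
        }
        while (significand < 1.0) {
            significand *= 2.0;
            exponent -= 1;
        }
        *out_significand = significand;
        *out_exponent = exponent;
    }

    int main(void)
    {
        double s;
        int e;
        normalize(12.0, 4, &s, &e);
        printf("significand = %g, exponent = %d\n", s, e); /* prints 1.5, 7 */
        return 0;
    }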
Since the significand always has a leading digit of 1 (unless the number has the minimum exponent), there is no need to store it. Rather than storing the exponent as 7, a bias of 127 is added to it.
value = 1.5 decimal --> 1.1000...000 binary --> 1000...000 stored fraction (23 bits in all, leading 1 dropped)
exp = 7 --> biased exp = 7 + 127 --> 134 decimal --> 10000110 binary
The binary pattern stored is the concatenation of the "sign" bit, the "biased exponent", and the "significand with the leading 1 bit implied":
0 10000110 1000...000 (1 + 8 + 23 = 32 bits)
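A sketch of packing those three fields into a 32-bit word, assuming the significand has already been normalized to [1.0, 2.0) and using the standard 1/8/23 field widths; the helper name pack_binary32 is hypothetical:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Pack sign | biased exponent | fraction into one 32-bit word:
       1 sign bit, 8 exponent bits, 23 fraction bits. */
    static uint32_t pack_binary32(uint32_t sign, int exponent, double significand)
    {
        uint32_t biased = (uint32_t)(exponent + 127);
        /* Drop the implied leading 1 and keep the next 23 bits. */
        uint32_t fraction = (uint32_t)((significand - 1.0) * (1 << 23));
        return (sign << 31) | (biased << 23) | fraction;
    }

    int main(void)
    {
        /* 1.5 * 2^7 = 192.0 -> 0 10000110 1000...000 -> 0x43400000 */
        printf("0x%08" PRIX32 "\n", pack_binary32(0, 7, 1.5));
        return 0;
    }

For a custom format, the 8, 23 and 127 would be replaced by the user-chosen exponent width, fraction width, and a bias of 2^(exponent bits - 1) - 1 if the format follows the same convention.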
When the biased exponent is 0 (the minimum value), the implied bit is 0, so very small (subnormal) numbers and 0.0 can be stored.
When the biased exponent is 255 (the maximum value), the stored data no longer represents finite numbers but infinities and NaNs ("Not-a-Number" values).
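A small sketch of how a decoder might branch on those two biased-exponent extremes; the masks assume the binary32 layout shown above and the name classify is only illustrative:

    #include <stdint.h>
    #include <stdio.h>

    /* Report which class of value a binary32 bit pattern encodes,
       based only on its biased exponent and fraction fields. */
    static void classify(uint32_t bits)
    {
        uint32_t biased_exp = (bits >> 23) & 0xFFu;    /* 8 exponent bits  */
        uint32_t fraction   = bits & 0x7FFFFFu;        /* 23 fraction bits */

        if (biased_exp == 0)
            puts(fraction == 0 ? "zero" : "subnormal (implied bit is 0)");
        else if (biased_exp == 255)
            puts(fraction == 0 ? "infinity" : "NaN");
        else
            puts("normal finite number");
    }

    int main(void)
    {
        classify(0x00000000u); /* zero                         */
        classify(0x43400000u); /* normal finite number (192.0) */
        classify(0x7F800000u); /* infinity                     */
        classify(0x7FC00000u); /* NaN                          */
        return 0;
    }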
Check the referenced link for more details.