How to normalize a mantissa

你的背包 2020-12-09 05:15

I'm trying to convert an int into a custom float, in which the user specifies the number of bits reserved for the exponent and the mantissa, but I don't understand how to normalize the mantissa.

5 Answers
  •  长情又很酷
    2020-12-09 06:03

    "Normalization process" converts the inputs into a select range.

    binary32 expects the significand (the preferred term over "mantissa") to be in the range 1.0 <= s < 2.0, unless the number has the minimum exponent.

    Example:
    value = 12, exp = 4 is the same as
    value = 12/(2*2*2), exp = 4 + 3, which is
    value = 1.5, exp = 7
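
    A minimal sketch of that normalization loop in C, using double for clarity; value and exp are illustrative names, not code from the question:

        #include <stdio.h>

        int main(void)
        {
            double value = 12.0;
            int exp = 4;               /* the number is value * 2^exp == 192 */

            while (value >= 2.0) {     /* halve until 1.0 <= value < 2.0 */
                value /= 2.0;
                exp += 1;
            }
            /* a value below 1.0 would instead need doubling and exp -= 1 */
            printf("value = %g, exp = %d\n", value, exp);  /* value = 1.5, exp = 7 */
            return 0;
        }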

    Since the significand always has a leading digit of 1 (unless the number has the minimum exponent), there is no need to store that leading bit. And rather than storing the exponent as 7, a bias of 127 is added to it.

    value = 1.5 decimal --> 1.1000...000 binary --> 1000...000 stored fraction (23 bits in all, leading 1 dropped)
    exp = 7 --> biased exp = 7 + 127 = 134 decimal --> 10000110 binary

    The stored binary pattern is the concatenation of the "sign", the "biased exponent" and the "significand with the implied leading 1 bit dropped":

    0 10000110 1000...000 (1 + 8 + 23 = 32 bits)
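
    Assembling that pattern with shifts and ORs might look like the sketch below; the 1/8/23 field widths and the 127 bias are binary32's, and 0x43400000 is the standard encoding of 192.0f:

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
            uint32_t sign = 0;
            uint32_t biased_exp = 7 + 127;   /* 134 = 10000110 binary */
            uint32_t fraction = 0x400000;    /* .1000...000, 23 bits */

            uint32_t bits = (sign << 31) | (biased_exp << 23) | fraction;
            printf("0x%08X\n", bits);        /* 0x43400000, i.e. 192.0f */
            return 0;
        }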
    

    When the biased exponent is 0 (the minimum value), the implied leading bit is 0 instead of 1, so 0.0 and very small (subnormal) numbers can be stored.

    When the biased exponent is 255 (the maximum value), the stored data no longer represents finite numbers but "infinity" and "Not-a-Number" (NaN) values.
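
    A decoder therefore has to branch on the biased exponent field before trusting the implied bit; here is a sketch under the same binary32 assumptions (classify and kind_t are made-up names):

        #include <stdint.h>

        typedef enum { ZERO_OR_SUBNORMAL, NORMAL, INF_OR_NAN } kind_t;

        kind_t classify(uint32_t bits)
        {
            uint32_t biased_exp = (bits >> 23) & 0xFFu;

            if (biased_exp == 0)    return ZERO_OR_SUBNORMAL; /* implied bit is 0 */
            if (biased_exp == 0xFF) return INF_OR_NAN;        /* fraction 0 -> inf */
            return NORMAL;                                    /* implied bit is 1 */
        }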

