What is the purpose of max_digits10 and how is it different from digits10?

南笙 2020-12-03 05:39

I am confused about what max_digits10 represents. According to its documentation, it is 0 for all integral types. The formula for floating-point types, std::ceil(std::numeric_limits<T>::digits * std::log10(2) + 1), is not obvious to me. What exactly does max_digits10 guarantee, and how is it different from digits10?
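For reference, a minimal sketch that prints both members side by side; the values in the comments assume IEEE-754 float/double and a 32-bit int, which is what most mainstream platforms provide:

    #include <iostream>
    #include <limits>

    int main() {
        // digits10 / max_digits10 for a few common types
        std::cout << "float:  " << std::numeric_limits<float>::digits10
                  << " / " << std::numeric_limits<float>::max_digits10 << '\n';  // 6 / 9
        std::cout << "double: " << std::numeric_limits<double>::digits10
                  << " / " << std::numeric_limits<double>::max_digits10 << '\n'; // 15 / 17
        std::cout << "int:    " << std::numeric_limits<int>::digits10
                  << " / " << std::numeric_limits<int>::max_digits10 << '\n';    // 9 / 0
    }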

2 Answers
  •  旧巷少年郎
    2020-12-03 06:24

    In my opinion, this is explained sufficiently at the linked site (and at the page for digits10):

    digits10 is the maximum number of decimal digits that a type can represent in every
    case, independent of the actual value. Take a common 4-byte unsigned integer as an
    example: it has exactly 32 bits, i.e. 32 binary digits. But how many decimal digits?
    9. It can store 100000000 as well as 999999999, but with 10-digit numbers it only
    goes partway: 4000000000 can be stored, 5000000000 cannot. So if we need a guaranteed
    minimum capacity in decimal digits, it is 9, and that is the value of digits10.
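    A minimal sketch of that guarantee; it assumes a 32-bit unsigned int, and the
    static_assert makes the assumption explicit:

        #include <iostream>
        #include <limits>

        int main() {
            static_assert(std::numeric_limits<unsigned int>::digits == 32,
                          "this example assumes a 32-bit unsigned int");
            // Every 9-digit decimal number is representable...
            std::cout << std::numeric_limits<unsigned int>::digits10 << '\n'; // 9

            // ...but only some 10-digit numbers are (max is 4294967295).
            unsigned int fits  = 4000000000u; // still representable
            unsigned int wraps = 5000000000;  // does not fit: wraps modulo 2^32
                                              // to 705032704 (compilers usually warn)
            std::cout << fits << ' ' << wraps << '\n';
        }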

    max_digits10 is only interesting for float/double. It gives the number of decimal
    digits we need to output/save/process in order to capture the whole precision the
    floating-point type can offer. Theoretical example: a variable with content
    123.112233445566. If you show 123.11223344 to the user, it is not as precise as it
    could be. If you show 123.1122334455660000000, the extra digits are pointless, because
    the variable can't hold that much precision anyway. So max_digits10 says how many
    digits of precision you have available in a type.
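    A minimal round-trip sketch of that property, assuming IEEE-754 double (where
    max_digits10 is 17): printing with max_digits10 significant digits is enough for
    text-to-binary conversion to recover the exact original value:

        #include <iostream>
        #include <limits>
        #include <sstream>

        int main() {
            const double original = 0.1; // not exactly representable in binary
            std::ostringstream out;
            // Use max_digits10 significant digits for an exact round trip.
            out.precision(std::numeric_limits<double>::max_digits10);
            out << original;

            double restored = 0.0;
            std::istringstream in(out.str());
            in >> restored;

            std::cout << out.str() << '\n';              // 0.10000000000000001
            std::cout << (restored == original) << '\n'; // 1: exact round trip
        }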
