the precision of printf with specifier “%g”

醉梦人生  2020-12-14 02:30

Can anybody explain to me how the [.precision] in printf works with the specifier "%g"? I'm quite confused by the following output:

double value = 3122.55;

printf("%.16g\n", value);   // output: 3122.55
printf("%.17g\n", value);   // output: 3122.5500000000002

3 Answers
  •  醉酒成梦
    2020-12-14 03:10

    %g uses the shortest representation.

    Floating-point numbers are usually stored in base 2 rather than base 10 (for performance, size, and practicality reasons). Whatever the base, though, there will always be rational numbers that cannot be represented exactly within the fixed amount of storage the variable has.
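
    For instance, here is a minimal sketch (assuming IEEE-754 doubles, which is what practically every current platform uses; the exact digits may differ slightly elsewhere) that asks printf for more digits than %g would normally show:

        #include <stdio.h>

        int main(void)
        {
            double value = 3122.55;      /* the decimal literal cannot be stored exactly */
            printf("%.20g\n", value);    /* typically prints 3122.5500000000001819       */
            return 0;
        }

    The extra digits are not noise added by printf; they belong to the closest value that base-2 storage could actually hold.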

    When you specify %.16g, you're saying that you want the shortest representation of the number, printed with at most 16 significant digits.

    If the shortest representation has more than 16 digits, printf shortens the number string by dropping the 2 at the very end, leaving you with 3122.550000000000, which is 3122.55 in its shortest form; that explains the result you obtained.

    In general, %g will always give you the shortest result possible, meaning that if the sequence of digits representing your number can be shortened without any loss of precision, it will be done.
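
    A short sketch of that behaviour (the values here are arbitrary illustrations, not taken from the question):

        #include <stdio.h>

        int main(void)
        {
            printf("%f\n", 0.5);        /* 0.500000 - %f always pads to the precision    */
            printf("%g\n", 0.5);        /* 0.5      - %g drops the trailing zeros        */
            printf("%g\n", 100000.0);   /* 100000                                        */
            printf("%g\n", 1000000.0);  /* 1e+06    - %g switches to scientific notation
                                                      once the exponent reaches the
                                                      precision (6 by default)           */
            return 0;
        }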

    To extend the example, when you use %.17g, the 17th significant digit is non-zero (a 2 in this case), so you end up with the longer number 3122.5500000000002.

    "My question is: why does %.16g give the exact number while %.17g can't?"

    It's actually %.17g that gives you the more accurate result: it shows the stored value rounded to 17 significant digits, which is enough to identify the double uniquely, while %.16g gives you only a rounded approximation that differs from the value held in memory.
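
    This is also why 17 significant digits are the usual choice when a double has to be written out as text and read back in unchanged; a minimal round-trip sketch, assuming IEEE-754 doubles and a C11 compiler for DBL_DECIMAL_DIG:

        #include <float.h>
        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            double value = 3122.55;
            char buf[64];

            /* DBL_DECIMAL_DIG is 17 for IEEE-754 doubles: enough digits to
               identify the stored value uniquely                            */
            snprintf(buf, sizeof buf, "%.*g", DBL_DECIMAL_DIG, value);

            double back = strtod(buf, NULL);
            printf("%s -> round trip is %s\n", buf,
                   back == value ? "exact" : "lossy");
            return 0;
        }

    With 16 digits the printed string can read back as a slightly different double; with 17 it cannot.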

    If you want fixed precision instead, use %f or %F.
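
    For example, %f fixes the number of digits after the decimal point rather than the number of significant digits:

        #include <stdio.h>

        int main(void)
        {
            double value = 3122.55;
            printf("%.2f\n", value);   /* 3122.55                                 */
            printf("%.6f\n", value);   /* 3122.550000 - trailing zeros are kept   */
            return 0;
        }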
