The precision of printf with specifier “%g”

醉梦人生 2020-12-14 02:30

Can anybody explain to me how the [.precision] in printf works with the specifier "%g"? I'm quite confused by the following output:

double value = 3122.55;
printf("%.16g\n", value); // 3122.55
printf("%.17g\n", value); // 3122.5500000000002

3 Answers
  •  野趣味 (OP)
     2020-12-14 03:09

    The decimal value 3122.55 can't be exactly represented in binary floating point. When you write

    double value = 3122.55;
    

    you end up with the closest possible value that can be exactly represented. As it happens, that value is exactly 3122.5500000000001818989403545856475830078125.

    That value to 16 significant figures is 3122.550000000000. To 17 significant figures, it's 3122.5500000000002. And so those are the representations that %.16g and %.17g give you — except that %g also strips trailing zeros, so the 16-figure result is printed as just 3122.55.

    Note that the nearest double representation of a decimal number is guaranteed to be accurate to at least 15 decimal significant figures. That's why you need to print to 16 or 17 digits to start seeing these apparent inaccuracies in your output in this case - to any smaller number of significant figures, the double representation is guaranteed to match the original decimal number that you typed.
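    A minimal sketch of the behaviour described above (the comments show the output I'd expect from a conforming C library; the exact stored value is the binary64 nearest to 3122.55):

    ```c
    #include <stdio.h>

    int main(void)
    {
        double value = 3122.55;

        /* The stored double is exactly
           3122.5500000000001818989403545856475830078125. */
        printf("%.15g\n", value);  /* 3122.55 — matches the decimal typed */
        printf("%.16g\n", value);  /* 3122.55 — %g strips trailing zeros  */
        printf("%.17g\n", value);  /* 3122.5500000000002                  */
        return 0;
    }
    ```

    At 15 (or fewer) significant figures the rounded result is all trailing zeros past 3122.55, so the output still looks exact; the 17th figure is the first one where the binary approximation shows through.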

    One final note: you say that

    I've learned that %g uses the shortest representation.

    While this is a popular summary of how %g behaves, it's also wrong. See What precisely does the %g printf specifier mean? where I discuss this at length, and show an example of %g using scientific notation even though it's 4 characters longer than not using scientific notation would've been.
