The precision of printf with specifier "%g"

醉梦人生 2020-12-14 02:30

Can anybody explain to me how the [.precision] in printf works with the specifier "%g"? I'm quite confused by the following output:

double value = 3122.55;
printf("%.16g\n", value); //output: 3122.55
printf("%.16e\n", value); //output: 3.1225500000000002e+03
printf("%.16f\n", value); //output: 3122.5500000000001819

3 Answers
  •  星月不相逢
    2020-12-14 03:06

    The decimal value 3122.55 cannot be represented exactly in binary floating point.

    A double-precision binary floating point value can represent approximately 15 significant figures (note: significant figures, not decimal places) of a decimal value correctly; beyond that the digits may not match, and at the extremes they have no real meaning at all, being merely an artefact of the conversion from the binary floating point representation to a string of decimal digits.
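
    As a quick illustration (a minimal sketch, not part of the original answer; the exact digits assume IEEE 754 double precision and a correctly rounding printf), asking for more digits than a double can hold makes that conversion artefact visible, and <float.h> provides the relevant limits:

    #include <stdio.h>
    #include <float.h>

    int main(void)
    {
        double value = 3122.55;
        // DBL_DIG decimal digits always survive a text round trip; DBL_DECIMAL_DIG (C11)
        // digits are needed to uniquely identify a double. Typically 15 and 17.
        printf("DBL_DIG = %d, DBL_DECIMAL_DIG = %d\n", DBL_DIG, DBL_DECIMAL_DIG);
        // Digits beyond roughly the 17th are artefacts of the binary-to-decimal conversion:
        printf("%.25f\n", value); // e.g. 3122.5500000000001818989403546
        return 0;
    }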

    I've learned that %g uses the shortest representation.

    The rule is:

    Where P is the precision (or 6 if no precision is specified, or 1 if the precision is zero), and X is the decimal exponent that would be required for e/E-style notation, then:

    • if P > X ≥ −4, the conversion is with style f or F and precision P − 1 − X.
    • otherwise, the conversion is with style e or E and precision P − 1.

    This modification of the precision for %g is what produces the different output of:

    printf("%.16g\n", value); //output: 3122.55
    printf("%.16e\n", value); //output: 3.1225500000000002e+03
    printf("%.16f\n", value); //output: 3122.5500000000001819
    

    despite having the same precision in the format specifier.
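
    To make the two branches of the rule concrete, here is a minimal sketch (the extra values 31225500.0 and 0.0000312255 are chosen for illustration and are not from the original question). With the default P = 6, the exponent X decides the style, and %g then strips trailing zeros:

    #include <stdio.h>

    int main(void)
    {
        // P = 6, X = 3:  6 > 3 >= -4 holds, so style f with precision 6 - 1 - 3 = 2
        printf("%g\n", 3122.55);      // prints 3122.55
        // P = 6, X = 7:  P > X fails, so style e with precision 6 - 1 = 5
        printf("%g\n", 31225500.0);   // prints 3.12255e+07
        // P = 6, X = -5: X >= -4 fails, so style e with precision 5
        printf("%g\n", 0.0000312255); // prints 3.12255e-05
        return 0;
    }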
