Can anybody explain how the [.precision] in printf works with the specifier "%g"? I'm quite confused by the following output:
double value = 3122.55;
The decimal value 3122.55 cannot be represented exactly in binary floating point.
A double precision binary floating point value can represent approximately 15 significant figures (note: significant figures, not decimal places) of a decimal value correctly; beyond that the digits may not match the original decimal value, and at the extremes they have no real meaning at all, being an artefact of the conversion from the floating point representation to a string of decimal digits.
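You can see this directly by asking printf for more digits than a double meaningfully holds (a minimal sketch; the exact trailing digits depend on the platform's conversion of the IEEE-754 double):

#include <stdio.h>

int main(void)
{
    double value = 3122.55;

    /* The stored double is the nearest representable value, not 3122.55
       itself; on a typical IEEE-754 system this prints something like
       3122.55000000000018189894 */
    printf("%.20f\n", value);
    return 0;
}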
I've learned that %g uses the shortest representation.
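That is roughly the effect, but not literally the rule: %g chooses between %f-style and %e-style output and strips trailing zeros, which usually yields a compact result. A quick illustration (values chosen here purely for demonstration):

#include <stdio.h>

int main(void)
{
    /* With the default precision of 6, %g switches style based on the
       decimal exponent and drops trailing zeros. */
    printf("%g\n", 100000.0);   /* 100000  (f-style)             */
    printf("%g\n", 1000000.0);  /* 1e+06   (e-style, zeros gone) */
    printf("%g\n", 0.0001);     /* 0.0001  (f-style)             */
    printf("%g\n", 0.00001);    /* 1e-05   (e-style)             */
    return 0;
}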
The rule is:
Where P is the precision (or 6 if no precision is specified, or 1 if the precision is zero), and X is the decimal exponent required for e/E-style notation:

if X >= -4 and X < P, the conversion uses style f (or F) with precision P - 1 - X;
otherwise, it uses style e (or E) with precision P - 1.

Unless the # flag is given, trailing zeros are removed from the fractional portion of the result, and the decimal point is removed if no fractional digits remain.
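Applying that rule to 3122.55 (where X = 3), here is a small sketch of how the chosen style and effective precision change with P:

#include <stdio.h>

int main(void)
{
    double value = 3122.55;   /* decimal exponent X = 3 */

    printf("%.2g\n", value);  /* P = 2 <= X: e-style, precision 1 -> 3.1e+03 */
    printf("%.4g\n", value);  /* P = 4 >  X: f-style, precision 0 -> 3123    */
    printf("%.7g\n", value);  /* P = 7 >  X: f-style, precision 3 gives
                                 3122.550, trailing zero stripped -> 3122.55 */
    return 0;
}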
This modified precision, together with the removal of trailing zeros, is why %g produces different output from %e and %f here:
printf("%.16g\n", value); //output: 3122.55
printf("%.16e\n", value); //output: 3.1225500000000002e+03
printf("%.16f\n", value); //output: 3122.5500000000001819
despite all three having the same precision in the format specifier: for %g the precision is the total number of significant digits, while for %e and %f it is the number of digits after the decimal point.
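The difference is easier to see with a smaller precision (a minimal sketch using the same value):

#include <stdio.h>

int main(void)
{
    double value = 3122.55;

    /* Precision 3 means three significant digits for %g, but three
       digits after the decimal point for %e and %f. */
    printf("%.3g\n", value);  /* 3.12e+03  (3 significant digits) */
    printf("%.3e\n", value);  /* 3.123e+03 (4 significant digits) */
    printf("%.3f\n", value);  /* 3122.550  (7 significant digits) */
    return 0;
}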