I was working on this program and I noticed that using `%f` for a double and `%d` for a float gives me completely different output. Does anybody know why this happens?
`%d` stands for decimal, and it expects an argument of type `int` (or a smaller signed integer type that then gets promoted to `int`). The floating-point types `float` and `double` are both passed the same way (promoted to `double`), and both use `%f`. In C99 you can also use `%lf` to signify the larger size of `double`, but with `printf` this is purely cosmetic. (Note that with `scanf` no promotion occurs, so there the length modifier actually makes a difference: `%f` expects a `float *` and `%lf` a `double *`.)