According to K&R C section 1.6, a char is a type of integer. So why do we need %c, and why can't we use %d for everything?
While it's an integer, %c tells printf to interpret its numeric value as a character for display. Take the character a, for instance: with %d you'd get an integer, e.g., 97, the internal representation of the character a, whereas %c displays the character 'a' itself (assuming ASCII).
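A minimal sketch showing the two specifiers applied to the same char (the value 97 assumes an ASCII system):

    #include <stdio.h>

    int main(void)
    {
        char ch = 'a';

        /* %d prints the char's numeric value; %c prints the character itself. */
        printf("%d\n", ch);  /* prints 97 on an ASCII system */
        printf("%c\n", ch);  /* prints a */

        return 0;
    }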
In other words, it's a matter of internal representation vs. interpretation for external purposes (such as output with printf).