I'd like to know if there is an easy way of determining the maximum number of characters needed to print a decimal int.
I know <limits.h> contains constants like INT_MAX.
If you assume CHAR_BIT is 8 (required on POSIX, so a safe assumption for any code targeting POSIX systems, as well as other mainstream systems like Windows), a cheap safe formula is 3*sizeof(int)+2. If you don't want to assume that, you can make it 3*sizeof(int)*CHAR_BIT/8+2, which is still a safe upper bound.
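For example, here is a minimal sketch of how the formula might be used to size a buffer for snprintf (the macro name PRINT_INT_BUFSIZE is my own, not from the question):

```c
#include <limits.h>
#include <stdio.h>

/* Upper bound on the characters needed to print an int in decimal,
   assuming CHAR_BIT == 8; the +2 covers a possible '-' and the '\0'. */
#define PRINT_INT_BUFSIZE (3 * sizeof(int) + 2)

int main(void)
{
    char buf[PRINT_INT_BUFSIZE];

    /* INT_MIN has the longest decimal representation of any int. */
    snprintf(buf, sizeof buf, "%d", INT_MIN);
    puts(buf);
    return 0;
}
```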
In case you're interested in why this works: sizeof(int) is essentially a logarithm of INT_MAX (roughly its log base 2^CHAR_BIT), and converting between logarithms of different bases (e.g. to base 10) is just multiplication by a constant. In particular, 3 is an integer upper bound on log base 10 of 256 (about 2.41).
The +2 is to account for a possible sign and null termination.
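As a quick check, with a 32-bit int and CHAR_BIT of 8 the formula gives 3*4+2 = 14, comfortably above the 12 bytes actually needed for "-2147483648" plus its terminator.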