On what exactly does the size of a primitive data type like int depend?
I think there are two parts to this question:
What sizes primitive types are allowed to be.
This is specified by the C and C++ standards: each type must be able to represent a minimum range of values, which implicitly places a lower bound on its size in bits (e.g. long must be at least 32 bits wide to comply with the standard).
The standards do not specify the size in bytes of anything other than char, because the width of a byte is itself implementation-defined: char is always exactly one byte, but a byte (the CHAR_BIT macro) may be, say, 16 bits wide.
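For illustration, here is a minimal sketch (assuming a C11 compiler, since it uses _Static_assert) that checks the standard's guaranteed minimum ranges at compile time and prints CHAR_BIT:

```c
/* A minimal sketch, assuming a C11 compiler and only the
 * standard <limits.h> macros. */
#include <limits.h>
#include <stdio.h>

/* The standard guarantees minimum ranges, not exact sizes: LONG_MAX must be
 * at least 2^31 - 1, which implies long is at least 32 bits wide. */
_Static_assert(LONG_MAX >= 2147483647L, "long must cover at least a 32-bit range");
_Static_assert(INT_MAX  >= 32767,       "int must cover at least a 16-bit range");

int main(void)
{
    /* sizeof(char) is 1 by definition, but a byte is CHAR_BIT bits,
     * which is at least 8 and may be larger (e.g. 16 on some DSPs). */
    printf("CHAR_BIT     = %d\n", CHAR_BIT);
    printf("sizeof(char) = %zu\n", sizeof(char)); /* always prints 1 */
    return 0;
}
```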
The actual size as defined by the implementation.
This, as other answers have already pointed out, depends on the implementation: the compiler. And the compiler implementation, in turn, is heavily influenced by the target architecture. So it is plausible to have two compilers running on the same OS and architecture yet choosing different sizes for int. The only assumption you can make is the one stated by the standard (given that the compiler implements it).
There may also be additional ABI requirements (e.g. a fixed size for enums).
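To see what your implementation actually chose, a short program like the sketch below can be compiled with different toolchains on the same machine; the sizes it reports may legitimately differ (e.g. long is 4 bytes under the LLP64 model used by 64-bit Windows but 8 bytes under the LP64 model used by typical 64-bit Linux):

```c
/* Illustrative only: prints the sizes this particular compiler/ABI chose. */
#include <limits.h>
#include <stdio.h>

int main(void)
{
    printf("CHAR_BIT          = %d\n", CHAR_BIT);
    printf("sizeof(short)     = %zu\n", sizeof(short));
    printf("sizeof(int)       = %zu\n", sizeof(int));
    printf("sizeof(long)      = %zu\n", sizeof(long));
    printf("sizeof(long long) = %zu\n", sizeof(long long));
    return 0;
}
```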