I occasionally come across an integer type (e.g. the POSIX signed integer type off_t) where it would be helpful to have a macro for its minimum and maximum values.
I have used the following pattern to solve the problem (assuming there are no padding bits):
    ((((type) 1 << (number_of_bits_in_type - 2)) - 1) << 1) + 1
The number_of_bits_in_type is derived as CHAR_BIT * sizeof (type), as in the other answers.
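Putting that together, here is a minimal sketch as a macro (the name SIGNED_TYPE_MAX is mine, not anything standard; it assumes no padding bits):

    #include <limits.h>  /* CHAR_BIT */

    /* Largest value of a signed integer type with no padding bits.
       The name SIGNED_TYPE_MAX is illustrative, not standard. */
    #define SIGNED_TYPE_MAX(type) \
        (((((type) 1 << (CHAR_BIT * sizeof (type) - 2)) - 1) << 1) + 1)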
We basically "nudge" the 1 bits into place, while avoiding the sign bit.
You can see how this works. Suppose that the width is 16 bits. Then we take 1 and shift it left by 16 - 2 = 14, producing the bit pattern 0100000000000000. We carefully avoided shifting a 1 into the sign bit. Next, we subtract 1 from this, obtaining 0011111111111111. See where this is going? We shift this left by 1, obtaining 0111111111111110, again avoiding the sign bit. Finally we add 1, obtaining 0111111111111111, which is the highest signed 16-bit value.
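As a quick sanity check, assuming the SIGNED_TYPE_MAX sketch above, the results can be compared against the constants in <limits.h>:

    #include <assert.h>
    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* These should hold on any implementation without padding bits. */
        assert(SIGNED_TYPE_MAX(short) == SHRT_MAX);
        assert(SIGNED_TYPE_MAX(int) == INT_MAX);
        assert(SIGNED_TYPE_MAX(long) == LONG_MAX);
        printf("INT_MAX via the macro: %d\n", SIGNED_TYPE_MAX(int));
        return 0;
    }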
This should work fine on one's complement and sign-magnitude machines, if you work in a museum where they have such things. It doesn't work if you have padding bits. For that, probably all you can do is #ifdef on specific platforms, or switch to alternative configuration mechanisms outside of the compiler and preprocessor.
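For the minimum, on a two's complement machine the value is simply -max - 1; a sketch, again assuming no padding bits (on those museum one's complement and sign-magnitude machines, the minimum is -max instead):

    /* Two's complement minimum. On one's complement or sign-magnitude
       machines the minimum is -SIGNED_TYPE_MAX(type) instead. */
    #define SIGNED_TYPE_MIN(type) (-SIGNED_TYPE_MAX(type) - 1)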