Why is the range of `signed char` -128 to 127 and not -127 to 128?
In 8-bit 2's complement encoding the numbers `-128` and `+128` have the same representation: `10000000`. So, the designer of the hardware is presented with an obvious dilemma: how to interpret the bit pattern `10000000`. Formally, it will work either way. If they decide to interpret it as `+128`, the resultant range will be `-127..+128`. If they decide to interpret it as `-128`, the resultant range will be `-128..+127`.
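
One way to observe the collision is to truncate both values to 8 bits; conversion to `unsigned char` is defined as reduction modulo 256, so it exposes the stored bit pattern directly. A minimal C sketch (variable names are mine):

```c
#include <stdio.h>

int main(void)
{
    /* Conversion to unsigned char keeps only the low 8 bits
       (reduction modulo 256), i.e. the stored bit pattern. */
    unsigned char plus128  = (unsigned char)128;   /* +128 -> 10000000 */
    unsigned char minus128 = (unsigned char)-128;  /* -128 -> 10000000 */

    printf("%02X %02X\n", plus128, minus128);      /* prints: 80 80 */
    return 0;
}
```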
In actual real-life 2's complement representation the latter approach is chosen, because it satisfies the following nice convention: all bit patterns with a `1` in the high-order bit represent negative numbers.
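
A small sketch that checks this convention exhaustively; it assumes an ordinary 2's complement target, since converting an out-of-range pattern back to `signed char` is implementation-defined before C23:

```c
#include <stdio.h>

int main(void)
{
    /* Walk all 256 bit patterns and confirm that the ones with the
       high-order bit set are exactly the ones read back as negative. */
    for (int i = 0; i < 256; i++) {
        unsigned char bits = (unsigned char)i;
        signed char value  = (signed char)bits; /* implementation-defined
                                                   pre-C23; wraps on 2's
                                                   complement targets */
        int high_bit_set = (bits & 0x80u) != 0;
        if (high_bit_set != (value < 0))
            printf("convention broken for pattern %02X\n", bits);
    }
    printf("done\n"); /* on a typical machine, nothing else is printed */
    return 0;
}
```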
It is worth noting, though, that the language specification does not require 2's complement implementations to treat the `100...0` bit pattern as a valid value of any signed integer type. E.g. implementations are allowed to restrict 8-bit `signed char` to the `-127..+127` range and regard `10000000` as an invalid bit combination (a trap representation).
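
Because of that, portable code should take the actual range from `<limits.h>` rather than hard-coding -128; a minimal sketch:

```c
#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* SCHAR_MIN is only guaranteed to be at most -127; it is -128 on
       mainstream 2's complement implementations, but an implementation
       that treats 10000000 as a trap representation may report -127. */
    printf("signed char range: %d..%d\n", SCHAR_MIN, SCHAR_MAX);
    return 0;
}
```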