The maximum value of an n-bit integer is 2^n - 1. Why do we have the "minus 1"? Why isn't the maximum just 2^n?
2^32 in binary is one followed by 32 zeroes, for a total of 33 bits. That doesn't fit in a 32-bit int value.
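A short Python sketch (my own illustration, using plain Python ints rather than a fixed-width type) shows the bit counts directly:

    n = 32

    print(bin(2 ** n))                # '0b1' followed by 32 zeros: 33 bits in total
    print((2 ** n).bit_length())      # 33 -- one bit more than a 32-bit int has
    print((2 ** n - 1).bit_length())  # 32 -- so 2**n - 1 is the largest value that fits
    print(2 ** n - 1)                 # 4294967295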
In most programming languages, 0 is a number too.
If you're just starting out with programming, I suggest you take a look at this wiki article on signed number representations.
As Vicente has stated, the reason you subtract 1 is because 0 is also an included number. As a simple example, with 3 bits, you can represent the following non-negative integers:
0 : 000
1 : 001
2 : 010
3 : 011
4 : 100
5 : 101
6 : 110
7 : 111
Anything beyond that requires more than 3 digits. Hence, the maximum number you can represent is 2^3 - 1 = 7. You can extend this to any n and say that you can express integers in the range [0, 2^n - 1]. Now you can go read that article and understand the different forms, how negative integers are represented, and so on.
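If it helps, here is a small Python sketch (my own, not from the answer above) that prints the same table for any width n and confirms the range [0, 2^n - 1]:

    n = 3  # width in bits; a hypothetical example value, try any small n

    for value in range(2 ** n):            # 2**n patterns in total
        print(f"{value} : {value:0{n}b}")  # the value and its n-bit binary form

    print("maximum:", 2 ** n - 1)          # 7 when n = 3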
Because 0 is also represented. The amount of numbers you can represent is indeed 2^n with n bits, but the maximum number is 2^n - 1, because you have to start counting at 0, that is, with every bit set to 0.
For 1 bit: 0, 1
For 2 bits: 0, 1, 2, 3
For 3 bits: 0, 1, 2, 3, 4, 5, 6, 7
And so on.
In the field of computing we start counting from 0.
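A tiny Python sketch of that distinction (my own example): the count of values is 2^n, while the largest value is 2^n - 1, because 0 takes one of the slots.

    for n in (1, 2, 3):
        count = 2 ** n   # how many distinct values n bits can represent
        print(f"{n} bits: {count} values, from 0 to {count - 1}")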
Why do we have the "minus 1"?
Just answer the question: what is the maximum value of a 1-bit integer?
A one-bit integer can store only two (2^1) values: 0 and 1. The last value is 1₂ = 1₁₀.
A two-bit integer can store only four (2^2) values: 00, 01, 10 and 11. The last value is 11₂ = 3₁₀.
Thus, when an integer can store N values, the last value will be N - 1, because counting starts from zero.
An n-bit integer can store 2^n values, where the last one is 2^n - 1.
Example: one byte can store 2^8 (256) values, where the first is 0 and the last is 255.
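As a quick Python sketch (my own example, masking with 0xFF to emulate an 8-bit byte): 255 still fits, but 255 + 1 needs a ninth bit and wraps back to 0.

    BYTE_MASK = 0xFF              # keep only the low 8 bits, like a real byte

    print(2 ** 8)                 # 256 distinct values
    print(2 ** 8 - 1)             # 255, the last (largest) one
    print((255 + 1) & BYTE_MASK)  # 0 -- 256 does not fit in 8 bits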
Why isn't the maximum just 2^n?
Because counting starts from zero. Look at the first value of any n-bit integer.
For example, a byte: 00000000
This would be very confusing if:
00000001 meant 2
00000000 meant 1
Wouldn't it? ;-)