The maximum value of an n-bit integer is 2^n - 1. Why do we have the "minus 1"? Why isn't the maximum just 2^n?
If you're just starting out with programming, I suggest you take a look at this wiki article on signed number representations.
As Vicente has stated, the reason you subtract 1 is that 0 is also one of the values being counted. As a simple example, with 3 bits you can represent the following non-negative integers:
0 : 000
1 : 001
2 : 010
3 : 011
4 : 100
5 : 101
6 : 110
7 : 111
Anything beyond that requires more than 3 bits. Hence, the maximum number you can represent is 2^3 - 1 = 7. You can extend this to any n: with n bits there are 2^n distinct bit patterns, and since one of them is used for 0, the representable integers are exactly the range [0, 2^n - 1]. Now you can go read that article and understand the different forms, representing negative integers, etc.
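If it helps to see the counting argument run, here is a minimal Python sketch (the helper name `max_unsigned` is just for illustration, not from any library):

```python
# n bits give 2**n distinct patterns; since 0 takes one of them,
# the largest representable value is 2**n - 1.

def max_unsigned(n_bits: int) -> int:
    """Largest value representable in n_bits unsigned bits."""
    return 2 ** n_bits - 1

for n in (3, 8, 16, 32):
    patterns = 2 ** n  # total number of distinct bit patterns
    print(f"{n:2d} bits: {patterns} patterns, values 0 .. {max_unsigned(n)}")

# 3 bits -> 8 patterns, values 0 .. 7, matching the table above.
```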