Why is the maximum value of an unsigned n-bit integer 2^n-1 and not 2^n?

Asked by 萌比男神i on 2020-11-28 06:01

The maximum value of an n-bit integer is 2^n - 1. Why do we have the "minus 1"? Why isn't the maximum just 2^n?

12 Answers
  • 2020-11-28 06:36

    In most programming languages the default integer type is signed (see two's complement).

    For example, in Java and .NET the most significant bit of an integer is reserved for the sign:

    • 0 => positive or zero number
    • 1 => negative number

    That leaves 31 value bits in a 32-bit integer, so the magnitude is limited to 2^31, and subtracting 1 gives the maximum value, 2^31 - 1.

    Why does -1 appear?

    Look at a simpler example with an unsigned byte (8 bits):

      1  1  1  1  1  1  1  1
    128 64 32 16  8  4  2  1  <-- the rightmost bit represents 1, not 2
    --- --------------------
    128 + 127 = 255 
    

    As others have pointed out, the rightmost bit can contribute at most 1, not 2, because each bit holds only a 0 or a 1.

    Int32.MaxValue = 2147483647 (.NET)
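
    The arithmetic above can be checked with a minimal Python sketch (Python is used here only for illustration; the answer's claims are about Java/.NET):

```python
# Signed 32-bit: 1 sign bit + 31 value bits -> maximum is 2^31 - 1.
signed_max = (1 << 31) - 1
print(signed_max)          # 2147483647, i.e. Int32.MaxValue in .NET

# Unsigned 8-bit: all eight bits set -> 2^8 - 1.
unsigned_byte_max = (1 << 8) - 1
print(unsigned_byte_max)   # 255 = 128 + 127
```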
    
  • 2020-11-28 06:40

    If I ask you what the biggest value is that you can fit into a 2-digit number, would you say it's 10^2 (100) or 10^2 - 1 (99)? Obviously the latter. It follows that if I ask you what the biggest n-digit number is, it would be 10^n - 1. But why the "-1"? Quite simply, because 0 is also a valid 2-digit number (we can write it as 00, but everyone just writes 0), so one of the 10^2 patterns is spent on zero.

    Let's replace 10 with an arbitrary base, b. It follows that for a given base b, the biggest n-digit number you can represent is b^n - 1. Using a 32-bit (n = 32) base-2 (b = 2) number, we see that the biggest value we can represent is 2^32 - 1.


    Another way of thinking about it is to use smaller numbers. Say we have a 1-bit number. Would you tell me the biggest value it can represent is 2^1 or 2^1 - 1?
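
    The b^n - 1 rule above can be sketched in a few lines of Python (the helper name `max_n_digit` is ours, used only for illustration):

```python
def max_n_digit(base, n):
    """Largest value representable with n digits in the given base."""
    return base ** n - 1

print(max_n_digit(10, 2))  # 99 -- the biggest 2-digit decimal number
print(max_n_digit(2, 32))  # 4294967295 -- the biggest 32-bit unsigned value
print(max_n_digit(2, 1))   # 1 -- a single bit holds at most 1, not 2
```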

  • 2020-11-28 06:41

    2^32 in binary:

    1 00000000 00000000 00000000 00000000
    

    2^32 - 1 in binary:

    11111111 11111111 11111111 11111111
    

    As you can see, 2^32 takes 33 bits, whereas 2^32 - 1 is the maximum value of a 32 bit integer.

    The reason for the seemingly "off-by-one" error here is that the lowest bit represents a one, not a two. So the first bit is worth 2^0, the second bit 2^1, and so on.
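
    The bit counts above are easy to verify in Python (chosen here only for illustration), using the built-in `int.bit_length`:

```python
# 2^32 needs 33 bits; 2^32 - 1 fits exactly in 32 bits.
print((2 ** 32).bit_length())      # 33
print((2 ** 32 - 1).bit_length())  # 32
print(format(2 ** 32 - 1, 'b'))    # thirty-two 1s
```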

  • 2020-11-28 06:41

    It's because in computing, numbers start at 0. So if you have, for example, 32 address lines (2^32 addressable bytes), they will be in the range [0, 2^32).
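
    A small Python sketch of the half-open range idea, using n = 8 so the range is cheap to build (the variable names are ours, for illustration only):

```python
# With n address lines there are 2^n distinct addresses,
# numbered 0 .. 2^n - 1, i.e. the half-open range [0, 2^n).
n = 8
addresses = range(0, 2 ** n)
print(len(addresses))   # 256 distinct addresses
print(addresses[-1])    # 255, the highest address
```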

  • 2020-11-28 06:43

    The -1 is because integers start at 0, but our counting starts at 1.

    So, 2^32-1 is the maximum value for a 32-bit unsigned integer (32 binary digits). 2^32 is the number of possible values.

    To see why, look at decimal: 10^2 - 1 is the maximum value of a 2-digit decimal number (99). Because our intuitive human counting starts at 1 but integers are 0-based, 10^2 is the number of possible values (100).
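
    The decimal analogy can be checked with a short Python sketch (used here purely as a calculator):

```python
# 0-based values vs. 1-based counting, in decimal:
two_digit_values = range(0, 10 ** 2)   # 00 .. 99
print(len(two_digit_values))           # 100 possible values
print(max(two_digit_values))           # 99, the maximum value
```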

  • 2020-11-28 06:43

    The numbers from 0 to N are not N numbers; they are N+1. This is not obvious to most people, and as a result many programs have off-by-one bugs because of it.
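
    A one-line Python check of that fencepost fact (N = 10 is an arbitrary example):

```python
# The inclusive range 0..N contains N + 1 numbers, not N.
N = 10
print(len(range(0, N + 1)))  # 11
```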
