-1 can be represented in 4-bit binary as (2's complement) 1111.
15 is also represented as 1111.
So, how does the CPU differentiate between 15 and -1 when it gets 1111?
Most of the previous answers mentioned separate opcodes. That might be true for more complicated operations like multiplication and division, but for simple addition and subtraction that is not how the CPU works.
The CPU keeps data about the result of an instruction in its flags register. On x86 (where I am most familiar) the two most important flags here are the "overflow" and "carry" flags.
Basically the CPU doesn't care if the number is signed or unsigned; it treats them both the same. The carry flag is set when the result goes over the highest unsigned value the register can contain. The overflow flag is set when the result goes over or under the range of a signed number. If you are working with unsigned numbers you check the carry flag and ignore the overflow flag. If you are working with signed numbers you check the overflow flag and ignore the carry flag.
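To make that concrete, here is a small Python sketch of a 4-bit adder that produces both flags from the same addition (an illustrative model, not real x86; the function name `add4` is mine):

```python
def add4(a, b):
    """Add two raw 4-bit patterns (0..15); return (result, carry, overflow).

    The addition itself is identical for signed and unsigned inputs;
    the two flags just record different out-of-range conditions.
    """
    raw = a + b
    result = raw & 0b1111        # keep the low 4 bits, like a 4-bit register
    carry = raw > 0b1111         # result did not fit as a 4-bit unsigned value
    # Signed overflow: both inputs have the same sign bit,
    # but the result's sign bit differs.
    sign = lambda x: (x >> 3) & 1
    overflow = sign(a) == sign(b) and sign(result) != sign(a)
    return result, carry, overflow
```

Running `add4(0b1111, 0b1111)` gives result `1110` with carry set and overflow clear, while `add4(0b0111, 0b0111)` gives the same bit pattern `1110` with carry clear and overflow set, matching the examples below.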
Here are some examples:
Unsigned:
1111 (15) + 1111 (15) = 1110 (14)
What you do now is check the carry flag, which in this case is set; prepending it to the 4-bit result gives the final answer
1 1110 (30)
Signed:
1111 (-1) + 1111 (-1) = 1110 (-2)
In this case you ignore the carry flag (it happens to be set); the overflow flag will be clear, so the 4-bit result 1110 (-2) is correct as it stands.
Unsigned:
0111 (7) + 0111 (7) = 1110 (14)
When you check the carry flag it should be zero.
Signed:
0111 (7) + 0111 (7) = 1110 (-2)
In this case the overflow flag would be set, meaning the true sum (14) does not fit in the signed 4-bit range of -8 to 7, so the result 1110 (-2) is wrong.
So in summary, a number is signed or unsigned only based on your interpretation of it. The CPU gives you the tools necessary to distinguish between them, but doesn't distinguish on its own.
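That interpretation step can be sketched in a few lines of Python (the function names are mine; this just decodes a raw 4-bit pattern two ways):

```python
def as_unsigned(bits):
    """Read a 4-bit pattern as an unsigned value (0..15)."""
    return bits & 0b1111

def as_signed(bits):
    """Read the same 4-bit pattern as a two's-complement value (-8..7)."""
    bits &= 0b1111
    return bits - 16 if bits & 0b1000 else bits

# The same bit pattern yields different values depending on the reader:
# as_unsigned(0b1111) -> 15, as_signed(0b1111) -> -1
```

Nothing about the bits themselves changes; only the decoding rule does, which is exactly why the CPU can use one adder for both.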