I had an assignment where I had the following code excerpt:

/*OOOOOHHHHH I've just noticed instead of an int here should be an *short int* I will just left
The conversion from `int` to `short int` is implementation-defined. The reason you get the result you do is that your implementation is just truncating your number:
decimal | binary
-----------+------------------------
511 | 1 1111 1111
512 | 10 0000 0000
511 * 512 | 11 1111 1110 0000 0000
Since you appear to have a 16-bit `short int` type, that `11 1111 1110 0000 0000` becomes just `1111 1110 0000 0000`, which is the two's complement representation of `-512`:
decimal | binary (x) | ~x | -x == ~x + 1
---------+---------------------+---------------------+---------------------
512 | 0000 0010 0000 0000 | 1111 1101 1111 1111 | 1111 1110 0000 0000