Can someone clarify what happens when an integer is cast to a short in C? I'm using a Raspberry Pi, so I'm aware that an int is 32 bits, and that a short is therefore 16 bits.
Perhaps let the code speak for itself:
#include <stdio.h>

/* Prints one byte as eight binary digits, most significant bit first. */
#define BYTETOBINARYPATTERN "%d%d%d%d%d%d%d%d"
#define BYTETOBINARY(byte) \
    ((byte) & 0x80 ? 1 : 0), \
    ((byte) & 0x40 ? 1 : 0), \
    ((byte) & 0x20 ? 1 : 0), \
    ((byte) & 0x10 ? 1 : 0), \
    ((byte) & 0x08 ? 1 : 0), \
    ((byte) & 0x04 ? 1 : 0), \
    ((byte) & 0x02 ? 1 : 0), \
    ((byte) & 0x01 ? 1 : 0)

int main(void)
{
    int x = 0x1248642;
    short sx = (short) x;  /* int -> short */
    int y = sx;            /* short -> int */

    printf("%d\n", x);
    printf("%hu\n", sx);
    printf("%d\n", y);

    printf("x: "BYTETOBINARYPATTERN" "BYTETOBINARYPATTERN" "BYTETOBINARYPATTERN" "BYTETOBINARYPATTERN"\n",
        BYTETOBINARY(x >> 24), BYTETOBINARY(x >> 16), BYTETOBINARY(x >> 8), BYTETOBINARY(x));
    printf("sx: "BYTETOBINARYPATTERN" "BYTETOBINARYPATTERN"\n",
        BYTETOBINARY(y >> 8), BYTETOBINARY(y));
    printf("y: "BYTETOBINARYPATTERN" "BYTETOBINARYPATTERN" "BYTETOBINARYPATTERN" "BYTETOBINARYPATTERN"\n",
        BYTETOBINARY(y >> 24), BYTETOBINARY(y >> 16), BYTETOBINARY(y >> 8), BYTETOBINARY(y));

    return 0;
}
Output:
19170882
34370
-31166
x: 00000001 00100100 10000110 01000010
sx: 10000110 01000010
y: 11111111 11111111 10000110 01000010
As you can see, int -> short yields the lower 16 bits, as expected.
Casting short back to int yields the short with the 16 high bits set. However, I suspect this is implementation-specific, or even undefined, behavior: you're essentially interpreting 16 bits of memory as an integer, which reads 16 extra bits of whatever rubbish happens to be there (or all 1s, if the compiler is nice and wants to help you find bugs quicker).
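To make the suspicion concrete: if the conversion really were a raw reinterpretation of memory, it could be emulated with memcpy. Below is a minimal sketch of my own (not part of the program above) contrasting the two, assuming a little-endian machine like the Pi; the names converted and reinterpreted are invented for this illustration:

#include <stdio.h>
#include <string.h>

int main(void)
{
    short sx = (short) 0x8642;  /* -31166 as a 16-bit two's-complement short */

    int converted = sx;         /* the short -> int conversion in question */

    int reinterpreted = 0;      /* emulate "reading 16 bits of memory" */
    memcpy(&reinterpreted, &sx, sizeof sx);  /* little-endian assumption */

    printf("%d\n", converted);      /* -31166: high bits match the sign bit */
    printf("%d\n", reinterpreted);  /* 34370: high bits are the zeros we put there */
    return 0;
}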
I think it should be safe to do the following:
int y = 0x0000FFFF & sx;
Obviously you won't get back the lost bits, but this will guarantee that the high bits are properly zeroed.
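For example (reusing sx from the program above; y2 is a name added just for this snippet):

int y2 = 0x0000FFFF & sx;  /* sx is promoted to int before the mask is applied */
printf("%d\n", y2);        /* prints 34370; bits 16-31 are forced to zero */

Note that sx is first promoted to int (which is where any sign extension would happen), and the mask then clears the upper 16 bits, so the result is always the unsigned 16-bit value of sx.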
If anyone can verify the short -> int high-bit behavior with an authoritative reference, that would be appreciated.
Note: Binary macro adapted from this answer.