int:
The 32-bit int data type can hold integer values in the range of −2,147,483,648 to 2,147,483,647. You may also refer to this data type as signed int or simply signed.
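If you want to verify the limits on your own platform, <limits.h> exposes them. A minimal sketch (the width of int is implementation-defined, though 32 bits is typical on modern desktop and mobile targets):

#include <limits.h>
#include <stdio.h>

int main(void) {
    /* On a platform where int is 32 bits, these print
       -2147483648 and 2147483647. */
    printf("INT_MIN = %d\n", INT_MIN);
    printf("INT_MAX = %d\n", INT_MAX);
    printf("int is %zu bits wide here\n", sizeof(int) * CHAR_BIT);
    return 0;
}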
The internal representation of int and unsigned int is the same. Therefore, if you pass the same format string to printf, the two will be printed the same way.
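For example, the sketch below stores the same bit pattern in an int and an unsigned int and prints both with the same conversion specifier. Strictly speaking, mismatching the specifier and the argument type is not sanctioned by the standard, but on typical implementations you will see identical output on each row:

#include <stdio.h>

int main(void) {
    int s = 0xFFFFFFFF;            /* typically ends up as -1 */
    unsigned int u = 0xFFFFFFFF;   /* 4294967295 */

    /* The specifier, not the declared type, decides how the bits
       are interpreted (technically undefined for the mismatched one). */
    printf("%d %d\n", s, u);   /* -1 -1 */
    printf("%u %u\n", s, u);   /* 4294967295 4294967295 */
    return 0;
}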
However, there are differences when you compare them. Consider:
int x = 0x7FFFFFFF;
int y = 0xFFFFFFFF;   // typically ends up as -1 (implementation-defined conversion)

x < y                                  // false
x > y                                  // true
(unsigned int) x < (unsigned int) y    // true
(unsigned int) x > (unsigned int) y    // false
This can also be a caveat: when you compare a signed and an unsigned integer of the same width, the signed operand is implicitly converted to unsigned so that the types match.
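To make that concrete, here is a minimal sketch (plain standard C, assuming the usual 32-bit unsigned int): when an int is compared against an unsigned int, the int is converted to unsigned first, so a negative value suddenly compares as a huge one.

#include <stdio.h>

int main(void) {
    int s = -1;
    unsigned int u = 1;

    if (s < u)   /* s is converted to unsigned: 0xFFFFFFFF < 1 is false */
        printf("-1 is less than 1u\n");
    else
        printf("surprise: -1 is NOT less than 1u here\n");
    return 0;
}

Most compilers will warn about this comparison (e.g. -Wsign-compare with gcc/clang), which is a good hint that an implicit conversion is happening.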
Hehe. You have an implicit cast here, because you're telling printf what type to expect.
Try this on for size instead:
#include <stdio.h>

int main(void) {
    unsigned int x = 0xFFFFFFFF;
    int y = 0xFFFFFFFF;

    if (x < 0)            /* always false: x is unsigned */
        printf("one\n");
    else
        printf("two\n");

    if (y < 0)            /* typically true: y holds -1 on two's-complement systems */
        printf("three\n");
    else
        printf("four\n");

    return 0;
}
There is no difference between the two in how they are stored in memory or in registers; there is no signed or unsigned version of an int register, and no sign information is stored alongside the value. The difference only becomes relevant when you perform maths operations: the CPU has signed and unsigned versions of some of its maths instructions, and the declared signedness tells the compiler which version to use.
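As a quick illustration of that last point, here is a small sketch (which instructions actually get emitted depends on your compiler and target): the same bit pattern gives different results once you ask for signed versus unsigned arithmetic.

#include <stdio.h>

int main(void) {
    unsigned int u = 0xFFFFFFFF;   /* same bit pattern in both variables */
    int s = -1;                    /* 0xFFFFFFFF on two's-complement machines */

    /* Division: unsigned divide vs. signed divide. */
    printf("%u\n", u / 2);    /* 2147483647 */
    printf("%d\n", s / 2);    /* 0 */

    /* Right shift: logical for unsigned, arithmetic (typically) for signed;
       shifting a negative value right is implementation-defined. */
    printf("%u\n", u >> 1);   /* 2147483647 */
    printf("%d\n", s >> 1);   /* usually -1 */
    return 0;
}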