How does a C compiler interpret the "L" suffix, which denotes a long integer literal, in light of the automatic type-selection rules? The following code, when run on a 32-bit platform (32-bit long, 64-bit long long), seems to convert the expression "(0xffffffffL)" into the 64-bit integer 4294967295, not the 32-bit value -1.
Sample code:
#include <stdio.h>

int main(void)
{
    long long x = 10;
    long long y = (0xffffffffL);
    long long z = (long)(0xffffffffL);

    printf("long long x == %lld\n", x);
    printf("long long y == %lld\n", y);
    printf("long long z == %lld\n", z);
    printf("0xffffffffL == %ld\n", 0xffffffffL);

    if (x > (long)(0xffffffffL))
        printf("x > (long)(0xffffffffL)\n");
    else
        printf("x <= (long)(0xffffffffL)\n");

    if (x > (0xffffffffL))
        printf("x > (0xffffffffL)\n");
    else
        printf("x <= (0xffffffffL)\n");

    return 0;
}
Output (compiled with GCC 4.5.3 on a 32-bit Debian):
long long x == 10
long long y == 4294967295
long long z == -1
0xffffffffL == -1
x > (long)(0xffffffffL)
x <= (0xffffffffL)
It's a hexadecimal literal, so its type can be unsigned. It fits in unsigned long, so that's the type it gets. See section 6.4.4.1 of the standard:
The type of an integer constant is the first of the corresponding list in which its value can be represented.
where the list for hexadecimal literals with an L suffix is:
long, unsigned long, long long, unsigned long long
Since the value doesn't fit in a 32-bit signed long but does fit in a 32-bit unsigned long, that's what it becomes.
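A quick way to see which type the compiler actually picks is a small sketch using C11 _Generic (the TYPE_NAME macro is just an illustration, and this assumes a 32-bit long / 64-bit long long platform as in the question; note _Generic needs a newer compiler than the GCC 4.5.3 used above):

#include <stdio.h>

/* Illustrative macro: maps a constant to the name of its type. */
#define TYPE_NAME(x) _Generic((x),            \
    long:               "long",               \
    unsigned long:      "unsigned long",      \
    long long:          "long long",          \
    unsigned long long: "unsigned long long", \
    default:            "other")

int main(void)
{
    /* 0xffffffffL does not fit in a 32-bit long, so the hex list
       falls through to unsigned long. */
    printf("0xffffffffL  -> %s\n", TYPE_NAME(0xffffffffL));

    /* With the LL suffix the list starts at long long, where the
       value fits, so the constant stays signed. */
    printf("0xffffffffLL -> %s\n", TYPE_NAME(0xffffffffLL));
    return 0;
}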
The point is that the rules for determining the type of an integer literal differ depending on whether it is written as a decimal number or as a hexadecimal (or octal) number. A decimal literal is always signed unless suffixed with U; a hexadecimal or octal literal can also become unsigned if the signed type cannot hold the value.
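A minimal sketch of that difference, again assuming a 32-bit long and 64-bit long long as in the question: the same value written as a decimal literal stays signed (it moves up to long long), while the hexadecimal spelling becomes unsigned long, which changes how a comparison with -1 behaves.

#include <stdio.h>

int main(void)
{
    /* 4294967295L: the decimal list is long, long long; the value does
       not fit in a 32-bit long, so it becomes a signed long long and
       the comparison with -1 is a signed comparison. */
    if (4294967295L > -1)
        printf("decimal 4294967295L >  -1\n");   /* printed */

    /* 0xffffffffL: the hexadecimal list also contains unsigned long,
       where the value fits, so it becomes unsigned long; -1 is then
       converted to ULONG_MAX and the two compare equal. */
    if (0xffffffffL > -1)
        printf("hex     0xffffffffL >  -1\n");
    else
        printf("hex     0xffffffffL <= -1\n");   /* printed */
    return 0;
}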
Source: https://stackoverflow.com/questions/15510151/c-interpretation-of-hexadecimal-long-integer-literal-l