assigning 128 to char variable in c

不知归路 2021-01-16 19:21

The output comes out to be the 32-bit two's complement of 128, which is 4294967168. How?

#include <stdio.h>
int main()
{
    char a;
    a = 128;                /* overflows a signed 8-bit char */
    if (a == -128)          /* true here: 128 wrapped around to -128 */
        printf("%u\n", a);  /* prints 4294967168 */
    return 0;
}
3 Answers
  •  抹茶落季
    2021-01-16 19:52

    The sequence of steps that got you there is something like this:

    1. You assign 128 to a char.
    2. On your implementation, char is signed char and has a maximum value of 127, so 128 overflows.
    3. Your implementation interprets 128 as 0x80. It uses two’s-complement math, so (int8_t)0x80 represents (int8_t)-128.
    4. For historical reasons (relating to the instruction sets of the DEC PDP minicomputers on which C was originally developed), C promotes signed types narrower than int to int in many contexts. These include variadic arguments to functions such as printf(), which aren't checked against a prototype and still follow the old default argument promotions inherited from K&R C.
    5. On your implementation, int is 32 bits wide and also two’s-complement, so (int)-128 sign-extends to 0xFFFFFF80.
    6. When you make a call like printf("%u", x), the runtime interprets the int argument as an unsigned int.
    7. As an unsigned 32-bit integer, 0xFFFFFF80 represents 4,294,967,168.
    8. The "%u\n" format specifier prints this out without commas (or other separators) followed by a newline.
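    The chain of conversions above can be sketched as a small program. This is a sketch assuming a typical implementation where char is signed, 8 bits wide, and int is a 32-bit two's-complement type; on other implementations the intermediate values may differ.

    ```c
    #include <stdio.h>

    int main(void)
    {
        /* Steps 1-3: 128 doesn't fit in a signed 8-bit char; on a
         * two's-complement implementation the bit pattern 0x80 is
         * reinterpreted as -128. */
        char a = 128;
        printf("%d\n", a);                  /* -128 on such implementations */

        /* Steps 4-5: passing the char to a function (or assigning to int)
         * promotes it to int, sign-extending 0x80 to 0xFFFFFF80. */
        int promoted = a;
        printf("%X\n", (unsigned)promoted); /* FFFFFF80 */

        /* Steps 6-7: viewed as an unsigned 32-bit integer, that same
         * bit pattern reads as 4294967168. */
        printf("%u\n", (unsigned)promoted); /* 4294967168 */

        return 0;
    }
    ```
    
    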

    This is all legal (signed overflow on assignment is implementation-defined here), but many other results would be equally legal. The code is buggy and not portable.

    Make sure you don’t overflow the range of your type! (Or if that’s unavoidable, overflow for unsigned scalars is defined as modular arithmetic, so it’s better-behaved.) The workaround here is to use unsigned char, which has a range from 0 to (at least) 255, instead of char.
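    A minimal sketch of that workaround: with unsigned char, 128 is within range, so no wraparound or sign extension occurs and the value survives promotion intact.

    ```c
    #include <stdio.h>

    int main(void)
    {
        /* unsigned char holds at least 0..255, so 128 fits exactly. */
        unsigned char a = 128;

        /* Promotion to int preserves the value (zero-extension,
         * not sign-extension), so %u prints what was stored. */
        printf("%u\n", (unsigned)a); /* 128 */

        return 0;
    }
    ```
    
    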
