The output turns out to be the 32-bit two's complement representation of 128, which is 4294967168. How?
#include <stdio.h>

int main(void)
{
    char a;
    a = 128;           /* 128 doesn't fit in a signed 8-bit char */
    if (a == -128)     /* true on typical two's-complement platforms */
        printf("%u\n", a);
    return 0;
}
The sequence of steps that got you there is something like this:
1. You assign 128 to a char. On your implementation, char is a signed char and has a maximum value of 127, so 128 overflows. The value wraps around to the bit pattern 0x80. The machine uses two's-complement math, so (int8_t)0x80 represents (int8_t)-128.
2. char promotes to int in many contexts, including variadic arguments to functions such as printf(), which aren't bound to a prototype and still use the old argument-promotion rules of K&R C instead.
3. int is 32 bits wide and also two's-complement, so (int)-128 sign-extends to 0xFFFFFF80.
4. When you call printf("%u", x), the runtime interprets the int argument as an unsigned int.
5. As an unsigned int, 0xFFFFFF80 represents 4,294,967,168.
6. The "%u\n" format specifier prints this out without commas (or other separators) followed by a newline.

This is all legal, but so are many other possible results. The code is buggy and not portable.
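If it helps to watch the intermediate values, you can reproduce the chain of conversions with explicit casts. This is a minimal sketch, assuming a signed 8-bit char and a 32-bit two's-complement int as described above; the variable names are purely illustrative:

#include <stdio.h>

int main(void)
{
    char a = 128;              /* implementation-defined: wraps to -128 on typical platforms */
    int promoted = a;          /* default promotion sign-extends the value                   */
    unsigned int u = (unsigned int)promoted;  /* same bits, reinterpreted as unsigned        */

    printf("promoted value:  %d\n", promoted);                   /* -128       */
    printf("promoted bits:   0x%08X\n", (unsigned int)promoted); /* 0xFFFFFF80 */
    printf("as unsigned int: %u\n", u);                          /* 4294967168 */
    return 0;
}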
Make sure you don’t overflow the range of your type! (Or if that’s unavoidable, overflow for unsigned scalars is defined as modular arithmetic, so it’s better-behaved.) The workaround here is to use unsigned char, which has a range from 0 to (at least) 255, instead of char.
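For example, here is one way the fixed program might look. The cast keeps the printf argument matched to %u after the default promotion to int; C99's %hhu length modifier would be another way to print an unsigned char directly:

#include <stdio.h>

int main(void)
{
    unsigned char a = 128;   /* fits: unsigned char holds at least 0..255 */

    /* a promotes to int (value 128); cast so the argument matches %u. */
    printf("%u\n", (unsigned int)a);   /* prints 128 */
    return 0;
}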