Let's say I have a variable i that comes from an external source:
int i = get_i();
Assuming i is INT_MIN, what happens when I negate it?
Platforms may choose to define the behavior, but the C Standard does not require them to guarantee anything about it. Historically, microcomputer compilers behaved fairly consistently as though -INT_MIN yielded INT_MIN, or in some cases a value that behaved like one larger than INT_MAX. More recently, however, it has become fashionable for compilers to instead reason backward from the negation and retroactively constrain the value of whatever was being negated. Thus, given:
int wowzers(int x)
{
    if (x != INT_MIN) printf("Not int min!");
    return -x;
}
a hyper-modern compiler may use the expression -x to infer that x cannot have been equal to INT_MIN when the preceding comparison was performed, and may therefore perform the printf unconditionally.
Incidentally, gcc 8.2 will use the undefined behavior of negating INT_MIN to "optimize" the following
int qq,rr;
void test(unsigned short q)
{
    for (int i=0; i<=q; i++)
    {
        qq = -2147483647-i;
        rr = qq;
        rr = -rr;
    }
}
into code that unconditionally stores -2147483647 to qq and 2147483647 to rr. Removing the rr=-rr line will make the code store -2147483647 or -2147483648 into both qq and rr, depending upon whether q is zero.