Nice question. As others said, integer constants are int by default, so the expression assigned to a is evaluated as int + int and overflows before the result is stored. I tried to reproduce this, and extended it a bit to put the number into a long long variable first and then add the 1 to it, as in the C example below:
$ cat test.c
#include <stdio.h>
int main(void) {
long long a, b, c;
a = 2147483647 + 1;   /* int + int: overflows, the wrapped value is then stored */
b = 2147483648;       /* constant does not fit in int; see the C90 warning below */
c = 2147483647;
c = c + 1;            /* long long + int: promoted to long long, no overflow */
printf("%lld\n", a);
printf("%lld\n", b);
printf("%lld\n", c);
}
By the way, the compiler does warn about the overflow, and you should normally compile production code with -Wall -Werror so that warnings like this become hard errors:
$ gcc -m64 test.c -o test
test.c: In function 'main':
test.c:8:16: warning: integer overflow in expression [-Woverflow]
a = 2147483647 + 1;
^
Finally, the test results are as expected (int overflow in the first case, long long arithmetic in the second and third):
$ ./test
-2147483648
2147483648
2147483648
Another gcc version goes one step further and also flags the constant on line 9:
test.c: In function ‘main’:
test.c:8:16: warning: integer overflow in expression [-Woverflow]
a = 2147483647 + 1;
^
test.c:9:1: warning: this decimal constant is unsigned only in ISO C90
b = 2147483648;
^
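To make the first assignment behave like the other two, force the addition itself to happen in long long, either by casting one operand or by giving the constant an LL suffix. A minimal sketch (same test.c, only the assignment to a changed):

a = (long long)2147483647 + 1;   /* cast promotes the addition to long long */
a = 2147483647LL + 1;            /* equivalent: LL suffix makes the constant long long */

Either form prints 2147483648 and produces no -Woverflow warning, because the arithmetic is now done in a 64-bit type.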
Note also that the exact widths of int, long, and related types are architecture- and compiler-dependent, so their bit length can vary between platforms.
For predictably sized types you are better off with int64_t, uint32_t and so on, defined in <stdint.h> by any modern compiler, so that whatever bitness your application is built for, the data types stay the same. Printing and scanning such values is handled by the format macros from <inttypes.h>, like PRIu64 and friends.
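As a small illustration of the fixed-width types and the matching format macros (a sketch, assuming a C99-capable compiler that ships <stdint.h> and <inttypes.h>):

#include <stdio.h>
#include <stdint.h>     /* int64_t, INT64_C */
#include <inttypes.h>   /* PRId64 format macro */

int main(void) {
    int64_t big = INT64_C(2147483647) + 1;   /* 64-bit arithmetic on every platform */
    printf("%" PRId64 "\n", big);            /* prints 2147483648 */
    return 0;
}

Here the arithmetic and the printf format stay correct regardless of whether the target's native int or long happens to be 32 or 64 bits wide.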