You can predict signed int overflow, but attempting to detect it after the summation is too late: if the addition overflows, the behaviour is already undefined, so it's not possible to avoid undefined behaviour by testing for it afterwards. You have to test for possible overflow before you perform a signed addition.
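For illustration, a post-hoc test like this hypothetical unsafe_add (not part of the answer's code) does not work; once the signed addition has overflowed, the compiler is allowed to assume it didn't happen and may delete the check entirely:

/* BROKEN: tests for overflow after the fact. The addition itself
   is undefined behaviour on overflow, so the compiler may assume
   it never overflows and optimise the check away. */
int unsafe_add(int a, int b)
{
    int sum = a + b;            /* UB here if it overflows */
    if (b > 0 && sum < a) {     /* may be removed by the optimiser */
        /* never reliably reached */
    }
    return sum;
}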
If it were me, I'd do something like this:
#include <limits.h>

int safe_add(int a, int b)
{
    if (a >= 0) {
        if (b > (INT_MAX - a)) {
            /* handle overflow: report/abort here; do not
               fall through to the addition below */
        }
    } else {
        if (b < (INT_MIN - a)) {
            /* handle underflow: likewise, do not fall through */
        }
    }
    return a + b;
}
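A minimal (hypothetical) caller might look like this, assuming the overflow branches above report the error rather than falling through to the addition:

#include <stdio.h>
#include <limits.h>

int main(void)
{
    printf("%d\n", safe_add(2, 3));   /* fine: prints 5 */
    safe_add(INT_MAX, 1);             /* takes the overflow branch */
    return 0;
}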
Refer to this paper for more information. The same paper also explains why unsigned integer overflow is not undefined behaviour, and what the portability issues can be.
EDIT:
GCC and other compilers have some provisions for detecting overflow. For example, GCC
has the following built-in functions, which perform simple arithmetic operations together with checking whether the operations overflowed:
bool __builtin_add_overflow (type1 a, type2 b, type3 *res)
bool __builtin_sadd_overflow (int a, int b, int *res)
bool __builtin_saddl_overflow (long int a, long int b, long int *res)
bool __builtin_saddll_overflow (long long int a, long long int b, long long int *res)
bool __builtin_uadd_overflow (unsigned int a, unsigned int b, unsigned int *res)
bool __builtin_uaddl_overflow (unsigned long int a, unsigned long int b, unsigned long int *res)
bool __builtin_uaddll_overflow (unsigned long long int a, unsigned long long int b, unsigned long long int *res)
Each of these returns true if the operation overflowed; the truncated result is stored in *res in either case. Visit this link.
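For example, here is a minimal sketch using __builtin_add_overflow (this assumes GCC 5 or later; Clang also supports these builtins):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    int res;
    /* returns true if the mathematically correct sum does not fit in res */
    if (__builtin_add_overflow(INT_MAX, 1, &res))
        puts("overflow detected");
    else
        printf("sum = %d\n", res);
    return 0;
}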
EDIT:
Regarding the question asked by someone:

I think it would be nice and informative to explain why signed int overflow is undefined, whereas unsigned apparently isn't.
The answer depends upon the implementation of the compiler: most C implementations (compilers) just used whatever overflow behaviour was easiest to implement with the integer representation they used.
In practice, the representation of signed values may differ according to the implementation: one's complement, two's complement, or sign-magnitude. For an unsigned type there is no reason for the standard to allow variation, because there is only one obvious binary representation (the standard only allows binary representation).
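For illustration, unsigned arithmetic is defined to wrap modulo 2^N (where N is the number of value bits), so this sketch is well-defined C, whereas the same operation on a signed int would be undefined behaviour:

#include <stdio.h>
#include <limits.h>

int main(void)
{
    unsigned int u = UINT_MAX;
    u = u + 1u;            /* well-defined: wraps modulo UINT_MAX + 1 */
    printf("%u\n", u);     /* prints 0 */
    return 0;
}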