Question
After reading the question "32 bit unsigned multiply on 64 bit causing undefined behavior?" here on Stack Overflow, I began to wonder whether typical arithmetic operations on small unsigned types could lead to undefined behavior under the C99 standard.
For example, take the following code:
#include <limits.h>

int main(void)
{
    unsigned char x = UCHAR_MAX;  /* largest value an unsigned char can hold */
    unsigned char y = x + 1;      /* the line in question */
    return 0;
}
The x variable is initialized to the maximum value of the unsigned char type. The next line is the issue: the value of x + 1 is greater than UCHAR_MAX and so cannot be stored in the unsigned char variable y.
I believe the following is what actually occurs.
- The variable x is first promoted to type int (section 6.3.1.1/2), then x + 1 is evaluated as an int; the sizeof sketch below makes this visible.
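As an aside (not part of the original question), a minimal sketch showing the promotion on a typical hosted implementation: the expression x + 1 has type int, so sizeof reports int's size rather than unsigned char's.

#include <stdio.h>

int main(void)
{
    unsigned char x = 0;
    /* After integer promotion, x + 1 has type int, so sizeof
       reports the size of int, not of unsigned char. */
    printf("sizeof x       = %zu\n", sizeof x);
    printf("sizeof (x + 1) = %zu\n", sizeof (x + 1));
    return 0;
}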
Suppose there is an implementation where INT_MAX and UCHAR_MAX have the same value: x + 1 would then result in a signed integer overflow. Does this mean that computing x + 1, despite x having an unsigned integer type, can lead to undefined behavior due to a possible signed integer overflow?
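One common defensive pattern (an addition here, not something from the original question): using a 1u operand makes the usual arithmetic conversions carry the addition out in unsigned int, whose wrap-around is well-defined (C99 6.2.5/9), and converting the result back to unsigned char is likewise well-defined modulo UCHAR_MAX + 1.

#include <limits.h>

int main(void)
{
    unsigned char x = UCHAR_MAX;
    /* x is converted to unsigned int to match the 1u operand, so the
       addition wraps in unsigned int instead of overflowing an int. */
    unsigned char y = (unsigned char)(x + 1u);
    (void)y;  /* y is 0 on every conforming implementation */
    return 0;
}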
Answer 1:
By my reading of the standard, an implementation that used a 15-bit char could legally store an int as a 15-bit magnitude and use a second 15-bit word to hold the sign along with 14 padding bits; in that case, an unsigned char would hold values 0 to 32,767 and an int would hold values from -32,767 to +32,767. Adding 1 to (unsigned char)32767 would then indeed be undefined behavior, since the operand is promoted to int and the addition overflows. A similar situation could arise with any larger char size if 32,767 were replaced with UCHAR_MAX.
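A hedged sketch, grounded only in the promotion rule itself: whether unsigned char promotes to int or to unsigned int on a given implementation comes down to whether int can represent every unsigned char value, which can be probed by comparing UCHAR_MAX against INT_MAX.

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* 6.3.1.1/2: unsigned char promotes to int only if int can
       represent all of its values, i.e. UCHAR_MAX <= INT_MAX. */
    if ((unsigned long)UCHAR_MAX <= (unsigned long)INT_MAX)
        puts("unsigned char promotes to int; x + 1 is signed arithmetic");
    else
        puts("unsigned char promotes to unsigned int; x + 1 wraps safely");
    return 0;
}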
Such an implementation is unlikely in practice, however; the more pressing real-world hazard remains the unsigned integer multiplication issue alluded to in the other post.
Source: https://stackoverflow.com/questions/27004694/can-unsigned-integer-incrementation-lead-to-undefined-defined-behavior