I have:

int a = 2147483647;
short b = (short)a;

and I get b = -1, whereas I expected the int32 value to be converted to int16.
Converting a value to a signed type, when the source value doesn't fit in the target type, yields an implementation-defined result. That means that any conforming compiler's documentation must document what that result is.
(This is unlike the behavior on overflow of an arithmetic operator. For example:
int overflow = INT_MAX + 1;
actually has undefined behavior. But in either case, you should be careful to write your code so it doesn't trigger this kind of problem.)
For many implementations, for both conversion and arithmetic, an overflow where the target is an N-bit type simply takes the N low-order bits of the correct result.
In your case, apparently int is 32 bits and short is 16 bits (those sizes can vary across implementations). 2147483647 is 0x7fffffff; the low-order 16 bits are 0xffff, which is (again, on your implementation) the representation of -1 in type short.
For conversion to unsigned types, the result is strictly defined by the standard; it takes the low-order N bits of the result. And for overflowing floating-point conversion (say, converting a very large double value to float), the behavior is undefined.
So far, this is all the same for C and C++. But just to add to the confusion, starting with the 1999 standard, an overflowing signed conversion is permitted to raise an implementation-defined signal. C++ doesn't have this, and I don't know of any compiler that actually does it.
"I expect to see some value and not -1."

-1 is "some value". Was there some specific value you expected?
Incidentally:
short b = (short)a;
The cast is unnecessary. Assignment, initialization, parameter passing, and return statements can convert values between any numeric types without a cast; the value is converted implicitly:
short b = a;