What type-conversions are happening?


Question


#include <stdio.h>

int main()
{
    int x = -13701;
    unsigned int y = 3;
    signed short z = x / y;

    printf("z = %d\n", z);

    return 0;
}

I would expect the answer to be -4567. I am getting "z = 17278". Why does a promotion of these numbers result in 17278?

I executed this in Code Pad.


Answer 1:


The hidden type conversions are:

signed short z = (signed short) (((unsigned int) x) / y);

When you mix signed and unsigned types, the unsigned one wins. x is converted to unsigned int, divided by 3, and the result is then converted down to (signed) short. With 32-bit integers:

(unsigned) -13701         == (unsigned) 0xFFFFCA7B // Bit pattern
(unsigned) 0xFFFFCA7B     == (unsigned) 4294953595 // Re-interpret as unsigned
(unsigned) 4294953595 / 3 == (unsigned) 1431651198 // Divide by 3
(unsigned) 1431651198     == (unsigned) 0x5555437E // Bit pattern of that result
(short) 0x5555437E        == (short) 0x437E        // Strip high 16 bits
(short) 0x437E            == (short) 17278         // Re-interpret as short

By the way, the signed keyword is unnecessary: signed short is just a longer way of saying short. The only type that needs an explicit signed is char, which can be signed or unsigned depending on the platform; all the other integer types are signed by default.




Answer 2:


Short answer: the division first promotes x to unsigned int. Only then is the result converted back to a signed short.

Long answer: read this SO thread.




Answer 3:


The problem comes from the unsigned int y. Indeed, x / y is computed as unsigned. It works with:

#include <stdio.h>

int main()
{
    int x = -13701;
    signed int y = 3;
    signed short z = x / y;

    printf("z = %d\n", z);

    return 0;
}



Answer 4:


Every time you mix "large" signed and unsigned values in additive and multiplicative arithmetic operations, the unsigned type "wins" and the evaluation is performed in the domain of the unsigned type ("large" means int and larger). If your original signed value was negative, it will first be converted to a positive unsigned value in accordance with the rules of signed-to-unsigned conversion. In your case, -13701 will turn into UINT_MAX + 1 - 13701, and that result will be used as the dividend.

Note that signed-to-unsigned conversion on a typical 32-bit int platform produces the unsigned value 4294953595. After division by 3 you get 1431651198. This value is too large to fit in a short object on a platform with a 16-bit short type, and an attempt to force it there results in implementation-defined behavior. So, if your platform matches these assumptions, the "meaningless" 17278 you are getting is nothing more than a specific manifestation of that implementation-defined behavior. It is even possible that, if you compiled your code with overflow checks enabled (if your compiler supports them), it would trap on the assignment.



Source: https://stackoverflow.com/questions/3779901/what-type-conversions-are-happening
