What does the compiler do when casting integer constants?


Question


Using the following macro:

#define MIN_SWORD (signed int) 0x8000

In e.g. the following expression:

signed long s32;
if (s32 < (signed long)MIN_SWORD)...

is expected to do the following check:

if (s32 < -32768)

On some compilers this seems to work fine, but on other compilers the expression is evaluated as:

if (s32 < 32768)

My question: how is an ANSI C compiler supposed to evaluate the expression (signed long) (signed int) 0x8000?

It seems that on some compilers the cast to (signed int) does not cause the (expected) conversion of the positive constant 0x8000 to the minimum negative value of a signed int when the expression is afterwards cast to the wider type signed long. In other words, the evaluated constant is not -32768L but 32768L.

Is this behavior perhaps undefined by ANSI C?
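
For reference, here is a minimal test program (not part of the original question) that shows what the macro widens to on a given compiler. On a target where int is 16 bits the printed value is implementation-defined, as the answer below explains; where int is 32 bits, 0x8000 already has type int and the program simply prints 32768:

#include <stdio.h>

#define MIN_SWORD (signed int) 0x8000

int main(void)
{
    /* Widen exactly as in the original if-condition and print the
       result: -32768 or 32768, depending on the implementation. */
    signed long widened = (signed long) MIN_SWORD;
    printf("%ld\n", widened);
    return 0;
}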


Answer 1:


If an int is 16-bit on your platform, then the type of 0x8000 is unsigned int (see 6.4.4 p.5 of the standard). Converting to a signed int is implementation-defined if the value cannot be represented (see 6.3.1.3 p.3). So the behaviour of your code is implementation-defined.

Having said that, in practice, I would've assumed that this should always do what you "expect". What compiler is this?
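
One portable way around the problem (an editorial sketch, assuming a two's-complement target where -32768 is representable) is to spell the constant as an arithmetic expression of in-range values rather than as a cast of 0x8000. This is the same trick <limits.h> implementations commonly use for INT_MIN, since the literal 32768 does not fit in a 16-bit signed int:

/* Both operands fit in a 16-bit signed int, so no
   implementation-defined conversion is involved; the result is
   -32768 and widens to -32768L when cast to signed long. */
#define MIN_SWORD (-32767 - 1)

With this definition, the comparison s32 < (signed long)MIN_SWORD behaves the same on every conforming compiler whose int is at least 16 bits.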



Source: https://stackoverflow.com/questions/4934387/what-does-the-compiler-at-casting-integer-constants
