Truncating an int to char - is it defined?

说谎  2020-12-19 11:04
unsigned char a, b;
b = something();
a = ~b;

A static analyzer complained of truncation in the last line, presumably because b is promoted to int before the ~ is applied, so the int result of ~b is truncated back to unsigned char when it is assigned to a. Is that truncation well-defined?

5 Answers
  •  轻奢々 (OP)  2020-12-19 11:56

    The C standard specifies this for unsigned types:

    A computation involving unsigned operands can never overflow, because a result that cannot be represented by the resulting unsigned integer type is reduced modulo the number that is one greater than the largest value that can be represented by the resulting type.

    In this case, if your unsigned char is 8 bits, it means that the result will be reduced modulo 256, which means that if b was 0x55, a will indeed end up as 0xAA.
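    For concreteness, here is a minimal sketch of that reduction, assuming an 8-bit unsigned char and the usual two's-complement int:

    #include <assert.h>

    int main(void)
    {
        unsigned char b = 0x55;
        int promoted = ~b;     /* b is promoted to int, so ~b is -86 on a two's-complement int */
        unsigned char a = ~b;  /* the int value is reduced modulo UCHAR_MAX + 1, i.e. 256 */

        assert(promoted == -86);
        assert(a == 0xAA);     /* -86 mod 256 == 170 == 0xAA */
        return 0;
    }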

    But note that if unsigned char is wider than 8 bits (which is perfectly legal), you will get a different result. To ensure that you will portably get 0xAA as the result, you can use:

    a = ~b & 0xff;
    

    (The bitwise and should be optimised out on platforms where unsigned char is 8 bits).
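    A small usage sketch of the portable form (something() here is just a stand-in for whatever the question's function returns):

    #include <stdio.h>

    static unsigned char something(void)
    {
        return 0x55;  /* placeholder value for illustration */
    }

    int main(void)
    {
        unsigned char b = something();
        unsigned char a = ~b & 0xff;  /* keep only the low 8 bits of the complement */

        printf("a = 0x%02X\n", (unsigned)a);  /* prints a = 0xAA */
        return 0;
    }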

    Note also that if you use a signed type, the result is implementation-defined.
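    For example (a sketch assuming an 8-bit signed char; the -86 mentioned in the comment is what typical two's-complement implementations produce, not something the standard guarantees):

    #include <stdio.h>

    int main(void)
    {
        int value = 0xAA;        /* 170 does not fit in an 8-bit signed char */
        signed char sc = value;  /* implementation-defined result (or signal);
                                    commonly -86 on two's-complement machines */
        printf("sc = %d\n", sc);
        return 0;
    }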
