Why is char's signedness not defined in C?


"Plain" char having unspecified signed-ness allows compilers to select whichever representation is more efficient for the target architecture: on some architectures, zero extending a one-byte value to the size of "int" requires less operations (thus making plain char 'unsigned'), while on others the instruction set makes sign-extending more natural, and plain char gets implemented as signed.

Perhaps historically some implementations made "char" signed and some made it unsigned, so to stay compatible with both, the standard couldn't define it as one or the other.

In the good old days when C was defined, the character world was 7-bit, so the sign bit could be used for other things (like EOF).
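This is why getchar() returns an int rather than a char: EOF is a negative value (typically -1) kept outside the range of valid characters. A short sketch of the classic pitfall that follows from storing its result in a plain char:

```c
#include <stdio.h>

int main(void)
{
    int c;                          /* must be int, not char */

    while ((c = getchar()) != EOF)  /* EOF stays distinct from every byte */
        putchar(c);

    /* With "char c" instead:
     *  - if char is unsigned, c can never compare equal to EOF,
     *    so the loop never ends;
     *  - if char is signed, a legitimate byte 0xFF is mistaken for EOF. */
    return 0;
}
```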

On some machines, a signed char would be too small to hold all the characters in the C character set (letters, digits, standard punctuation, etc.) On such machines, 'char' must be unsigned. On other machines, an unsigned char can hold values larger than a signed int (since char and int are the same size). On those machines, 'char' must be signed.
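Portable code doesn't have to guess which choice a given machine forced; <limits.h> exposes it. A minimal sketch:

```c
#include <stdio.h>
#include <limits.h>

int main(void)
{
    printf("CHAR_BIT = %d\n", CHAR_BIT);
    printf("CHAR_MIN = %d\n", CHAR_MIN);   /* 0 if char is unsigned */
    printf("CHAR_MAX = %d\n", CHAR_MAX);   /* e.g. 127 or 255 for 8-bit char */
    printf("char is %s\n", CHAR_MIN == 0 ? "unsigned" : "signed");
    return 0;
}
```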

I suppose (off the top of my head) that their thinking was along the following lines:

If you care about the sign of char (using it as a byte) you should explicitly choose signed or unsigned char.
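A minimal sketch of that advice: when a byte's numeric value matters, say so explicitly instead of relying on plain char. The variable names are just illustrative.

```c
#include <stdio.h>

int main(void)
{
    unsigned char raw   = 0xFF;  /* always 255: well-defined range      */
    signed char   delta = -1;    /* always -1: sign is guaranteed       */
    char          text  = 'A';   /* plain char: fine for character data */

    printf("%d %d %c\n", raw, delta, text);
    return 0;
}
```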
