What's the difference between “int” and “int_fast16_t”?

灰色年华 2020-12-05 02:18

As I understand it, the C specification says that type int is supposed to be the most efficient type on the target platform that contains at least 16 bits.


7 Answers
  •  不知归路
    2020-12-05 02:39

    From the C99 Rationale, 7.8 Format conversion of integer types <inttypes.h> (a document that accompanies the Standard), emphasis mine:

    C89 specifies that the language should support four signed and unsigned integer data types, char, short, int and long, but places very little requirement on their size other than that int and short be at least 16 bits and long be at least as long as int and not smaller than 32 bits. For 16-bit systems, most implementations assign 8, 16, 16 and 32 bits to char, short, int, and long, respectively. For 32-bit systems, the common practice is to assign 8, 16, 32 and 32 bits to these types. This difference in int size can create some problems for users who migrate from one system to another which assigns different sizes to integer types, because Standard C’s integer promotion rule can produce silent changes unexpectedly. The need for defining an extended integer type increased with the introduction of 64-bit systems.

    The purpose of <inttypes.h> is to provide a set of integer types whose definitions are consistent across machines and independent of operating systems and other implementation idiosyncrasies. It defines, via typedef, integer types of various sizes. Implementations are free to typedef them as Standard C integer types or extensions that they support. Consistent use of this header will greatly increase the portability of a user’s program across platforms.
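    To make the "silent changes" the Rationale warns about concrete, here is a minimal sketch (the variable name is mine, and the commentary assumes the usual arithmetic conversion rules) whose output differs between an implementation with 16-bit int and one with 32-bit int:

        #include <stdio.h>

        int main(void)
        {
            unsigned short a = 1;

            /* Integer promotion: if int is 16 bits wide, unsigned short
               does not fit in int, so a is promoted to unsigned int and
               -1 is converted to UINT_MAX before the comparison, which
               therefore prints 0. If int is 32 bits wide, a is promoted
               to signed int, the comparison is signed, and this prints 1. */
            printf("%d\n", a > -1);
            return 0;
        }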

    The main difference between int and int_fast16_t is that the latter is likely to be free of these "implementation idiosyncrasies". You may think of it as something like:

    I don't care about current OS/implementation "politics" of int size. Just give me whatever the fastest signed integer type with at least 16 bits is.
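    As an illustration (the exact widths are implementation-defined; the figures in the comment are what glibc on x86-64 happens to choose), the following sketch prints the sizes a given implementation assigned to the two types, using the PRIdFAST16 macro from <inttypes.h> for portable formatting:

        #include <stdio.h>
        #include <stdint.h>
        #include <inttypes.h>

        int main(void)
        {
            /* On x86-64 glibc, int is 4 bytes while int_fast16_t is
               typedef'd to an 8-byte type; other implementations may
               make the two types identical. */
            printf("int:          %zu bytes\n", sizeof(int));
            printf("int_fast16_t: %zu bytes\n", sizeof(int_fast16_t));

            int_fast16_t x = 12345;
            printf("x = %" PRIdFAST16 "\n", x);
            return 0;
        }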
