What defines the size of a type?

Posted by 落爺英雄遲暮 on 2021-01-27 05:49:44

Question


The ISO C standard says that:

sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long)

I am using GCC 8 on 64-bit Linux Mint 19.1, and the size of long int is 8 bytes.

I am also using an application built with GCC 7, which is likewise a 64-bit compiler, yet there the size of long int is 4 bytes. Does the compiler or the operating system define the size of a long int?
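For reference, a minimal program that reproduces the measurement (assuming a hosted C implementation; saved here as size.c for use below):

    #include <stdio.h>

    int main(void)
    {
        /* sizeof yields a size_t, printed with the %zu conversion specifier. */
        printf("sizeof(long int) = %zu\n", sizeof(long int));
        return 0;
    }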


Answer 1:


The compiler calls all the shots. The operating system just runs the resulting binary.

That being said, the compiler will normally make an executable the operating system can use, so there's some interplay here. Since things like the size of int don't really matter so long as they're consistent, you will see variation.

In other words, if the kernel expects long int to be 8 bytes because of how it was compiled, then you'll want to compile your code the same way; otherwise your binaries won't match the kernel's ABI, and none of the shared libraries will work.
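As an illustration that the compiler calls the shots: on many x86-64 Linux toolchains, GCC can target two different ABIs on the same machine, and the size of long follows the ABI, not the OS. Using the size.c program from the question above (the -m32 build assumes the 32-bit multilib packages are installed):

    $ gcc -m64 size.c && ./a.out    # typically: sizeof(long int) = 8  (LP64 ABI)
    $ gcc -m32 size.c && ./a.out    # typically: sizeof(long int) = 4  (ILP32 ABI)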




Answer 2:


The Application Binary Interface for an operating system/architecture specifies the sizes of basic types.

ABIs cover details such as:

  • a processor instruction set (with details like register file structure, stack organization, memory access types, ...)
  • the sizes, layouts, and alignments of basic data types that the processor can directly access
  • the calling convention, which controls how functions' arguments are passed and return values are retrieved; for example, whether all parameters are passed on the stack or some are passed in registers, which registers are used for which function parameters, and whether the first function parameter passed on the stack is pushed first or last onto the stack
  • how an application should make system calls to the operating system and, if the ABI specifies direct system calls rather than procedure calls to system call stubs, the system call numbers
  • and in the case of a complete operating system ABI, the binary format of object files, program libraries and so on.
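Concretely, the common data models show how different ABIs assign different sizes to the same C types (sizes in bytes):

    Data model   int   long   pointer   Typical platforms
    ILP32         4     4       4       32-bit Linux, 32-bit Windows
    LP64          4     8       8       64-bit Linux, macOS, BSD
    LLP64         4     4       8       64-bit Windows

This is why a 64-bit GCC on Linux (LP64) reports sizeof(long) == 8, while a compiler targeting a different 64-bit ABI can legitimately report 4.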



Answer 3:


This is left to the discretion of the implementation.

It's the implementation (compiler and standard library) that defines the size of long, int, and all other types.

As long as they fit the constraints given by the standard, the implementation can make all the decisions as to what sizes the types are (possibly with the exception of pointers).
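A sketch of what the standard does pin down, written as C11 static assertions; these should compile on any conforming implementation:

    #include <limits.h>

    /* Fixed by definition: sizeof(char) is 1, and a byte has at least 8 bits. */
    _Static_assert(sizeof(char) == 1, "sizeof(char) is 1 by definition");
    _Static_assert(CHAR_BIT >= 8, "a byte has at least 8 bits");

    /* The required minimum ranges translate into minimum widths in bits. */
    _Static_assert(sizeof(short) * CHAR_BIT >= 16, "short: at least 16 bits");
    _Static_assert(sizeof(int)   * CHAR_BIT >= 16, "int: at least 16 bits");
    _Static_assert(sizeof(long)  * CHAR_BIT >= 32, "long: at least 32 bits");
    _Static_assert(sizeof(long long) * CHAR_BIT >= 64, "long long: at least 64 bits");

Everything beyond these minimums is the implementation's choice.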




Answer 4:


TL;DR - the exact size is up to the compiler.


The Standard requires that a type be able to represent a minimum range of values - for example, an unsigned char must be able to represent at least the range [0..255], an int must be able to represent at least the range [-32767..32767], etc.

That minimum range defines a minimum number of bits - you need at least 16 bits to represent the range [-32767..32767] (some systems may use padding bits or parity bits that are part of the word, but not used to represent the value).
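A minimal sketch that prints the actual, implementation-defined limits from limits.h; the comments note the minimums the Standard guarantees:

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        printf("CHAR_BIT = %d\n", CHAR_BIT);     /* guaranteed >= 8 */
        printf("INT_MIN  = %d\n", INT_MIN);      /* guaranteed <= -32767 */
        printf("INT_MAX  = %d\n", INT_MAX);      /* guaranteed >= 32767 */
        printf("LONG_MAX = %ld\n", LONG_MAX);    /* guaranteed >= 2147483647 */
        return 0;
    }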

Other architectural considerations come into play as well - int is usually set to match the native word size. So on a 16-bit system, int would (usually) be 16 bits, while on a 32-bit system it would be 32 bits. Ultimately, it comes down to the compiler.

However, it's possible to have one compiler on a 32-bit system use a 16-bit int while another uses a 32-bit int. That led to a wasted afternoon back in the mid-90s: I had written some code that assumed a 32-bit int, and it worked fine under one compiler but broke the world under a different compiler on the same hardware.

So, lesson learned - never assume that a type can represent values outside of the minimum range guaranteed by the Standard. Either check the limits in limits.h and float.h to see whether the type is big enough, or use one of the fixed-width types from stdint.h (int32_t, uint8_t, etc.).
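For example, a minimal sketch of the stdint.h approach (the exact-width types are optional in the Standard, but present on virtually every mainstream platform):

    #include <inttypes.h>   /* PRId32 / PRIu8 printf format macros */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int32_t counter = INT32_MAX;   /* exactly 32 value bits, no padding */
        uint8_t flags   = UINT8_MAX;   /* exactly 8 bits, unsigned */

        printf("counter = %" PRId32 ", flags = %" PRIu8 "\n", counter, flags);
        printf("sizeof(int32_t) = %zu\n", sizeof(int32_t));  /* 4 when CHAR_BIT is 8 */
        return 0;
    }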



Source: https://stackoverflow.com/questions/56156513/what-defines-the-size-of-a-type
