Is the integer width relevant in bitfield declaration?

Submitted by 狂风中的少年 on 2019-12-12 18:54:56

Question


I was trying to find a reason why I should not write

struct bitfield {
  signed foo:4;
  unsigned bar:2;
};

instead of specifying verbosely

struct bitfield {
  signed int foo:4;   
  unsigned int bar:2; 
};

As the size of each member of the bitfield is explicitly specified after the colon, could there be any drawbacks?

Does it matter whether I use char, short, long, or long long? And must the specified number of bit-field bits always be less than or equal to the width of the declared type?


Found some related questions:

  • Bit-fields of type other than int?
  • What is the use of declaring different datatypes inside bitfields?

The answers range from

  • don't use any other type than (signed/unsigned) int or _Bool and
  • _Bool, signed int, unsigned int, or some other implementation-defined type. (C99 6.2.7.1 (4) )

In this context: what might this unspecified "some other implementation-defined type" be, and what other drawbacks could arise from my choice here?


Answer 1:


"Sometimes", and "Yes"

C99 requires that the width expression "not exceed the width of an object of the type that would be specified were the colon and expression omitted", so if too small a type is used, the code will either not compile or at least not be portable. See §6.7.2.1 (3).

Regarding the updated third question and the general "exactly what are the consequences?" issue, the things that could be affected are: portability, alignment, and padding. The standard gives clear specifications only for the first. Without bit-fields, it is usually possible to arrange alignment and padding by predicting what the compiler would do to generate optimally aligned values. Although it is not guaranteed, in some environments using something like short will save memory due to the reduced alignment and padding that result.

One possible approach to achieving the occasionally conflicting goals of exact layout and portability is to declare the in-memory data structures without bit fields, perhaps using <stdint.h> types. Then, if you want to use a bit field to decode something, assign the in-memory source object to a temporary variable that's a union of a bit field and a bit-specific type, or, deliberately violate the type punning rules by casting a pointer. (Linux does this all over the place.)

A better way is probably to just avoid bit fields.




Answer 2:


In both versions of your code, the width is equally explicit: it is the width of signed int and unsigned int. signed is simply an alias for signed int, and unsigned for unsigned int; lone signed and unsigned are not modifiers but type names in themselves. (One bit-field subtlety is worth knowing: a plain int bit-field may be treated as either signed or unsigned, at the implementation's discretion, so writing signed explicitly, as you already do, is the portable choice. See C99 §6.7.2.1.)



Source: https://stackoverflow.com/questions/10271265/is-the-integer-width-relevant-in-bitfield-declaration
