why is u8 u16 u32 u64 used instead of unsigned int in kernel programming

Submitted by 霸气de小男生 on 2020-01-02 00:27:05

Question


I see the u8, u16, u32, and u64 data types being used in kernel code, and I am wondering why there is a need to use u8, u16, u32, or u64 instead of unsigned int.


Answer 1:


Often, when working close to the hardware or when trying to control the size or format of a data structure, you need precise control over the size of your integers.

As for u8 vs uint8_t: this is simply because Linux predates <stdint.h> being available in C. It is technically a C99-ism, but in my experience it is available on most modern compilers even in their ANSI C / C89 modes.
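To illustrate the point about precise sizes, here is a minimal sketch (not taken from any real kernel header) that defines u8-style names in terms of the C99 <stdint.h> types and uses them to describe a fixed hardware-style layout. The struct `reg_layout` is a hypothetical example:

```c
#include <stdint.h>

/* Kernel-style short names built on C99's exactly-sized types.
 * These typedefs are illustrative, not the kernel's actual definitions. */
typedef uint8_t  u8;   /* exactly 8 bits on every platform */
typedef uint16_t u16;  /* exactly 16 bits */
typedef uint32_t u32;  /* exactly 32 bits */
typedef uint64_t u64;  /* exactly 64 bits */

/* 'unsigned int' is only guaranteed to be at least 16 bits by the C
 * standard, so a register layout described with it could change size
 * across targets. With fixed-width types the layout is unambiguous. */
struct reg_layout {
    u8  status;   /* offset 0 */
    u8  flags;    /* offset 1 */
    u16 count;    /* offset 2 */
    u32 address;  /* offset 4; whole struct is 8 bytes with natural alignment */
};
```

With natural alignment this struct occupies exactly 8 bytes, which is the kind of guarantee that matters when the bytes are interpreted by hardware or shared across the user/kernel boundary.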




Answer 2:


Adding my 10 cents to this answer:

u64 means an unsigned 64-bit value, so, depending on the architecture the code will run on or be compiled for, it must be defined differently in order to really be 64 bits long.

For instance, on an x86-64 Linux machine, unsigned long is 64 bits long, so u64 for that machine could be defined as follows:

typedef unsigned long u64;

The same applies to u32. On an x86-64 machine, unsigned int is 32 bits long, so u32 for that machine could be defined as follows:

typedef unsigned int u32;

You'll generally find the typedef declarations for these types in a types.h file corresponding to the architecture you're compiling your source for.
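Putting the two typedefs above together, a sketch of what such an architecture-specific types.h might contain on a 64-bit target where unsigned long is 64 bits (e.g. x86-64 Linux) could look like the following. The exact contents vary per architecture and kernel version; this is illustrative only, with compile-time checks added to show how the chosen base types can be verified:

```c
/* Hypothetical excerpt of an architecture's types.h for an LP64 target.
 * A 32-bit target would instead define u64 as 'unsigned long long'. */
typedef unsigned char  u8;
typedef unsigned short u16;
typedef unsigned int   u32;
typedef unsigned long  u64;

/* Compile-time checks (C11 _Static_assert) that the base types really
 * have the advertised widths on this target. */
_Static_assert(sizeof(u8)  == 1, "u8 must be 1 byte");
_Static_assert(sizeof(u16) == 2, "u16 must be 2 bytes");
_Static_assert(sizeof(u32) == 4, "u32 must be 4 bytes");
_Static_assert(sizeof(u64) == 8, "u64 must be 8 bytes");
```

The point of routing every architecture through its own types.h is that code using u64 never has to know which base type was picked; the name always means 64 unsigned bits.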



Source: https://stackoverflow.com/questions/30896489/why-is-u8-u16-u32-u64-used-instead-of-unsigned-int-in-kernel-programming
