I see u8, u16, u32, and u64 data types being used in kernel code, and I am wondering: why is there a need to use u8, u16, u32, or u64 rather than unsigned int?
Often, when working close to the hardware or when trying to control the size and layout of a data structure, you need precise control over the width of your integers.
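For example, here is a minimal sketch of the kind of situation where exact widths matter: a made-up on-the-wire packet header whose protocol fixes the exact width of every field, so a plain unsigned int (whose width varies by platform) would not do. The header layout and field names are hypothetical.

    #include <stdint.h>

    /* Hypothetical on-the-wire header: the protocol dictates the exact
     * width of every field. Inside the kernel these would be spelled
     * u8/u16/u32; in portable userspace code, the <stdint.h> names. */
    struct wire_header {
        uint8_t  version;   /* exactly 1 byte  */
        uint8_t  flags;     /* exactly 1 byte  */
        uint16_t length;    /* exactly 2 bytes */
        uint32_t sequence;  /* exactly 4 bytes */
    } __attribute__((packed));  /* no padding between fields */

    /* Compile-time check (C11) that the layout is the 8 bytes we expect. */
    _Static_assert(sizeof(struct wire_header) == 8, "bad header size");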
As for u8 vs uint8_t: this is simply because Linux predates <stdint.h>. The fixed-width typedefs are technically a C99-ism, though in my experience they are available on most modern compilers even in their ANSI C / C89 modes.
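In userspace code you would typically reach for the <stdint.h> names directly; the kernel's short names map straight onto them. A trivial, purely illustrative set of typedefs (not the kernel's own headers):

    #include <stdint.h>

    /* Illustrative userspace equivalents of the kernel's short names. */
    typedef uint8_t  u8;
    typedef uint16_t u16;
    typedef uint32_t u32;
    typedef uint64_t u64;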
Adding my 10 cents to this answer:
u64 means an unsigned 64-bit value, so, depending on the architecture the code will be compiled for and run on, it must be defined differently in order to really be 64 bits wide.
For instance, on an x86_64 machine an unsigned long is 64 bits wide, so u64 for that machine could be defined as follows:
typedef unsigned long u64;
The same applies to u32. On both 32-bit x86 and x86_64, an unsigned int is 32 bits wide, so u32 for those machines could be defined as follows:
typedef unsigned int u32;
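Putting the two together, here is a sketch of how such architecture-dependent definitions might look. This is illustrative only, not the actual kernel headers, which are more involved:

    /* Illustrative only -- not the real kernel headers. Choose an
     * underlying type that is 64 bits wide on the target platform. */
    #if defined(__x86_64__)
    typedef unsigned long      u64;  /* long is 64 bits on x86_64 Linux  */
    #else
    typedef unsigned long long u64;  /* long is only 32 bits on 32-bit x86 */
    #endif
    typedef unsigned int       u32;  /* int is 32 bits on both */

    /* Verify the choices at compile time (C11). */
    _Static_assert(sizeof(u64) == 8, "u64 must be 8 bytes");
    _Static_assert(sizeof(u32) == 4, "u32 must be 4 bytes");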
You'll generally find the typedef declarations for these types in a types.h file corresponding to the architecture you're compiling your source for; in recent kernels the underlying __u8/__u16/__u32/__u64 definitions come from include/uapi/asm-generic/int-ll64.h (pulled in via the architecture's asm/types.h), where u64 is unsigned long long on every architecture.
Source: https://stackoverflow.com/questions/30896489/why-is-u8-u16-u32-u64-used-instead-of-unsigned-int-in-kernel-programming