I am developing a program for an STM32Fx, a Cortex-M3 series processor. In its stdint.h the following are defined:
```c
typedef unsigned int  uint_fast32_t;
typedef uint32_t      uint_least32_t;
typedef unsigned long uint32_t;
```
As I understand it:

- [u]int_fast[n]_t will give you the fastest data type of at least n bits.
- [u]int_least[n]_t will give you the smallest data type of at least n bits.
- [u]int[n]_t will give you the data type of exactly n bits.
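Those guarantees can be checked at compile time. A minimal sketch, assuming a C11 compiler (for _Static_assert); the limit macros all come from <stdint.h>:

```c
#include <stdint.h>
#include <limits.h>

/* uint32_t must be exactly 32 bits wide, with no padding bits. */
_Static_assert(sizeof(uint32_t) * CHAR_BIT == 32, "uint32_t is exactly 32 bits");

/* The least/fast variants need only be able to hold at least 32 bits. */
_Static_assert(UINT_LEAST32_MAX >= UINT32_MAX, "uint_least32_t holds at least 32 bits");
_Static_assert(UINT_FAST32_MAX >= UINT32_MAX, "uint_fast32_t holds at least 32 bits");
```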
Also, as far as I know, sizeof(unsigned int) <= sizeof(unsigned long) and UINT_MAX <= ULONG_MAX always hold (the standard guarantees that unsigned long can represent every value of unsigned int).
Thus I would expect uint_fast32_t to be a data type with a size equal to or greater than the size of uint32_t.
In the case of the Cortex-M3, sizeof(unsigned int) == sizeof(unsigned long) == 4, so the above definitions are 'correct' in terms of size.
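That claim can itself be pinned down at compile time; a short sketch, assuming the ILP32 model used by ARM EABI toolchains for this core:

```c
#include <limits.h>

/* ARM EABI uses the ILP32 model: int and long are both 32 bits wide. */
_Static_assert(CHAR_BIT == 8, "8-bit bytes");
_Static_assert(sizeof(unsigned int) == 4, "unsigned int is 32 bits");
_Static_assert(sizeof(unsigned long) == 4, "unsigned long is 32 bits");
```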
But why are they not defined in a way that is consistent with the names and logical sizes of the underlying data types, i.e.:

```c
typedef unsigned long uint_fast32_t;
typedef unsigned int  uint_least32_t;
typedef uint_fast32_t uint32_t;
```
Can someone please clarify the selection of the underlying types?
Given that 'long' and 'int' are the same size, why not use the same data type for all three definitions?

```c
typedef unsigned int uint_fast32_t;
typedef unsigned int uint_least32_t;
typedef unsigned int uint32_t;
```
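For what it's worth, the choice of underlying type is observable even when the sizes are identical. A minimal sketch, assuming the definitions quoted at the top (uint32_t as unsigned long):

```c
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void) {
    uint32_t v = 42;

    /* Distinct types despite identical sizes: with the quoted header,
       uint32_t is unsigned long, so the next line (if uncommented) is
       rejected for incompatible pointer types:
       unsigned int *p = &v; */

    /* The <inttypes.h> format macros track the typedef choice:
       PRIu32 expands to "lu" when uint32_t is unsigned long,
       and to "u" when it is unsigned int. */
    printf("%" PRIu32 "\n", v);
    return 0;
}
```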