Difference between uint and unsigned int?

Submitted by 陌路散爱 on 2020-05-24 14:30:33

Question


Is there any difference between uint and unsigned int? I've searched the site, but all the questions I found refer to C# or C++. I'd like an answer concerning the C language.

If it is relevant, note that I'm using GCC under Linux.


Answer 1:


uint isn't a standard type - unsigned int is.
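
A quick way to see this for yourself (a minimal sketch; the exact diagnostic wording is what GCC typically prints, not something the standard mandates):

#include <stdio.h>

int main(void)
{
    unsigned int a = 42u;   /* standard C, always compiles                  */
    /* uint b = 42u; */     /* error: unknown type name 'uint', unless some */
                            /* typedef for uint happens to be in scope      */
    printf("%u\n", a);
    return 0;
}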




Answer 2:


Some systems may define uint as a typedef.

typedef unsigned int uint;

On such systems the two are the same. But uint is not a standard type, so not every system will support it, and code that relies on it is not portable.
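
As a sketch of what such systems effectively do for you (on many glibc-based Linux systems, for example, <sys/types.h> happens to provide a uint typedef as a non-standard compatibility extension), once the typedef is in scope the two names are fully interchangeable:

#include <stdio.h>

typedef unsigned int uint;   /* what such a system's headers do for you */

int main(void)
{
    uint a = 10u;
    unsigned int b = a;      /* same type, no conversion involved */
    printf("%u %u\n", a, b);
    return 0;
}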




Answer 3:


I am extending a bit the answers by Erik, Teoman Soygul and taskinoor.

uint is not a standard type.

Hence using your own shorthand like this is discouraged:

typedef unsigned int uint;

If you look for platform specificity instead (e.g. you need to specify the number of bits your integers occupy), then including stdint.h:

#include <stdint.h>

will expose the following standard categories of integers:

  • Integer types having certain exact widths

  • Integer types having at least certain specified widths

  • Fastest integer types having at least certain specified widths

  • Integer types wide enough to hold pointers to objects

  • Integer types having greatest width

For instance,

Exact-width integer types

The typedef name intN_t designates a signed integer type with width N, no padding bits, and a two's-complement representation. Thus, int8_t denotes a signed integer type with a width of exactly 8 bits.

The typedef name uintN_t designates an unsigned integer type with width N. Thus, uint24_t denotes an unsigned integer type with a width of exactly 24 bits.

In practice, stdint.h defines, among others (see the usage sketch after the list):

int8_t
int16_t
int32_t
uint8_t
uint16_t
uint32_t
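
A minimal usage sketch, assuming a typical GCC/Linux toolchain; <inttypes.h> provides the matching printf format macros for these types:

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    uint32_t u = 4000000000u;   /* exactly 32 bits, unsigned */
    int8_t   s = -5;            /* exactly 8 bits, signed    */

    printf("u = %" PRIu32 ", s = %" PRId8 "\n", u, s);
    return 0;
}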



Answer 4:


All of the answers here fail to mention the real reason for uint.
It's obviously a typedef of unsigned int, but that doesn't explain its usefulness.

The real question is,

Why would someone want to typedef a fundamental type to an abbreviated version?

To save on typing?
No, they did it out of necessity.

Consider the C language, a language that does not have templates.
How would you go about stamping out your own vector that can hold any type?

You could do something with void pointers,
but a closer emulation of templates would have you resorting to macros.

So you would define your template vector:

#define define_vector(type) \
  typedef struct vector_##type { \
    /* impl */ \
  } vector_##type;

Declare your types:

define_vector(int)
define_vector(float)
define_vector(unsigned int)

And upon expansion, you realize that the type argument ought to be a single token:

typedef struct vector_int { /* impl */ } vector_int;
typedef struct vector_float { /* impl */ } vector_float;
typedef struct vector_unsigned int { /* impl */ } vector_unsigned int;   /* invalid: not a single token */
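
This is where a single-token name pays off: with a typedef such as uint in place, the multi-word built-in type survives the token pasting. A sketch under that assumption (the data/size members are illustrative placeholders, not a real vector implementation):

#include <stddef.h>

typedef unsigned int uint;        /* one token standing in for "unsigned int" */

#define define_vector(type) \
  typedef struct vector_##type { \
    type  *data; \
    size_t size; \
  } vector_##type;

define_vector(int)                /* ok: vector_int  */
define_vector(uint)               /* ok: vector_uint */
/* define_vector(unsigned int) */ /* would expand to "vector_unsigned int", which is invalid */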



Answer 5:


unsigned int is a built-in (standard) type, so if you want your project to be cross-platform, always use unsigned int, as it is guaranteed to be supported by all compilers (that is what being standard means).



Source: https://stackoverflow.com/questions/5678049/difference-between-uint-and-unsigned-int
