stdint

Difference between C-Types int32_t int_least32_t etc

牧云@^-^@ · Submitted on 2019-12-07 11:33:23
Question: I have read that int32_t is exactly 32 bits long and int_least32_t is only at least 32 bits, yet they have the same typedefs in my stdint.h: typedef int int_least32_t; and typedef int int32_t; So where is the difference? They are exactly the same... Answer 1: int32_t is a signed integer type with a width of exactly 32 bits, no padding bits, and a two's-complement representation for negative values. int_least32_t is the smallest signed integer type with a width of at least 32 bits. The exact-width types are provided only if the implementation directly supports them. The typedefs you are seeing simply mean that on your…

using stdint with swig and numpy.i

删除回忆录丶 · Submitted on 2019-12-05 19:05:57
I'm developing a module for using C inline in Python code, based on SWIG. For that I would like to make NumPy arrays accessible in C. Until now I used C types like unsigned short, but I would like to use types like uint16_t from stdint.h to be safe whatever compiler my module encounters. Unfortunately, the C++ functions do not get wrapped correctly when using stdint.h types. The error given is: _setc() takes exactly 2 arguments (1 given). That means the function is not wrapped to accept NumPy arrays. The error does not occur when I use e.g. unsigned short. Do you have any ideas how I can…
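A common cause of this symptom is that SWIG does not know what uint16_t is, so the numpy.i typemaps never match. SWIG ships a stdint.i helper that maps the stdint.h typedefs onto primitive types. A sketch of an interface file under that assumption (the module name, header, and setc signature are illustrative, taken from the error message, not from a real project):

```
/* example.i -- illustrative sketch */
%module example
%{
#define SWIG_FILE_WITH_INIT
#include <stdint.h>
#include "example.h"
%}
%include "stdint.i"   /* teach SWIG the stdint.h typedefs first */
%include "numpy.i"
%init %{ import_array(); %}

/* reuse the numpy.i typemap for unsigned short on uint16_t arguments */
%apply (unsigned short* INPLACE_ARRAY1, int DIM1) {(uint16_t* arr, int n)};
void setc(uint16_t *arr, int n);
```

The key point is ordering: stdint.i must be included before the %apply directive so that uint16_t resolves to a type numpy.i has typemaps for.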

Where is ptrdiff_t defined in C?

邮差的信 · Submitted on 2019-12-03 08:04:22
Question: Where is ptrdiff_t defined in C? If non-trivial, how can I make this type visible from GCC on Linux? Answer 1: It's defined in stddef.h. That header defines the integral types size_t, ptrdiff_t, and wchar_t, the functional macro offsetof, and the constant macro NULL. Answer 2: It is defined by the POSIX standard: http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/stddef.h.html Where exactly the type is defined may be implementation-specific, but the interface is stddef.h. Answer 3: Since @Good Person said this wasn't specific to Linux: in Microsoft Visual Studio, ptrdiff_t is defined in: C:\Program Files (x86)\Microsoft…

uint32_t vs uint_fast32_t vs uint_least32_t

走远了吗. · Submitted on 2019-12-02 20:34:59
I saw different definitions of an integer type in stdint.h. I'll take the unsigned 32-bit integer as an example. uint32_t clearly means an unsigned integer of 32 bits; that's the one I always use. uint_fast32_t and uint_least32_t: what's the difference from uint32_t, and when should I use them instead of uint32_t? And now I saw uintX_t where X is 24, 40, 48 and 56. It happens in my code that I have to work with 48- and 56-bit integers. As an example, I suppose uint24_t is defined as something like this: struct uint24_t { unsigned int the_integer : 24; }; Am I right? And would you suggest…

C: uint16_t subtraction behavior in gcc

霸气de小男生 · Submitted on 2019-12-01 11:21:56
Question: I'm trying to subtract two unsigned ints and compare the result to a signed int (or a literal). When using unsigned int types the behavior is as expected. When using uint16_t (from stdint.h) types the behavior is not what I would expect. The comparison was done using gcc 4.5. Given the following code: unsigned int a; unsigned int b; a = 5; b = 20; printf("%u\n", (a-b) < 10); The output is 0, which is what I expected. Both a and b are unsigned, and b is larger than a, so the result is a large…

Fastest integer type for common architectures

好久不见. · Submitted on 2019-11-30 06:49:28
The stdint.h header lacks an int_fastest_t and uint_fastest_t to correspond with the {,u}int_fastX_t types. For instances where the width of the integer type does not matter, how does one pick the integer type that allows processing the greatest quantity of bits with the least penalty to performance? For example, if one were searching for the first set bit in a buffer using a naive approach, a loop such as this might be considered: // return the bit offset of the first 1 bit size_t find_first_bit_set(void const *const buf) { uint_fastest_t const *p = buf; // use the fastest type for comparison…

Why Microsoft Visual Studio cannot find <stdint.h>? [duplicate]

牧云@^-^@ · Submitted on 2019-11-29 09:46:29
Possible Duplicate: Visual Studio support for new C / C++ standards? See the text below from the wiki: The C99 standard includes definitions of several new integer types to enhance the portability of programs [2]. The already available basic integer types were deemed insufficient, because their actual sizes are implementation-defined and may vary across different systems. The new types are especially useful in embedded environments, where the hardware usually supports only a few types and that support varies from system to system. All new types are defined in the inttypes.h header (cinttypes header in C++…

Define 16 bit integer in C

余生长醉 · Submitted on 2019-11-29 00:59:48
Question: I need to declare an integer with a size of 16 bits, in C. I know that short and int sizes are machine-dependent. I tried to use "stdint.h", but it seems that it simply does typedef short int16_t So my question is: am I missing something, or does the short type guarantee a 16-bit length? If not, is there an alternative that guarantees it? Answer 1: That means int16_t is defined as short on your machine, not on all machines. Just use int16_t where you absolutely need a 16-bit integer type; it will be…