unsigned

Would it break the language or existing code if we'd add safe signed/unsigned compares to C/C++?

荒凉一梦 submitted on 2019-11-28 21:18:46
After reading this question on signed/unsigned compares (they come up every couple of days, I'd say): Signed / unsigned comparison and -Wall, I wondered why we don't have proper signed/unsigned compares and instead this horrible mess. Take the output from this small program:

#include <stdio.h>

#define C(T1,T2)\
{signed T1 a=-1;\
unsigned T2 b=1;\
printf("(signed %5s)%d < (unsigned %5s)%d = %d\n",#T1,(int)a,#T2,(int)b,(a<b));}\

#define C1(T) printf("%s:%d\n",#T,(int)sizeof(T)); C(T,char);C(T,short);C(T,int);C(T,long);

int main()
{
    C1(char); C1(short); C1(int); C1(long);
}

Compiled with my
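For reference, C++20 did eventually add value-correct heterogeneous comparison helpers in <utility> (std::cmp_less and friends); a minimal sketch of how they avoid the mess above (my example, not from the question):

#include <cstdio>
#include <utility>

int main()
{
    int a = -1;
    unsigned b = 1;
    // Built-in <: a is converted to unsigned, becomes a huge value, so the result is 0.
    std::printf("a < b (built-in)    = %d\n", a < b);
    // std::cmp_less compares by mathematical value, so the result is 1.
    std::printf("std::cmp_less(a, b) = %d\n", std::cmp_less(a, b));
}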

In C, why is “signed int” faster than “unsigned int”?

|▌冷眼眸甩不掉的悲伤 submitted on 2019-11-28 21:02:50
Question: In C, why is signed int faster than unsigned int? True, I know that this has been asked and answered multiple times on this website (links below). However, most people said that there is no difference. I have written code and accidentally found a significant performance difference. Why would the "unsigned" version of my code be slower than the "signed" version (even when testing the same number)? (I have an x86-64 Intel processor.) Similar links: Faster comparing signed than unsigned ints
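One commonly cited mechanism (not necessarily the one behind this particular benchmark): signed overflow is undefined behaviour, so the compiler may assume a signed loop counter never wraps and can, for instance, widen it to a 64-bit index on x86-64, whereas unsigned wraparound is well defined and must be preserved. A minimal sketch of the kind of loop where this can show up (illustrative only; the effect depends on compiler and target):

long sum_signed(const int *a, int n)
{
    long s = 0;
    for (int i = 0; i < n; ++i)      // overflow of i would be UB, so it may be promoted to a 64-bit index
        s += a[i];
    return s;
}

long sum_unsigned(const int *a, unsigned n)
{
    long s = 0;
    for (unsigned i = 0; i < n; ++i) // wraparound at UINT_MAX is defined and must be honoured
        s += a[i];
    return s;
}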

convert unsigned char* to String

你说的曾经没有我的故事 submitted on 2019-11-28 20:22:55
I am a little weak at type casting. I have a string as an xmlChar* (which is unsigned char*), and I want to convert it to a std::string.

xmlChar* name = "Some data";

I tried my best to cast it, but I couldn't find a way to convert it.

std::string sName(reinterpret_cast<char*>(name));

reinterpret_cast<char*>(name) casts from unsigned char* to char* in an unsafe way, but that's the one which should be used here. Then you call the ordinary constructor of std::string. You could also do it C-style (not recommended):

std::string sName((char*) name);

Source: https://stackoverflow.com/questions
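A self-contained sketch of the accepted approach (xmlChar is assumed to be libxml2's usual typedef for unsigned char; the literal stands in for data returned by the library):

#include <cstdio>
#include <string>

typedef unsigned char xmlChar;   // as in libxml2's <libxml/xmlstring.h>

int main()
{
    const xmlChar raw[] = "Some data";                      // stand-in for an xmlChar* from libxml2
    std::string sName(reinterpret_cast<const char*>(raw));  // reinterpret the byte pointer as char*
    std::printf("%s\n", sName.c_str());
}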

Qt cross-compilation classic error: header file inclusion

£可爱£侵袭症+ submitted on 2019-11-28 20:02:36
Analysis: the error is caused by an incorrect header inclusion; I included a C++ header from a C header file, which produced the error.

Solution: to let C++ code include the C file, do the following:

#ifdef __cplusplus      // bsp_GPIO.h/.c are referenced from .cpp files, so this guard is needed
extern "C" {
#endif

//---------- functions exported by this file ----------//
int GPIO_OutEnable(int fd, unsigned int dwEnBits);
int GPIO_OutDisable(int fd, unsigned int dwDisBits);
int GPIO_OpenDrainEnable(int fd, unsigned int dwODBits);
int GPIO_OutSet(int fd, unsigned int dwSetBits);
int GPIO_OutClear(int fd, unsigned int dwClearBits);
int GPIO_PinState(int fd, unsigned int* pPinState);
int GPIO_IrqEnable(int fd, unsigned int dwEnBits);

#ifdef __cplusplus
}
#endif

Note: a C file cannot include a C++ file; in practice, it is only possible in C
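A minimal usage sketch (the main() below is hypothetical; the header and function names are the ones from above): because the header carries the #ifdef __cplusplus / extern "C" guard, a .cpp file can include it directly and link against the C-compiled implementation without name-mangling problems:

// main.cpp (hypothetical usage)
#include "bsp_GPIO.h"

int main()
{
    int fd = 0;                 // placeholder; a real descriptor would come from open()
    GPIO_OutEnable(fd, 0x1u);   // resolves to the unmangled C symbol
    return 0;
}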

Can't get rid of “this decimal constant is unsigned only in ISO C90” warning

拜拜、爱过 submitted on 2019-11-28 17:51:50
I'm using the FNV hash as a hashing algorithm in my Hash Table implementation, but I'm getting the warning in the question title on this line:

unsigned hash = 2166136261;

I don't understand why this is happening, because when I do this:

printf("%u\n", UINT_MAX);
printf("2166136261\n");

I get this:

4294967295
2166136261

Which seems to be under the limits of my machine... Why do I get the warning and what are my options to get rid of it?

unsigned hash = 2166136261u; // note the u

You need a suffix u to signify this is an unsigned number. Without the u suffix it will be a signed number. Since
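For context, here is a minimal sketch of a 32-bit FNV-1a hash with the suffixed constant in place (the offset basis and prime are the standard FNV parameters; the function name is illustrative):

#include <stddef.h>

/* 32-bit FNV-1a; the 'u' suffixes keep both constants unsigned, so no warning. */
unsigned fnv1a(const unsigned char *data, size_t len)
{
    unsigned hash = 2166136261u;     /* FNV offset basis */
    for (size_t i = 0; i < len; ++i) {
        hash ^= data[i];
        hash *= 16777619u;           /* FNV prime */
    }
    return hash;
}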

Calculating bits required to store decimal number

放肆的年华 submitted on 2019-11-28 17:47:12
Question: This is a homework question that I am stuck with. Consider unsigned integer representation. How many bits will be required to store a decimal number containing: i) 3 digits ii) 4 digits iii) 6 digits iv) n digits. I know that the range of an unsigned integer will be 0 to 2^n, but I don't get how the number of bits required to represent a number depends on it. Please help me out. Thanks in advance.

Answer 1: Well, you just have to calculate the range for each case and find the lowest power of 2
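A quick way to check the arithmetic (my addition, not part of the quoted answer): an n-digit decimal number fits in ceil(log2(10^n)) = ceil(n * log2 10) bits, which gives 10, 14, and 20 bits for 3, 4, and 6 digits respectively:

#include <cmath>
#include <cstdio>

int main()
{
    for (int digits : {3, 4, 6}) {
        // The largest n-digit value is 10^n - 1, so ceil(log2(10^n)) bits are needed.
        int bits = static_cast<int>(std::ceil(digits * std::log2(10.0)));
        std::printf("%d digits -> %d bits\n", digits, bits);
    }
}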

C# calling a C DLL (type mapping)

不羁的心 submitted on 2019-11-28 17:43:32
C# calling a C DLL
// The DLL function prototypes on the C++ side look like:
// extern "C" __declspec(dllexport) bool MethodName1(const char* paramName1, unsigned char* paramName2)
// extern "C" __declspec(dllexport) bool MethodName2(const unsigned char* paramName1, char* paramName2)
// A collected list of data type mappings for calling a C++ DLL from C#; there may be duplicates or several options, so test them yourself.
// c++: HANDLE (void*)        ---- c#: System.IntPtr
// c++: Byte (unsigned char)  ---- c#: System.Byte
// c++: SHORT (short)         ---- c#: System.Int16
// c++: WORD (unsigned short) ---- c#: System.UInt16
// c++: INT (int)             ---- c#: System.Int16
// c++: INT (int)             ---- c#: System.Int32
// c++: UINT (unsigned int)   ---- c#: System.UInt16
// c++: UINT (unsigned int)   ---- c#: System.UInt32
// c++: LONG

MySQL (1) Data Definition Language (DDL): CREATE/DROP/ALTER

北城余情 submitted on 2019-11-28 17:42:09
MySQL (1) Data Definition Language (DDL): CREATE/DROP/ALTER
18 June 2015

MySQL Data Definition Language (structure definition): Data Definition Language

Part 1: CREATE

1. Creating a database/schema: CREATE DATABASE/SCHEMA

Syntax:
create {database | schema} [IF NOT EXISTS] db_name [create_specification [, create_specification] ...];

create_specification options:
[DEFAULT] CHARACTER SET charset_name | [DEFAULT] COLLATE collation_name;

# Examples:
1. create database blog;
2. create database if not exists blog;
3. create database blog default character set utf8;
4. create database if not exists blog default character set utf8;
5. create schema blog default character

Does cast between signed and unsigned int maintain exact bit pattern of variable in memory?

雨燕双飞 submitted on 2019-11-28 17:15:21
I want to pass a 32-bit signed integer x through a socket. In order that the receiver knows which byte order to expect, I am calling htonl(x) before sending. htonl expects a uint32_t though, and I want to be sure of what happens when I cast my int32_t to a uint32_t.

int32_t x = something;
uint32_t u = (uint32_t) x;

Is it always the case that the bytes in x and u will each be exactly the same? What about casting back:

uint32_t u = something;
int32_t x = (int32_t) u;

I realise that negative values cast to large unsigned values, but that doesn't matter since I'm just casting back on the other end.
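A small sketch of the round trip under discussion (htonl/ntohl come from POSIX <arpa/inet.h>; the signed-to-unsigned conversion is defined modulo 2^32 and preserves the two's-complement bit pattern, while the unsigned-to-signed direction was implementation-defined for out-of-range values before C++20 but is bit-preserving on mainstream compilers):

#include <arpa/inet.h>   // htonl, ntohl (POSIX)
#include <cstdint>
#include <cstdio>

int main()
{
    int32_t x = -42;
    uint32_t wire = htonl(static_cast<uint32_t>(x));    // modulo-2^32 conversion, bit pattern unchanged
    int32_t back = static_cast<int32_t>(ntohl(wire));   // converts back; -42 again on the other end
    std::printf("x = %d, back = %d\n", x, back);
}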