unsigned

Is casting uint8_t to signed int at least sometimes incorrect?

落爺英雄遲暮 submitted on 2021-02-10 14:19:30
Question: While reading the answer to the question Getting a buffer into a stringstream in hex representation, I did not understand why it is necessary to cast uint8_t to unsigned (or, as written in the comments, even to unsigned char before that), while casting just to int is said to be incorrect. As I understand it, no conversion at all would lead to the uint8_t being interpreted as its underlying type, which can (must?) be one of the three char variations, and thus printed as a character. But what's wrong with…
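One way to see the distinction the comments are pointing at (a minimal sketch; the values and variable names are illustrative, not taken from the linked answer): for a genuine uint8_t, a cast to int already prints the numeric value, because every uint8_t value fits in int; the cast through unsigned char matters when the byte starts out as a plain char, which may be signed and would sign-extend when converted straight to int.

    #include <iostream>
    #include <sstream>
    #include <cstdint>

    int main() {
        std::uint8_t u = 0xAB;
        char c = static_cast<char>(0xAB);   // plain char may be signed

        std::ostringstream ss;
        ss << std::hex;
        ss << u;                                        // inserts the raw byte as a character, not "ab"
        ss << ' ' << static_cast<int>(u);               // "ab": uint8_t always fits in int
        ss << ' ' << static_cast<int>(c);               // typically "ffffffab" if char is signed
        ss << ' ' << static_cast<unsigned>(
                       static_cast<unsigned char>(c));  // "ab" regardless of char's signedness
        std::cout << ss.str() << '\n';
    }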

Is it guaranteed that assigning -1 to an unsigned type yields the maximum value?

孤者浪人 submitted on 2021-02-10 07:27:10
Question: I found a few questions on this particular topic, but they were about C++: How portable is casting -1 to an unsigned type? / converting -1 to unsigned types / Is it safe to assign -1 to an unsigned int to get the max value? Reading the answers, it seemed likely, or at least not unlikely, that this is one of those things where C and C++ differ. The question is simple: if I declare a variable as unsigned char/short/int/long var, or use any other unsigned types like fixed-width or minimum-width ones…
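For what it is worth, in both C and C++ the conversion of -1 to any unsigned type is defined as wrapping modulo 2^N, so the result is always the type's maximum value. A minimal check (a sketch, not a proof):

    #include <climits>
    #include <cstdint>
    #include <cstdio>

    int main() {
        unsigned char  uc  = -1;
        unsigned short us  = -1;
        unsigned int   ui  = -1;
        unsigned long  ul  = -1;
        std::uint32_t  u32 = -1;

        // Each comparison prints 1: the conversion is modulo 2^N,
        // so -1 always becomes the destination type's maximum.
        std::printf("%d %d %d %d %d\n",
                    uc == UCHAR_MAX, us == USHRT_MAX, ui == UINT_MAX,
                    ul == ULONG_MAX, u32 == UINT32_MAX);
    }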

Unsigned long and bit shifting

送分小仙女 submitted on 2021-02-07 12:35:35
Question: I have a problem with bit shifting and unsigned longs. Here's my test code: char header[4]; header[0] = 0x80; header[1] = 0x00; header[2] = 0x00; header[3] = 0x00; unsigned long l1 = 0x80000000UL; unsigned long l2 = ((unsigned long) header[0] << 24) + ((unsigned long) header[1] << 16) + ((unsigned long) header[2] << 8) + (unsigned long) header[3]; cout << l1 << endl; cout << l2 << endl; I would expect l2 to also have the value 2147483648, but instead it prints 18446744071562067968. I assume…
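The printed value is 0xFFFFFFFF80000000, the signature of sign extension: on a platform where plain char is signed, header[0] holds -128, and converting -128 to unsigned long wraps to ULONG_MAX - 127, so the high bits are already set before the shift. A minimal sketch of one common fix, casting each byte through unsigned char first (declaring the buffer as unsigned char works equally well):

    #include <iostream>

    int main() {
        char header[4];
        header[0] = 0x80;   // with a signed char this stores -128 (implementation-defined)
        header[1] = 0x00;
        header[2] = 0x00;
        header[3] = 0x00;

        // Casting each byte to unsigned char first keeps the value in 0..255,
        // so no sign extension can leak into the high bits of the result.
        unsigned long l2 =
            ((unsigned long)(unsigned char)header[0] << 24) |
            ((unsigned long)(unsigned char)header[1] << 16) |
            ((unsigned long)(unsigned char)header[2] <<  8) |
             (unsigned long)(unsigned char)header[3];

        std::cout << l2 << '\n';   // 2147483648
    }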

*Might* an unsigned char be equal to EOF? [duplicate]

我们两清 submitted on 2021-02-07 05:44:13
Question: (This question already has answers here: Can sizeof(int) ever be 1 on a hosted implementation? (8 answers). Closed 5 years ago.) When using fgetc to read the next character of a stream, you usually check that end-of-file was not reached with if ((c = fgetc(stream)) != EOF), where c is of type int. Then either end-of-file has been reached and the condition fails, or c is an unsigned char converted to int, which is expected to differ from EOF, for EOF is guaranteed to be…
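The usual pattern, sketched below with a hypothetical input file: on the overwhelming majority of implementations sizeof(int) > 1, so a valid byte can never compare equal to EOF; on an exotic hosted implementation where sizeof(int) == 1 that guarantee evaporates, and feof/ferror are what tell the cases apart.

    #include <cstdio>

    int main() {
        std::FILE *stream = std::fopen("input.bin", "rb");   // hypothetical input file
        if (!stream) return 1;

        int c;
        while ((c = std::fgetc(stream)) != EOF) {
            // process the byte in c (always in 0..UCHAR_MAX here)
        }

        // On an implementation where sizeof(int) == 1, a legitimate data byte
        // could compare equal to EOF; feof/ferror disambiguate after the loop.
        if (!std::feof(stream) && !std::ferror(stream)) {
            // c really was a data byte that happened to equal EOF
        }

        std::fclose(stream);
    }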

Why does this code contain a colon in a struct?

孤街浪徒 submitted on 2021-02-05 09:31:11
Question: Please explain how this code executes. Why is ":" used in structures? What is the use of the colon in structures, and what should the output of the sizeof operator be? #include <stdio.h> int main() { struct bitfield { signed int a : 3; unsigned int b : 13; unsigned int c : 1; }; struct bitfield bit1 = { 2, 14, 1 }; printf("%ld", sizeof(bit1)); return 0; } Answer 1: The : operator is being used for bit fields, that is, integral values that use the specified number of bits of a larger space. These may get…
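A sketch of what the colon declares and what sizeof typically reports. Bit-field packing is implementation-defined, so the 4-byte result below is the common outcome on mainstream compilers, not a guarantee (and %zu, rather than %ld, is the portable format for sizeof):

    #include <cstdio>

    struct bitfield {
        signed int   a : 3;   // 3 bits, range -4..3
        unsigned int b : 13;  // 13 bits, range 0..8191
        unsigned int c : 1;   // 1 bit, range 0..1
    };

    int main() {
        struct bitfield bit1 = { 2, 14, 1 };

        // 3 + 13 + 1 = 17 bits fit in one int-sized allocation unit, so on
        // typical implementations sizeof(bit1) equals sizeof(int), i.e. 4,
        // though the exact layout and padding are implementation-defined.
        std::printf("%zu %d %d %d\n", sizeof(bit1), bit1.a, bit1.b, bit1.c);
    }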

Implicit conversion warning with own getch function

时间秒杀一切 submitted on 2021-01-29 07:19:14
Question: I found a C implementation of conio.h's getch(). Sadly it compiles with a conversion warning, and I don't know what I should do to solve it correctly. I found this link, but I don't know how to implement it. #include <termios.h> #include <unistd.h> #include <stdio.h> #include "getch.h" /* reads from keypress, doesn't echo */ int getch(void) { struct termios oldattr, newattr; int ch; tcgetattr( STDIN_FILENO, &oldattr ); newattr = oldattr; newattr.c_lflag &= ~( ICANON | ECHO ); tcsetattr( STDIN…
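One plausible cause and fix, assuming the warning points at the c_lflag line: ICANON | ECHO is a plain int, so ~(ICANON | ECHO) is an int as well, and folding it back into the unsigned c_lflag field is an implicit int-to-tcflag_t conversion that -Wconversion flags. Taking the complement in tcflag_t keeps the whole expression in the field's own type. A sketch of the complete function under that assumption:

    #include <termios.h>
    #include <unistd.h>
    #include <stdio.h>

    /* Reads one keypress without echoing it (conio.h-style getch). */
    int getch(void) {
        struct termios oldattr, newattr;
        int ch;

        tcgetattr(STDIN_FILENO, &oldattr);
        newattr = oldattr;
        /* Complement in tcflag_t, not int, so nothing is narrowed on assignment. */
        newattr.c_lflag &= ~(tcflag_t)(ICANON | ECHO);
        tcsetattr(STDIN_FILENO, TCSANOW, &newattr);

        ch = getchar();

        tcsetattr(STDIN_FILENO, TCSANOW, &oldattr);
        return ch;
    }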

Using unsigned char instead of char because of its range

怎甘沉沦 submitted on 2021-01-28 12:14:04
Question: I've been working on a small pure-C client application (my first :/) which uses a TCP socket to communicate with the server. The server sends me a packet (a C structure) in which the first byte contains the size of the packet. The problem is that the server uses unsigned char to represent the size of the packet, because char is signed (from -128 to +127) and +127 is not enough to represent a size that can be up to 255 in some packets. => I need an unsigned char buffer. In Linux, the second…
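A hypothetical sketch of the receiving side (the helper name and parameters are illustrative, not from the question): the socket API accepts any object pointer via void *, so reading into an unsigned char buffer is fine, and the length byte then always comes out as 0..255 with no chance of sign extension.

    #include <sys/types.h>
    #include <sys/socket.h>

    /* Hypothetical helper: the first byte on the wire is the packet size (0..255).
     * Reading into an unsigned char buffer means the length byte can never
     * become negative through sign extension. */
    ssize_t read_packet(int sock, unsigned char *buf, size_t bufsize) {
        ssize_t n = recv(sock, buf, bufsize, 0);
        if (n <= 0)
            return n;                   /* error, or the peer closed the connection */

        size_t packet_len = buf[0];     /* 0..255, no sign extension possible */
        /* ... keep calling recv() until packet_len bytes have actually arrived ... */
        return (ssize_t)packet_len;
    }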

Distinguishing between signed and unsigned in machine code

守給你的承諾、 submitted on 2021-01-28 10:50:25
Question: I was reading a textbook that says: "It is important to note how machine code distinguishes between signed and unsigned values. Unlike in C, it does not associate a data type with each program value. Instead, it mostly uses the same (assembly) instructions for the two cases, because many arithmetic operations have the same bit-level behavior for unsigned and two's-complement arithmetic." I don't understand what this means; could anyone provide an example? Answer 1: For example, this code: int main() {…
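The answer's code is cut off in this excerpt, so here is a separate illustrative sketch of the textbook's point: addition has the same bit-level behavior for both interpretations, so one add instruction typically serves signed and unsigned operands alike, whereas comparison is where a compiler must choose signed versus unsigned branch instructions (e.g. jl/jg versus jb/ja on x86-64).

    #include <cstdio>

    int main() {
        // Addition: same bit-level operation, so compilers typically emit the
        // same "add" instruction whether the operands are signed or unsigned.
        int      sa = -1, sb = 2;
        unsigned ua = 0xFFFFFFFFu, ub = 2u;
        std::printf("%d %u\n", sa + sb, ua + ub);   // 1 and 1: identical bit patterns

        // Comparison is where the distinction shows up: the same bits compare
        // differently, so the generated code uses a signed branch for the first
        // test and an unsigned branch for the second.
        std::printf("%d %d\n", sa < sb, ua < ub);   // 1 and 0
    }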
