bit-shift

Why use the Bitwise-Shift operator for values in a C enum definition?

两盒软妹~` submitted on 2019-11-26 15:32:15
Question: Apple sometimes uses the bitwise shift operator in its enum definitions. For example, in the CGDirectDisplay.h header, which is part of Core Graphics: enum { kCGDisplayBeginConfigurationFlag = (1 << 0), kCGDisplayMovedFlag = (1 << 1), kCGDisplaySetMainFlag = (1 << 2), kCGDisplaySetModeFlag = (1 << 3), kCGDisplayAddFlag = (1 << 4), kCGDisplayRemoveFlag = (1 << 5), kCGDisplayEnabledFlag = (1 << 8), kCGDisplayDisabledFlag = (1 << 9), kCGDisplayMirrorFlag = (1 << 10), kCGDisplayUnMirrorFlag = (1 <

Weird behavior of right shift operator (1 >> 32)

…衆ロ難τιáo~ submitted on 2019-11-26 15:28:54
Question: I recently faced some strange behavior of the right-shift operator. The following program: #include <cstdio> #include <cstdlib> #include <iostream> #include <stdint.h> int foo(int a, int b) { return a >> b; } int bar(uint64_t a, int b) { return a >> b; } int main(int argc, char** argv) { std::cout << "foo(1, 32): " << foo(1, 32) << std::endl; std::cout << "bar(1, 32): " << bar(1, 32) << std::endl; std::cout << "1 >> 32: " << (1 >> 32) << std::endl; // warning here std::cout << "(int)1 >> (int)32: " << ((int)1 >> (int)32) << std::endl; // warning here return EXIT_SUCCESS; } outputs: foo(1, 32): 1

Have you ever had to use bit shifting in real projects?

北战南征 submitted on 2019-11-26 15:08:19
Question: Have you ever had to use bit shifting in real programming projects? Most (if not all) high-level languages have shift operators, but when would you actually need to use them? Answer 1: I still write code for systems that do not have floating-point support in hardware. On these systems you need bit shifting for nearly all your arithmetic. You also need shifts to generate hashes. Polynomial arithmetic (CRC and Reed-Solomon codes are the mainstream applications) uses shifts as well. However,

Why does combining two shifts of a uint8_t produce a different result?

拜拜、爱过 submitted on 2019-11-26 14:51:59
Question: Could someone explain to me why x = x << 1; x = x >> 1; and x = (x << 1) >> 1; produce different results in C? Here x is of type uint8_t (an unsigned 8-bit integer). For example, when I pass in 128 (binary 10000000), the first case returns 0 (as expected, the most significant bit falls out), but the second case returns the original 128. Why is that? I'd expect these expressions to be equivalent. Answer 1: This is due to integer promotion: both operands of the bitwise shifts will be promoted to int

What does the C standard say about bitshifting more bits than the width of type?

孤者浪人 submitted on 2019-11-26 14:43:32
Question: Consider the following code: int i = 3 << 65; I would expect the result to be i == 0, but the actual result is i == 6. With some testing I found that with the following code: int i, s; int a = i << s; int b = i << (s & 31); the values of a and b are always the same. Does the C standard say anything about shifting by more than 32 bits (the width of int), or is this unspecified behavior? Answer: From my WG14/N1124 draft (not the standard, but good enough for me), there's the following passage in 6.5.7 Bitwise shift operators: If the value of the right operand is negative or is greater than or

Declaring 64-bit variables in C

早过忘川 submitted on 2019-11-26 14:26:22
Question: I have a question. uint64_t var = 1; // this is 000000...00001, right? And in my code this works: var ^ (1 << 43). But how does it know that the 1 should be 64 bits wide? Shouldn't I write this instead? var ^ ((uint64_t)1 << 43) Answer 1: As you supposed, 1 is a plain signed int (which on your platform is probably 32 bits wide, with two's complement arithmetic), and so is 43, so 1 << 43 results in an overflow: in fact, if both arguments are of type int, the operator rules dictate that the result will be

Bitwise shift operators. Signed and unsigned

╄→尐↘猪︶ㄣ submitted on 2019-11-26 13:58:51
Question: I'm practising for the SCJP exam using cram notes from the Internet. According to my notes, the >> operator is supposed to be a signed right shift, with the sign bit brought in from the left, while the left shift operator << is supposed to preserve the sign bit. Playing around, however, I'm able to shift out the sign bit with the << operator (e.g. Integer.MAX_VALUE << 1 evaluates to -2), while I'm never able to shift the sign with the >> operator. I must be misunderstanding something here, but what

Bitwise operators and “endianness”

白昼怎懂夜的黑 submitted on 2019-11-26 13:00:40
Question: Does endianness matter at all for bitwise operations, either logical or shifting? I'm working on homework involving bitwise operators, and I cannot make heads or tails of it; I think I'm getting quite hung up on the endianness. That is, I'm using a little-endian machine (like most are), but does this need to be considered, or is it a wasted fact? In case it matters, I'm using C. Answer: Endianness only matters for the layout of data in memory. As soon as data is loaded by the processor to be operated on, endianness is completely irrelevant. Shifts, bitwise operations, and so on perform as you

Behaviour of unsigned right shift applied to byte variable

99封情书 submitted on 2019-11-26 12:44:37
Question: Consider the following snippet of Java code: byte b = (byte) 0xf1; byte c = (byte)(b >> 4); byte d = (byte)(b >>> 4); Output: c = 0xff, d = 0xff. Expected output: d = 0x0f. How? Since b in binary is 1111 0001, after an unsigned right shift it should be 0000 1111, hence 0x0f; but why is it 0xff? Answer 1: The problem is that all arguments are first promoted to int before the shift operation takes place: byte b = (byte) 0xf1; b is signed, so its value is -15. byte c = (byte)(b >> 4); b is first sign-extended to the integer -15 = 0xfffffff1

What is the JavaScript >>> operator and how do you use it?

纵然是瞬间 submitted on 2019-11-26 11:54:06
Question: I was looking at code from Mozilla that adds a filter method to Array, and it had a line of code that confused me: var len = this.length >>> 0; I have never seen >>> used in JavaScript before. What is it and what does it do? Answer 1 (bobince): It doesn't just convert non-Numbers to Number; it converts them to Numbers that can be expressed as 32-bit unsigned ints. Although JavaScript's Numbers are double-precision floats(*), the bitwise operators (<<, >>, &, |, and ~) are defined in terms of operations on 32-bit integers. Doing a bitwise operation converts the number to a 32-bit signed int, losing any