What are bitwise shift (bit-shift) operators and how do they work?


I've been attempting to learn C in my spare time, and other languages (C#, Java, etc.) have the same concept (and often the same operators) ...

What I'm wondering

11 Answers
  • 2020-11-21 04:48

    Bit Masking & Shifting

    Bit shifting is often used in low-level graphics programming. For example, a pixel's color value might be encoded in a 32-bit word.

     Pixel-Color Value in Hex:    B9B9B900
     Pixel-Color Value in Binary: 10111001  10111001  10111001  00000000
    

    For better understanding, here is the same binary value labeled to show which sections represent which color component.

                                     Red     Green     Blue       Alpha
     Pixel-Color Value in Binary: 10111001  10111001  10111001  00000000
    

    Let's say for example we want to get the green value of this pixel's color. We can easily get that value by masking and shifting.

    Our mask:

                      Red      Green      Blue      Alpha
     color :        10111001  10111001  10111001  00000000
     green_mask  :  00000000  11111111  00000000  00000000
    
     masked_color = color & green_mask
    
     masked_color:  00000000  10111001  00000000  00000000
    

    The bitwise & operator ensures that only the bits where the mask is 1 are kept. The last thing we have to do is get the correct integer value by shifting all those bits to the right by 16 places (a logical right shift).

     green_value = masked_color >>> 16
    

    Et voilà, we have the integer representing the amount of green in the pixel's color:

     Pixels-Green Value in Hex:     000000B9
     Pixels-Green Value in Binary:  00000000 00000000 00000000 10111001
     Pixels-Green Value in Decimal: 185
    

    This is often used for encoding or decoding image formats like jpg, png, etc.
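
    As a concrete sketch of the mask-and-shift above (written in Java, since >>> is Java's logical right shift; the constant names are just for illustration):

     int color = 0xB9B9B900;                        // red, green, blue, alpha bytes from the example
     int GREEN_MASK = 0x00FF0000;                   // keep only the green byte

     int greenValue = (color & GREEN_MASK) >>> 16;  // 0xB9
     System.out.println(greenValue);                // prints 185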

  • 2020-11-21 04:48

    Some useful bit operations/manipulations in Python.

    I implemented Ravi Prakash's answer in Python.

    # Basic bit operations
    # Integer to binary
    print(bin(10))
    
    # Binary to integer
    print(int('1010', 2))
    
    # Multiplying x by 2 .... x * 2 == x << 1
    print(200 << 1)
    
    # Dividing x by 2 (floor division) .... x // 2 == x >> 1
    print(200 >> 1)
    
    # Modulo x with 2 .... x % 2 == x & 1
    if (20 & 1) == 0:
        print("20 is an even number")
    
    # Check if n is a power of 2 (for n > 0): not (n & (n - 1))
    print(not (33 & (33 - 1)))  # False, since 33 is not a power of two
    
    # Getting xth bit of n: (n >> x) & 1
    print((10 >> 2) & 1) # bin(10) == 0b1010; bit 2 (counting from the LSB) is 0
    
    # Toggle nth bit of x : x^(1 << n)
    # bin(10) == 0b1010; toggling bit 2 gives 0b1110 == 14
    print(10^(1 << 2))
    
  • 2020-11-21 04:51

    The bit shifting operators do exactly what their name implies. They shift bits. Here's a brief (or not-so-brief) introduction to the different shift operators.

    The Operators

    • >> is the arithmetic (or signed) right shift operator.
    • >>> is the logical (or unsigned) right shift operator.
    • << is the left shift operator, and meets the needs of both logical and arithmetic shifts.

    All of these operators can be applied to integer values (int, long, possibly short and byte or char). In some languages, applying the shift operators to any datatype smaller than int automatically resizes the operand to be an int.

    Note that <<< is not an operator, because it would be redundant.

    Also note that C and C++ do not distinguish between the right shift operators. They provide only the >> operator, and the right-shifting behavior is implementation defined for signed types. The rest of the answer uses the C# / Java operators.

    (In all mainstream C and C++ implementations, including GCC and Clang/LLVM, >> on signed types is arithmetic. Some code assumes this, but it isn't something the standard guarantees. It's not undefined, though; the standard requires implementations to define it one way or another. However, left-shifting a negative signed number is undefined behaviour (signed integer overflow). So unless you need an arithmetic right shift, it's usually a good idea to do your bit-shifting with unsigned types.)


    Left shift (<<)

    Integers are stored, in memory, as a series of bits. For example, the number 6 stored as a 32-bit int would be:

    00000000 00000000 00000000 00000110
    

    Shifting this bit pattern to the left one position (6 << 1) would result in the number 12:

    00000000 00000000 00000000 00001100
    

    As you can see, the digits have shifted to the left by one position, and the last digit on the right is filled with a zero. You might also note that shifting left is equivalent to multiplication by powers of 2. So 6 << 1 is equivalent to 6 * 2, and 6 << 3 is equivalent to 6 * 8. A good optimizing compiler will replace multiplications with shifts when possible.
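
    For example, a quick check in Java (the same holds in C and C#):

     System.out.println(6 << 1);   // 12  (6 * 2)
     System.out.println(6 << 3);   // 48  (6 * 8)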

    Non-circular shifting

    Please note that these are not circular shifts. Shifting this value to the left by one position (3,758,096,384 << 1):

    11100000 00000000 00000000 00000000
    

    results in 3,221,225,472:

    11000000 00000000 00000000 00000000
    

    The digit that gets shifted "off the end" is lost. It does not wrap around.


    Logical right shift (>>>)

    A logical right shift is the converse of the left shift. Rather than moving bits to the left, they simply move to the right. For example, shifting the number 12:

    00000000 00000000 00000000 00001100
    

    to the right by one position (12 >>> 1) will get back our original 6:

    00000000 00000000 00000000 00000110
    

    So we see that shifting to the right is equivalent to division by powers of 2.
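
    Again, a quick Java illustration:

     System.out.println(12 >>> 1);   // 6  (12 / 2)
     System.out.println(12 >>> 2);   // 3  (12 / 4)
     System.out.println(13 >>> 1);   // 6  (the shifted-out 1 is simply dropped)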

    Lost bits are gone

    However, a shift cannot reclaim "lost" bits. For example, if we shift this pattern:

    00111000 00000000 00000000 00000110
    

    to the left 4 positions (939,524,102 << 4), we get 2,147,483,744:

    10000000 00000000 00000000 01100000
    

    and then shifting back ((939,524,102 << 4) >>> 4) we get 134,217,734:

    00001000 00000000 00000000 00000110
    

    We cannot get back our original value once we have lost bits.
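
    The same round trip in Java:

     int original = 0x38000006;          // 939,524,102
     int shifted  = original << 4;       // 0x80000060
     System.out.println(shifted >>> 4);  // 134217734 (0x08000006), not 939524102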


    Arithmetic right shift (>>)

    The arithmetic right shift is exactly like the logical right shift, except instead of padding with zero, it pads with the most significant bit. This is because the most significant bit is the sign bit, or the bit that distinguishes positive and negative numbers. By padding with the most significant bit, the arithmetic right shift is sign-preserving.

    For example, if we interpret this bit pattern as a negative number:

    10000000 00000000 00000000 01100000
    

    we have the number -2,147,483,552. Shifting this to the right 4 positions with the arithmetic shift (-2,147,483,552 >> 4) would give us:

    11111000 00000000 00000000 00000110
    

    or the number -134,217,722.

    So we see that we have preserved the sign of our negative numbers by using the arithmetic right shift, rather than the logical right shift. And once again, we see that we are performing division by powers of 2.
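
    Putting the two right shifts side by side in Java:

     int x = 0x80000060;            // -2,147,483,552 when interpreted as a signed int
     System.out.println(x >> 4);    // -134217722  (copies of the sign bit shifted in from the left)
     System.out.println(x >>> 4);   //  134217734  (zeros shifted in from the left)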

  • 2020-11-21 04:51

    Note that in the Java implementation, the shift count is implicitly reduced modulo the bit width of the (promoted) left operand: modulo 32 for int, modulo 64 for long.

    For example:

    (long) 4 >> 65
    

    equals 2. You might expect shifting the bits to the right 65 times would zero everything out, but it's actually the equivalent of:

    (long) 4 >> (65 % 64)
    

    This is true for <<, >>, and >>>. I have not tried it out in other languages. (In C and C++, shifting by the type's width or more is undefined behaviour, so don't rely on this there.)
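
    A quick Java check of this behaviour:

     System.out.println(4L >> 65);   // 2, because 65 % 64 == 1 for a long
     System.out.println(4  >> 33);   // 2, because 33 % 32 == 1 for an int
     System.out.println(4L >> 64);   // 4, the shift count reduces to 0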

  • 2020-11-21 04:55

    One gotcha is that the following is implementation-defined (according to the ANSI C standard):

    char x = -1;
    x >> 1;
    

    The result can be 127 (01111111) or -1 (11111111), depending on whether the implementation performs a logical or an arithmetic right shift on signed values.

    In practice, it's usually the latter.

  • 2020-11-21 04:57

    Let's say we have a single byte:

    00110110
    

    Applying a single left bitshift gets us:

    01101100
    

    The leftmost zero was shifted out of the byte, and a new zero was appended to the right end of the byte.

    The bits don't roll over; they are discarded. Bits shifted off the end are lost, so left-shifting a value and then right-shifting it won't necessarily give you back what you started with.

    Shifting left by N is equivalent to multiplying by 2^N.

    Shifting right by N (on a two's-complement machine, with an arithmetic shift) is the equivalent of dividing by 2^N, rounding toward negative infinity.

    Bitshifting can be used for insanely fast multiplication and division, provided you are working with a power of 2. Almost all low-level graphics routines use bitshifting.

    For example, way back in the olden days, we used mode 13h (320x200 256 colors) for games. In Mode 13h, the video memory was laid out sequentially per pixel. That meant to calculate the location for a pixel, you would use the following math:

    memoryOffset = (row * 320) + column
    

    Now, back in that day and age, speed was critical, so we would use bitshifts to do this operation.

    However, 320 is not a power of two, so to get around this we have to find powers of two that add up to 320:

    (row * 320) = (row * 256) + (row * 64)
    

    Now we can convert that into left shifts:

    (row * 320) = (row << 8) + (row << 6)
    

    For a final result of:

    memoryOffset = ((row << 8) + (row << 6)) + column
    

    Now we get the same offset as before, except instead of an expensive multiplication operation, we use the two bitshifts...in x86 it would be something like this (note, it's been forever since I've done assembly (editor's note: corrected a couple mistakes and added a 32-bit example)):

    mov ax, 320; 2 cycles
    mul word [row]; 22 CPU Cycles
    mov di,ax; 2 cycles
    add di, [column]; 2 cycles
    ; di = [row]*320 + [column]
    
    ; 16-bit addressing mode limitations:
    ; [di] is a valid addressing mode, but [ax] isn't, otherwise we could skip the last mov
    

    Total: 28 cycles on whatever ancient CPU had these timings.

    vs.

    mov ax, [row]; 2 cycles
    mov di, ax; 2
    shl ax, 6;  2
    shl di, 8;  2
    add di, ax; 2    (320 = 256+64)
    add di, [column]; 2
    ; di = [row]*(256+64) + [column]
    

    12 cycles on the same ancient CPU.

    Yes, we would work this hard to shave off 16 CPU cycles.

    In 32 or 64-bit mode, both versions get a lot shorter and faster. Modern out-of-order execution CPUs like Intel Skylake (see http://agner.org/optimize/) have very fast hardware multiply (low latency and high throughput), so the gain is much smaller. AMD Bulldozer-family is a bit slower, especially for 64-bit multiply. On Intel CPUs, and AMD Ryzen, two shifts are slightly lower latency but more instructions than a multiply (which may lead to lower throughput):

    imul edi, [row], 320    ; 3 cycle latency from [row] being ready
    add  edi, [column]      ; 1 cycle latency (from [column] and edi being ready).
    ; edi = [row]*(256+64) + [column],  in 4 cycles from [row] being ready.
    

    vs.

    mov edi, [row]
    shl edi, 6               ; row*64.   1 cycle latency
    lea edi, [edi + edi*4]   ; row*(64 + 64*4).  1 cycle latency
    add edi, [column]        ; 1 cycle latency from edi and [column] both being ready
    ; edi = [row]*(256+64) + [column],  in 3 cycles from [row] being ready.
    

    Compilers will do this for you: See how GCC, Clang, and Microsoft Visual C++ all use shift+lea when optimizing return 320*row + col;.

    The most interesting thing to note here is that x86 has a shift-and-add instruction (LEA) that can do small left shifts and add at the same time, with the same performance as an add instruction. ARM is even more powerful: one operand of any instruction can be left or right shifted for free. So scaling by a compile-time constant that's known to be a power of 2 can be even more efficient than a multiply.


    OK, back in the modern days... something more useful now would be to use bitshifting to store two 8-bit values in a 16-bit integer. For example, in C#:

    // Byte1: 11110000
    // Byte2: 00001111

    Int16 value = (Int16)((Byte1 << 8) | Byte2);

    // value = 11110000 00001111
    

    In C++, compilers should do this for you if you used a struct with two 8-bit members, but in practice they don't always.
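
    For comparison, a similar sketch in Java, packing the two bytes and then unpacking them again with mask-and-shift (the variable names are just illustrative):

     int byte1 = 0b11110000;
     int byte2 = 0b00001111;

     int packed = (byte1 << 8) | byte2;   // 0b1111000000001111

     int high = (packed >>> 8) & 0xFF;    // 0b11110000 again
     int low  =  packed        & 0xFF;    // 0b00001111 again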
