bit-shift

c/c++ left shift unsigned vs signed

∥☆過路亽.° submitted on 2020-01-01 04:49:28

Question: I have this code:

```cpp
#include <iostream>

int main() {
    unsigned long int i = 1U << 31;
    std::cout << i << std::endl;
    unsigned long int uwantsum = 1 << 31;
    std::cout << uwantsum << std::endl;
    return 0;
}
```

It prints:

    2147483648
    18446744071562067968

on Arch Linux 64-bit, gcc, Ivy Bridge architecture. The first result makes sense, but I don't understand where the second number came from. 1 represented as a 4-byte int, signed or unsigned, is 00000000000000000000000000000001. When you shift it 31 times

Why Integer numberOfLeadingZeros and numberOfTrailingZeros use different implementations?

自作多情 submitted on 2019-12-31 07:41:54

Question: In JDK 8:

```java
public static int numberOfLeadingZeros(int i) {
    if (i == 0)
        return 32;
    int n = 1;
    // if the leftmost one bit occurs in the low 16 bits
    if (i >>> 16 == 0) { n += 16; i <<= 16; }
    if (i >>> 24 == 0) { n += 8;  i <<= 8;  }
    if (i >>> 28 == 0) { n += 4;  i <<= 4;  }
    if (i >>> 30 == 0) { n += 2;  i <<= 2;  }
    n -= i >>> 31;
    return n;
}
```

This works by testing whether the leftmost one bit lies in the low x bits and narrowing the range each step. But:

```java
public static int numberOfTrailingZeros(int i) {
    int y;
    if (i == 0)
        return 32;
    int n = 31;
    // if
```

Find most significant set bit in a long

為{幸葍}努か submitted on 2019-12-31 06:41:07

Question: I'm in the unique situation where searching for "most significant bit" yields too many results, and I can't find an answer that fits my needs! The question itself is pretty simple: how do I find the most significant set bit in an unsigned long? In my calculations the rightmost bit is position 0. I know the approach involves masking the lowest bit, checking it, then shifting once while incrementing my count, and repeating with the 2nd lowest, etc. I've done this

Using logical bitshift for RGB values

两盒软妹~` submitted on 2019-12-30 06:54:22

Question: I'm a bit naive when it comes to bitwise logic, and I have what is probably a simple question... basically, if I have this (it is ActionScript but can apply in many languages):

```actionscript
var color:uint = myObject.color;
var red:uint   = color >>> 16;
var green:uint = color >>> 8 & 0xFF;
var blue:uint  = color & 0xFF;
```

I was wondering what exactly the `& 0xFF` is doing to green and blue. I understand what an AND operation does, but why is it needed (or a good idea) here? The source for this code was here: http:/

left shifting of a two's complement vector VHDL

半城伤御伤魂 submitted on 2019-12-25 18:45:42

Question: I'm trying to solve some exercises. I have to shift an 8-bit vector named A into 2A (A+A). My solution for this is:

    (A(7) and '1') & A(6 downto 0) & '0';

After this I take the two's complement of A this way:

```vhdl
entity complementare is
    port(a: in std_logic_vector(7 downto 0);
         b: out std_logic_vector(7 downto 0));
end complementare;

architecture C of complementare is
    signal mask, temp: std_logic_vector(7 downto 0);
    component ripplecarry8bit is
        port(a, b: std_logic_vector(7 downto 0); cin: in std
```

Shift operator in C prepends ones instead of zeros

£可爱£侵袭症+ submitted on 2019-12-25 12:49:22

Question: Here is the code:

```c
#define u8 char
#define u32 unsigned int

typedef struct {            // decoded instruction fields
    u8  cond;               // condition (e.g. 1110 for always true)
    u8  instruction_code;   // a constant, since we only use branch
    u32 offset;             // offset from current PC
} dcdinst;

u8 mem[1024];
mem[0x0] = 0b11101010;

u8* instruction_addr = &mem[pc];
if (instruction_addr == NULL) {
    return false;
}
unsigned int first_part = instruction_addr[0];

// Here is the code that presents a problem:
// I try to get the
```

What is the significance of these #defines?

泪湿孤枕 submitted on 2019-12-25 01:42:54

Question: I was going through the best solution to the problem MAXCOUNT (Codechef) and found a few lines I didn't understand. While reading the code to study the approach, I encountered these lines at the top:

```c
#define isSet(n) flags[n>>5]&(1<<(n&31))
#define unset(n) flags[n>>5] &= ~(1<<(n&31))
#define set(n) flags[n>>5]|=(1<<(n&31))
```

I have no idea what the significance of these lines is. Can anyone please explain these lines and why they are

Implementing a logical shift right

喜欢而已 submitted on 2019-12-25 01:37:14

Question: So I'm working on the nand2tetris project, and I want to implement shift right logical at the software level, since the hardware doesn't support it. I know a logical shift right is a division by two. So my first attempt was to count the number of times I could subtract 2 from the initial value before it became 0 or negative, and similarly if the number was negative. But I've found a scenario where it doesn't work: I want to shift right -27139. Well, the binary value

Bit shifting x * a number

旧街凉风 submitted on 2019-12-24 16:43:08

Question: How do you get a number like -10 in these bit-shifting practice problems? From what I understand, x*32 can be written as x<<5. But how do you get numbers like x*66 or x*(-10)?

Answer 1: General explanation: bit shifting primarily shifts the binary representation of a number; it is not multiplication as such.

    23      = 0001 0111
    23 << 1 = 0001 0111 << 1 = 0010 1110 = 46

However, as the binary representation of a number is changed, the number it represents is also changed. This is just how