bit-manipulation

Perform logical shift using arithmetic shift operator in C [duplicate]

扶醉桌前 submitted on 2019-12-18 09:45:39
Question: This question already has answers here: Implementing Logical Right Shift in C (8 answers). Closed 11 months ago. Right now I am reading the book Computer Systems: A Programmer's Perspective. One problem in the book asks you to perform a logical right shift on a signed integer, and I can't figure out how to start. The following is the actual question from the book: Fill in code for the following C functions. Function srl performs a logical right shift using an arithmetic right shift (given by …

Hamming weight ( number of 1 in a number) mixing C with assembly

微笑、不失礼 submitted on 2019-12-18 09:08:49
Question: I'm trying to count how many 1 bits there are in the numbers of an array. First I have code in C (which works correctly):

int popcount2(int* array, int len) {
    int i;
    unsigned x;
    int result = 0;
    for (i = 0; i < len; i++) {
        x = array[i];
        do {
            result += x & 0x1;
            x >>= 1;
        } while (x);
    }
    return result;
}

Now I need to translate the do-while loop into assembly using 3-6 lines of code. I have written some code, but the result is not correct (I'm new to the assembly world): int popcount3(int* array, int len){ int i; …

Time complexity of an iterative algorithm

限于喜欢 submitted on 2019-12-18 08:56:24
Question: I am trying to find the time complexity of this algorithm. The iterative algorithm produces all the bit-strings within a given Hamming distance from the input bit-string. It generates all increasing sequences 0 <= a[0] < ... < a[dist-1] < strlen(num) and inverts the bits at the corresponding indices. The vector a keeps the indices at which bits have to be inverted, so if a contains the current index i, we print 1 instead of 0 and vice versa; otherwise we print the bit as is (see else …

Bit shifting a byte by more than 8 bit

最后都变了- submitted on 2019-12-18 06:57:19
Question: In the following code, converting from a byte buffer back to an unsigned long int:

unsigned long int anotherLongInt;
anotherLongInt = (byteArray[0] << 24) + (byteArray[1] << 16) + (byteArray[2] << 8) + byteArray[3];

where byteArray is declared as unsigned char byteArray[4]. I thought byteArray[1] would be just one unsigned char (8 bits). When left-shifting by 16, wouldn't that shift all the meaningful bits out and fill the entire byte with 0? Apparently it is not 8 bits. Perhaps it's …

`Math.trunc` vs `|0` vs `<<0` vs `>>0` vs `&-1` vs `^0`

空扰寡人 submitted on 2019-12-18 06:12:33
Question: I have just found that ES6 has a new math method: Math.trunc. I have read its description in the MDN article, and it sounds like |0. Moreover, <<0, >>0, &-1, and ^0 also do similar things (thanks @kojiro & @Bergi). After some tests, it seems that the only differences are: Math.trunc returns -0 for numbers in the interval (-1, -0], while the bitwise operators return 0; and Math.trunc returns NaN for non-numbers, while the bitwise operators return 0. Are there more differences (among all of them)? …

Divide a signed integer by a power of 2

ε祈祈猫儿з submitted on 2019-12-18 05:55:58
Question: I'm working on a way to divide a signed integer by a power of 2 using only binary operators (<< >> + ^ ~ & | !), and the result has to round toward 0. I came across this question on Stack Overflow; however, I cannot understand why it works. Here's the solution:

int divideByPowerOf2(int x, int n) {
    return (x + ((x >> 31) & ((1 << n) + ~0))) >> n;
}

I understand the x >> 31 part (only add the next part if x is negative, because if it's positive x will be automatically …

Find the first zero in a bitarray

♀尐吖头ヾ submitted on 2019-12-18 05:16:06
Question: I have a bitmap uint64_t bitmap[10000] to keep track of the resources allocated in the system. The question is: how do I efficiently find the first unset (zero) bit in this bitmap? I am aware that glibc has ffsll(unsigned long long) for finding the first set bit, which I assume uses hardware instructions. To use this function in my case, I would first need to initialize the array to set every bit to 1; then, as I allocate resources, I would have to linearly search the array …
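Inverting each word on the fly turns "find first zero" into "find first one", so a find-first-set primitive applies directly without re-defining the bitmap's polarity. A sketch using GCC's __builtin_ffsll (the helper name is mine; glibc's ffsll behaves the same):

```c
#include <stddef.h>
#include <stdint.h>

/* Return the index of the first 0 bit in the bitmap, or -1 if all
   bits are set. A word whose complement is nonzero contains at least
   one 0 bit; ffsll on the complement locates it (1-based). */
static long first_zero_bit(const uint64_t *bitmap, size_t nwords) {
    for (size_t i = 0; i < nwords; i++) {
        if (~bitmap[i]) {
            return (long)(i * 64)
                 + __builtin_ffsll((long long)~bitmap[i]) - 1;
        }
    }
    return -1;
}
```

The scan is still linear in the number of words, but each fully-allocated word is rejected with a single compare, and the bit position inside the first non-full word comes from one hardware instruction.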

Set a specific bit in an int

旧城冷巷雨未停 submitted on 2019-12-18 04:39:24
Question: I need to mask certain string values read from a database by setting a specific bit in an int value for each possible database value. For example, if the database returns the string "value1", the bit in position 0 needs to be set to 1, but if the database returns "value2", the bit in position 1 needs to be set to 1 instead. How can I ensure each bit of an int is set to 0 originally, and then turn on just the specified bit? Answer 1: If you have an int value "intValue" and you want …

Bit-flipping operations in T-SQL

一世执手 submitted on 2019-12-18 04:33:32
Question: I have a bitmasked int field in my database. Usually I manage it through C# code, but now I need to flip a bit in the mask using T-SQL. How do I accomplish the following: the bit I want to flip is 1 << 8 (256); the mask value before the flip is 143; the mask value after the flip is 399. This can be done even without the bit operators that are missing in T-SQL, right? Answer 1: Use XOR: SELECT value ^ 256. So in your case, SELECT 143 ^ 256 will indeed return 399. If you want to pass in the exponent as well: SELECT value …