bit-manipulation

Returns x with the n bits that begin at position p set to the rightmost n bits of y, leaving other bits unchanged

僤鯓⒐⒋嵵緔 · submitted on 2019-12-24 02:55:08
Question: My solution:

Get the rightmost n bits of y:

    a = ~(~0 << n) & y

Clear the n bits of x beginning at position p:

    c = (~0 << p | ~(~0 << (p-n+1))) & x

Set the cleared n bits to the n rightmost bits of y:

    c | (a << (p-n+1))

These are rather long statements. Is there a better way? E.g.:

    x = 0 1 1 1 0 1 1 0 1 1 1 0
    p = 4
    y = 0 1 0 1 1 0 1 0 1 0
    n = 3

The 3 rightmost bits of y are 0 1 0; they replace bits 4 down to 2 of x, which are 1 1 1.

Answer 1: I wrote a similar one: unsigned setbits (unsigned x, int p, int n,
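The three steps above can be sketched compactly in Python, whose arbitrary-precision ints make ~0 behave like an all-ones word. One caveat: with the question's own convention (p = 4, n = 3 replaces bits 4 down to 2), the keep-mask needs ~0 << (p + 1) rather than ~0 << p, otherwise bit p itself is never cleared.

```python
def setbits(x, p, n, y):
    """Return x with bits p..p-n+1 replaced by the rightmost n bits of y."""
    field = y & ~(~0 << n)                         # rightmost n bits of y
    keep = (~0 << (p + 1)) | ~(~0 << (p - n + 1))  # 1s everywhere except bits p..p-n+1
    return (x & keep) | (field << (p - n + 1))
```

For example, setbits(0b11111111, 4, 3, 0b010) keeps bits 7-5 and 1-0 and writes 010 into bits 4-2, giving 0b11101011.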

Help translating Reflector deconstruction into compilable code

岁酱吖の · submitted on 2019-12-24 01:24:47
Question: So I am Reflector-ing some .NET Framework 2.0 code and end up with the following deconstruction:

    fixed (void* voidRef3 = ((void*) &_someMember)) { ... }

This won't compile due to 'The right hand side of a fixed statement assignment may not be a cast expression'. I understand that Reflector can only approximate, and generally I can see a clear path, but this is a bit outside my experience.

Question: what is Reflector trying to describe to me?

Update: Am also seeing the following fixed (IntPtr*

Python get least significant digits from a float (without using string operations)

感情迁移 · submitted on 2019-12-24 01:19:44
Question: Assuming I have the float 12345.6789 and I want to get the six least significant digits (i.e. 45.6789) as an int (i.e. 456789) using bit operations in Python (v. 2.6). How do I do that? Thanks.

PS: I do not want to use string operations, even though it would be rather easy: for any float f: int(str(int(f * 1000))[-10:])

EDIT: This original question is pointless, as shown by comments within. Many apologies... instead, methods for getting the least significant digits without using strings are shown
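Since the question's own edit concedes that bit operations cannot address decimal digits (a float's bits encode base-2 sign/exponent/mantissa, not base-10 digits), a string-free sketch has to use decimal arithmetic instead. This assumes you know how many fractional digits the value carries (here 4):

```python
def least_significant_digits(f, frac_digits, n_digits):
    # Scale to an integer, then keep only the low n_digits decimal digits.
    scaled = int(round(f * 10 ** frac_digits))   # 12345.6789 -> 123456789
    return scaled % 10 ** n_digits               # -> 456789

print(least_significant_digits(12345.6789, 4, 6))  # -> 456789
```

The round() guards against the float product landing just below the intended integer.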

Difference between ^ Operator in JS and Python

笑着哭i · submitted on 2019-12-23 23:52:18
Question: I need to port some JS code which involves (Math.random()*2147483648)^(new Date).getTime(). While it looks like for smaller numbers the Python and JS expressions are equivalent, with large numbers like this the values end up entirely different.

Python:

    >>> 2147483647 ^ 1257628307380
    1257075044427

JavaScript:

    > 2147483647 ^ 1257628307380
    -1350373301

How can I get the JavaScript value from Python?

Answer 1: Python has unlimited-precision integers, while Javascript is
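The difference comes from JavaScript's ^ truncating each operand to a signed 32-bit integer (ToInt32) before XOR-ing. A Python model of that behavior (js_xor is a hypothetical helper name) masks to the low 32 bits and re-applies the sign:

```python
def to_int32(n):
    # Emulate ECMAScript ToInt32: keep the low 32 bits, signed.
    n &= 0xFFFFFFFF
    return n - 0x100000000 if n >= 0x80000000 else n

def js_xor(a, b):
    # XOR the way JavaScript's ^ operator does it.
    return to_int32(to_int32(a) ^ to_int32(b))

print(js_xor(2147483647, 1257628307380))  # -> -1350373301
```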

How could I count bit from large number in javascript?

我只是一个虾纸丫 · submitted on 2019-12-23 22:15:52
Question: I have a large number stored in a string:

    let txt = '10000000000000041';

How can I count the 1 bits in its binary representation? For example, the binary format of 9 is 1001, and the number of 1's is 2. What I did so far:

    const countOne = (num) => {
      let c = 0;
      while (num > 0) {
        num &= num - 1;
        c++;
      }
      return c;
    }
    console.log(countOne(+'9'));
    console.log(countOne(+'10000000000000041'));

This code works fine, but not for large values, because a Number in JavaScript cannot hold such a large value, so
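In JavaScript the usual fix is BigInt (BigInt(txt), with 0n/1n literals in the loop). The same Kernighan clear-the-lowest-set-bit loop is shown below in Python, whose ints are already arbitrary-precision, so the string can be parsed and counted directly:

```python
def count_ones(txt):
    n = int(txt)
    c = 0
    while n:
        n &= n - 1   # Kernighan's trick: clear the lowest set bit
        c += 1
    return c

print(count_ones('9'))  # -> 2
```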

Using bitwise operations

主宰稳场 · submitted on 2019-12-23 18:44:59
Question: How often do you use bitwise-operation "hacks" to do some kind of optimization? In what kind of situations are they really useful? Example: instead of using an if:

    if (data[c] >= 128)  // in a loop
        sum += data[c];

you write:

    int t = (data[c] - 128) >> 31;
    sum += ~t & data[c];

Of course, assuming it produces the same intended result for this specific situation. Is it worth it? I find it unreadable. How often do you come across this? Note: I saw this code in the chosen answers to: Why is processing a sorted
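The trick relies on an arithmetic right shift: (data[c] - 128) >> 31 is all ones when data[c] < 128 and zero otherwise, so ~t & data[c] yields either data[c] or 0 with no branch for the CPU to mispredict. A Python model of the same loop (Python's >> on negative ints is arithmetic, so the mask works the same way for byte-sized inputs):

```python
def branchless_sum(data):
    total = 0
    for x in data:             # x assumed in 0..255
        t = (x - 128) >> 31    # -1 (all ones) if x < 128, else 0
        total += ~t & x        # adds x only when x >= 128
    return total
```

branchless_sum([100, 200, 127, 128]) adds only 200 and 128, matching the branching version.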

Reproduce _mm256_sllv_epi16 and _mm256_sllv_epi8 in AVX2

最后都变了- · submitted on 2019-12-23 17:08:22
Question: I was surprised to see that _mm256_sllv_epi16/8(__m256i v1, __m256i v2) and _mm256_srlv_epi16/8(__m256i v1, __m256i v2) were not in the Intel Intrinsics Guide, and I can't find any way to recreate those AVX-512 intrinsics with only AVX2. This function left-shifts each packed 16/8-bit integer by the count in the corresponding element of v2. Example for epi16:

    __m256i v1 = _mm256_set1_epi16(0b1111111111111111);
    __m256i v2 = _mm256_setr_epi16(0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15);
    v1 =
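AVX2's variable shifts only come in 32/64-bit lane widths (_mm256_sllv_epi32/epi64), which is why the usual workaround widens 16-bit lanes to 32 bits, shifts, and repacks. As a scalar reference for what the missing intrinsic must compute (per the AVX-512 definition, a count of 16 or more zeroes the lane), a Python sketch:

```python
def sllv_epi16(lanes, counts):
    # Per-lane variable left shift on 16-bit lanes; counts >= 16 give 0.
    return [(v << s) & 0xFFFF if s < 16 else 0
            for v, s in zip(lanes, counts)]

print(sllv_epi16([0xFFFF] * 4, [0, 1, 15, 16]))
```

A vectorized AVX2 version has to reproduce exactly this table, including the zeroing for oversized counts.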

How do I get a specific bit from an Integer in Swift?

泄露秘密 · submitted on 2019-12-23 16:28:17
Question: Trying to convert my app to Swift from C++.

C++:

    static QWORD load64(const OCTET *x)
    {
        char i;
        QWORD u = 0;
        for (i = 7; i >= 0; --i) {
            u <<= 8;
            u |= x[i];
        }
        return u;
    }

Swift:

    func load64(x: UInt8) -> UInt64 {
        var u: UInt64 = 0
        for var i = 7; i >= 0; --i {
            u <<= 8
            u |= x[i]
        }
        return u
    }

But this line doesn't work in Swift: u |= x[i]. And I can't seem to find any reference to selecting a specific bit from an integer... anyone know how?

Answer 1: It is possible to use the |= operator in Swift, and it works
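Two separate issues hide in the Swift attempt: x should be a byte array ([UInt8], not a single UInt8), and the OR needs a widening conversion (u |= UInt64(x[i])). The loop itself just assembles 8 little-endian bytes into one 64-bit word, after which "a specific bit" is (u >> k) & 1. A Python model of both pieces:

```python
def load64(x):
    # Assemble 8 little-endian bytes into one 64-bit value
    # (most significant byte, x[7], enters first and is shifted up).
    u = 0
    for i in range(7, -1, -1):
        u = (u << 8) | x[i]
    return u

def bit(u, k):
    # Extract bit k of u (k = 0 is the least significant bit).
    return (u >> k) & 1
```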

How to bitwise operate on memory block (C++)

我的梦境 · submitted on 2019-12-23 16:14:23
Question: Is there a better (faster/more efficient) way to perform a bitwise operation on a large memory block than using a for loop? After looking into the options I noticed that std has a member std::bitset, and was also wondering whether it would be better (or even possible) to convert a large region of memory into a bitset without changing its values, then perform the operations, and then switch its type back to normal?

Edit/update: I think a union might apply here, such that the memory block is allocated
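The standard C++ answer is to process the block word-at-a-time (e.g. as uint64_t chunks) rather than byte-at-a-time, since reinterpreting memory as a std::bitset or through a union runs into strict-aliasing/undefined-behavior problems. Taken to its extreme in Python, the word-at-a-time idea can treat the whole block as a single arbitrary-precision integer, a sketch rather than a performance claim:

```python
def xor_block(a: bytes, b: bytes) -> bytes:
    # XOR two equal-length memory blocks in one big-int operation.
    assert len(a) == len(b)
    n = int.from_bytes(a, 'little') ^ int.from_bytes(b, 'little')
    return n.to_bytes(len(a), 'little')
```

The same shape works for &, | and ~ (the last needs masking to the block width).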

Extract 14-bit values from an array of bytes in C

牧云@^-^@ · submitted on 2019-12-23 15:08:29
Question: In an arbitrary-sized array of bytes in C, I want to store 14-bit numbers (0-16,383) tightly packed. In other words, in the sequence:

    0000000000000100000000000001

there are two numbers that I wish to be able to arbitrarily store and retrieve as 16-bit integers. (In this case both are 1, but they could be anything in the given range.) If I were to have the functions uint16_t 14bitarr_get(unsigned char* arr, unsigned int index) and void 14bitarr_set(unsigned char* arr, unsigned int index,
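A sketch of those two functions' behavior in Python (hypothetical names arr14_get/arr14_set; the assumption is that field i occupies bits 14*i .. 14*i+13 of the packed bit stream, little-endian). A real C version would read only the two or three bytes each field overlaps; here the whole buffer is converted for brevity:

```python
def arr14_get(arr, index):
    bit = index * 14
    return (int.from_bytes(arr, 'little') >> bit) & 0x3FFF

def arr14_set(arr, index, value):
    bit = index * 14
    val = int.from_bytes(arr, 'little')
    # Clear the 14-bit field, then OR in the new value.
    val = (val & ~(0x3FFF << bit)) | ((value & 0x3FFF) << bit)
    arr[:] = val.to_bytes(len(arr), 'little')
```

Usage: a bytearray of 4 bytes holds two 14-bit fields (28 bits), and setting one field leaves its neighbor untouched.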