bit-manipulation

How can I bit-reflect a byte in Delphi?

ぐ巨炮叔叔 submitted on 2019-12-21 03:37:27

Question: Is there an easy way to bit-reflect a byte variable in Delphi, so that the most significant bit (MSB) becomes the least significant bit (LSB) and vice versa?

Answer 1: In code you can do it like this:

```delphi
function ReverseBits(b: Byte): Byte;
var
  i: Integer;
begin
  Result := 0;
  for i := 1 to 8 do
  begin
    Result := (Result shl 1) or (b and 1);
    b := b shr 1;
  end;
end;
```

But a lookup table would be much more efficient, and only consume 256 bytes of memory.

```delphi
function ReverseBits(b: Byte): Byte; inline;
const Table:
```

2.9999999999999999 >> .5?

被刻印的时光 ゝ submitted on 2019-12-20 18:26:14

Question: I heard that you could right-shift a number by .5 instead of using Math.floor(). I decided to check its limits to make sure it was a suitable replacement, so I checked the following values and got the following results in Google Chrome:

```javascript
2.5 >> .5;                // == 2
2.9999 >> .5;             // == 2
2.999999999999999 >> .5;  // == 2 (15 9s)
2.9999999999999999 >> .5; // == 3 (16 9s)
```

After some fiddling, I found out that the highest possible value of two which, when right-shifted by .5, would yield 2 is 2

Fastest way to enumerate through turned on bits of an integer

房东的猫 submitted on 2019-12-20 13:59:45

Question: What's the fastest way to enumerate through an integer and return the exponent of each bit that is turned on? I have seen an example using << and another using Math.Pow. I am wondering if there is anything else that's really fast. Thanks.

Answer 1: I imagine bit-shifting would be the fastest. Untested, but I think the following ought to be fast (as fast as IEnumerables are, at least).

```csharp
IEnumerable<int> GetExponents(Int32 value)
{
    for (int i = 0; i < 32; i++)
    {
        if ((value & 1) != 0)
            yield return i;
        value >>= 1;
    }
}
```

If

Simple way to set/unset an individual bit

一个人想着一个人 submitted on 2019-12-20 11:42:08

Question: Right now I'm using this to set/unset individual bits in a byte:

```c
if (bit4Set)
    nbyte |= (1 << 4);
else
    nbyte &= ~(1 << 4);
```

But can't you do that in a more simple/elegant way, like setting or unsetting the bit in a single operation? Note: I understand I can just write a function to do that; I'm just wondering if I won't be reinventing the wheel.

Answer 1: Sure! It would be more obvious if you expanded the |= and &= in your code, but you can write:

```c
nbyte = (nbyte & ~(1 << 4)) | (bit4Set << 4);
```

Note that

How can I turn an int into three bytes in Java?

一世执手 submitted on 2019-12-20 11:35:32

Question: I am trying to convert an int into three bytes representing that int (big-endian). I'm sure it has something to do with bitwise AND and bit shifting, but I have no idea how to go about doing it. For example:

```java
int myInt;
// some code
byte b1, b2, b3; // b1 is most significant, then b2, then b3
```

Note: I am aware that an int is 4 bytes and the three bytes have a chance of over/underflowing.

Answer 1: To get the least significant byte:

```java
b3 = (byte)(myInt & 0xFF);
```

The 2nd least significant byte:

```java
b2 = (myInt >>
```

How in swift to convert Int16 to two UInt8 Bytes

前提是你 submitted on 2019-12-20 10:39:08

Question: I have some binary data that encodes a two-byte value as a signed integer.

```swift
bytes[1] = 255 // 0xFF
bytes[2] = 241 // 0xF1
```

Decoding: This is fairly easy. I can extract an Int16 value from these bytes with:

```swift
Int16(bytes[1]) << 8 | Int16(bytes[2])
```

Encoding: This is where I'm running into issues. Most of my data spec called for UInt, and that is easy, but I'm having trouble extracting the two bytes that make up an Int16.

```swift
let nv: Int16 = -15
UInt8(nv >> 8) // fail
UInt8(nv)      // fail
```

Question: How would I

How to generate an SSE4.2 popcnt machine instruction

∥☆過路亽.° submitted on 2019-12-20 10:21:38

Question: Using the C program:

```c
int main(int argc, char** argv)
{
    return __builtin_popcountll(0xf0f0f0f0f0f0f0f0);
}
```

and the compiler line (gcc 4.4, Intel Xeon L3426):

```
gcc -msse4.2 poptest.c -o poptest
```

I do NOT get the built-in popcnt instruction; rather, the compiler generates a lookup table and computes the popcount that way. The resulting binary is over 8000 bytes. (Yuk!) Thanks so much for any assistance.

Answer 1: You have to tell GCC to generate code for an architecture that supports the popcnt

Convert 0x1234 to 0x11223344

谁都会走 submitted on 2019-12-20 08:24:21

Question: How do I expand the hexadecimal number 0x1234 to 0x11223344 in a high-performance way?

```c
unsigned int c = 0x1234, b;
b = (c & 0xff) << 4 | c & 0xf | (c & 0xff0) << 8 |
    (c & 0xff00) << 12 | (c & 0xf000) << 16;
printf("%#x -> %#x\n", c, b);
```

Output: 0x1234 -> 0x11223344

I need this for color conversion. Users provide their data in the form 0xARGB, and I need to convert it to 0xAARRGGBB. And yes, there could be millions, because each could be a pixel; 1000x1000 pixels equals one million. The

Saturating subtract/add for unsigned bytes

一曲冷凌霜 submitted on 2019-12-20 07:59:02

Question: Imagine I have two unsigned bytes b and x. I need to calculate bsub as b - x and badd as b + x. However, I don't want underflow/overflow to occur during these operations. For example (pseudo-code):

```c
b = 3; x = 5;
bsub = b - x; // bsub must be 0, not 254
```

and

```c
b = 250; x = 10;
badd = b + x; // badd must be 255, not 4
```

The obvious way to do this includes branching:

```c
bsub = b - min(b, x);
badd = b + min(255 - b, x);
```

I just wonder if there are any better ways to do this, i.e. by some hacky bit

Why does shifting int a = 1 left 31 bits and then right 31 bits yield -1?

浪子不回头ぞ submitted on 2019-12-20 07:46:26

Question: Given int a = 1; (00000000000000000000000000000001), what I did is just:

```c
a = (a << 31) >> 31;
```

I assumed a should still be 1 after this statement (nothing changed, I thought). However, it turns out to be -1 (11111111111111111111111111111111). Does anyone know why?

Answer 1: What you are missing is that in C++ the right shift >> is implementation-defined for a signed value: it can be either a logical or an arithmetic shift. In this case it is an arithmetic shift, shifting in 1s from the left to retain the sign of the shifted value.