bit-manipulation

More idiomatic way in Go to encode a []byte slice into an int64?

徘徊边缘 submitted on 2019-12-03 12:28:05
Is there a better or more idiomatic way in Go to encode a []byte slice into an int64?

```go
package main

import "fmt"

func main() {
	var mySlice = []byte{244, 244, 244, 244, 244, 244, 244, 244}
	var data int64
	for i := 0; i < 8; i++ {
		data |= int64(mySlice[i]&byte(255)) << uint((8*8)-((i+1)*8))
	}
	fmt.Println(data)
}
```

http://play.golang.org/p/VjaqeFkgBX

You can use encoding/binary's ByteOrder to do this for 16-, 32-, and 64-bit types (Play):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	var mySlice = []byte{244, 244, 244, 244, 244, 244, 244, 244}
	data := binary.BigEndian.Uint64(mySlice)
	fmt.Println(data)
}
```

Hack to convert javascript number to UInt32

久未见 submitted on 2019-12-03 12:18:46
Edit: This question is out of date, as the Polyfill example has been updated. I'm leaving the question here just for reference. Read the correct answer for useful information on bitwise shift operators.

Question: On line 7 of the Polyfill example on the Mozilla Array.prototype.indexOf page, they comment this:

```javascript
var length = this.length >>> 0; // Hack to convert object.length to a UInt32
```

But the bitwise shift specification on Mozilla clearly states that the operator returns a value of the same type as the left operand:

> Shift operators convert their operands to thirty-two-bit integers and return a result of the same type as the left operand.
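To see what `>>> 0` actually does, a few illustrative cases (runnable in Node; the point is that the unsigned right shift forces a ToUint32 conversion on its left operand):

```javascript
// ">>> 0" coerces any value to an unsigned 32-bit integer.
console.log((-1) >>> 0);        // 4294967295: negatives wrap modulo 2^32
console.log(4294967296 >>> 0);  // 0: values reduce modulo 2^32
console.log("7" >>> 0);         // 7: strings are converted to numbers first
console.log(undefined >>> 0);   // 0: non-numeric values become 0
```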

From hexadecimal to one's complement in Python

元气小坏坏 submitted on 2019-12-03 12:13:41
Is there an easy way to produce a one's complement in Python? For instance, if you take the hex value 0x9E, I need to convert it to 0x61. I need to swap the binary 1's for 0's and 0's for 1's. It feels like this should be simple.

Just use the XOR operator ^ against 0xFF:

```python
>>> hex(0x9E ^ 0xFF)
'0x61'
```

If you need to work with values larger than a byte, you can create the mask from the int.bit_length() method on your value:

```python
>>> value = 0x9E
>>> mask = (1 << value.bit_length()) - 1
>>> hex(value ^ mask)
'0x61'
>>> value = 0x9E9E
>>> mask = (1 << value.bit_length()) - 1
>>> hex(value ^ mask)
'0x6161'
```
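One caveat worth noting: the bit_length() mask sizes itself to the highest set bit, so leading zero bits are not flipped (0x0E has a bit_length of 4, not 8). A width-explicit variant avoids that; the helper name here is illustrative:

```python
def ones_complement(value: int, width: int) -> int:
    """Flip every bit of ``value`` within a fixed bit ``width``.

    Unlike a mask built from value.bit_length(), this preserves
    leading zero bits: ones_complement(0x0E, 8) == 0xF1, whereas the
    bit_length() approach would give 0x1.
    """
    return value ^ ((1 << width) - 1)
```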

Writing files in bit form to a file in C

最后都变了- submitted on 2019-12-03 12:10:06
I am implementing the Huffman algorithm in C. I have the basic functionality down, up to the point where the binary codewords are obtained. So, for example, "abcd" will be 100011000 or something similar. Now the question is: how do you write this code in binary form to the compressed file? I mean, if I write it normally, each 1 and 0 will be one character, so there is no compression. I need to write those 1s and 0s in their bit form. Is that possible in C? If so, how?

Collect bits until you have enough to fill a byte, and then write it. E.g. something like this:

```c
int current_bit = 0;
unsigned
```
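The answer's snippet is cut short above; a self-contained sketch of the same bit-buffer idea looks like this (the names write_bit and flush_bits are illustrative, and this variant fills each byte MSB-first):

```c
#include <stdio.h>

static unsigned char bit_buffer = 0;
static int current_bit = 0;

/* Accumulate one bit; once 8 have been collected, flush the byte to f. */
void write_bit(FILE *f, int bit) {
    bit_buffer = (unsigned char)((bit_buffer << 1) | (bit & 1));
    current_bit++;
    if (current_bit == 8) {
        fputc(bit_buffer, f);
        bit_buffer = 0;
        current_bit = 0;
    }
}

/* Pad any final partial byte with zero bits and flush it. */
void flush_bits(FILE *f) {
    while (current_bit != 0)
        write_bit(f, 0);
}
```

A real Huffman encoder also needs to record how many padding bits were added (or the symbol count) so the decoder knows where the stream ends.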

Templatized branchless int max/min function

好久不见. submitted on 2019-12-03 11:58:46
Question: I'm trying to write a branchless function to return the MAX or MIN of two integers without resorting to if (or ?:). Using the usual technique, I can do this easily enough for a given word size:

```cpp
inline int32 imax( int32 a, int32 b )
{
    // signed for arithmetic shift
    int32 mask = a - b;
    // mask < 0 means MSB is 1.
    return a + ( ( b - a ) & ( mask >> 31 ) );
}
```

Now, assuming arguendo that I really am writing the kind of application on the kind of in-order processor where this is necessary, my
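One way to templatize the trick above is to derive the shift count from the type's width. This is a sketch, not a definitive answer to the question: it assumes arithmetic right shift on signed types (true on mainstream compilers, but only guaranteed by the standard since C++20) and that b - a does not overflow, the same caveats as the 32-bit original:

```cpp
#include <climits>
#include <type_traits>

// All-ones mask when b < a, all-zeros otherwise; then select a or b.
template <typename T>
inline T imax(T a, T b) {
    static_assert(std::is_signed<T>::value, "requires a signed integer type");
    T mask = (b - a) >> (sizeof(T) * CHAR_BIT - 1);
    return b - ((b - a) & mask); // b normally; b - (b - a) == a when b < a
}
```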

Bitwise operations with CGBitmapInfo and CGImageAlphaInfo

梦想与她 submitted on 2019-12-03 11:51:05
I'm having trouble performing bitwise operations with CGImageAlphaInfo and CGBitmapInfo in Swift. In particular, I don't know how to port this Objective-C code:

```objc
bitmapInfo &= ~kCGBitmapAlphaInfoMask;
bitmapInfo |= kCGImageAlphaNoneSkipFirst;
```

The following straightforward Swift port produces the somewhat cryptic compiler error 'CGBitmapInfo' is not identical to 'Bool' on the last line:

```swift
bitmapInfo &= ~CGBitmapInfo.AlphaInfoMask
bitmapInfo |= CGImageAlphaInfo.NoneSkipFirst
```

Looking at the source code, I noticed that CGBitmapInfo is declared as a RawOptionSetType while CGImageAlphaInfo isn't. Maybe

Bit hack: Expanding bits

时光总嘲笑我的痴心妄想 submitted on 2019-12-03 11:35:36
I am trying to convert a uint16_t input to a uint32_t bit mask. One bit in the input toggles two bits in the output bit mask. Here is an example converting a 4-bit input to an 8-bit bit mask:

```
Input          Output
ABCDb    ->    AABB CCDDb
```

A, B, C, D are individual bits. Example outputs:

```
0000b -> 0000 0000b
0001b -> 0000 0011b
0010b -> 0000 1100b
0011b -> 0000 1111b
....
1100b -> 1111 0000b
1101b -> 1111 0011b
1110b -> 1111 1100b
1111b -> 1111 1111b
```

Is there a bithack-y way to achieve this behavior?

thndrwrks: "Interleaving bits by Binary Magic Numbers" contained the clue:

```c
uint32_t expand_bits(uint16_t bits) {
```
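The answer's code is truncated above. A sketch of how the magic-numbers technique it references completes the function: spread the 16 input bits to the even positions of a 32-bit word with masked shifts, then OR the result with itself shifted left by one to duplicate each bit:

```c
#include <stdint.h>

uint32_t expand_bits(uint16_t bits) {
    uint32_t x = bits;
    x = (x | (x << 8)) & 0x00FF00FF; /* split into two bytes, 8 bits apart   */
    x = (x | (x << 4)) & 0x0F0F0F0F; /* split each byte into nibbles         */
    x = (x | (x << 2)) & 0x33333333; /* split nibbles into bit pairs         */
    x = (x | (x << 1)) & 0x55555555; /* every input bit now at an even index */
    return x | (x << 1);             /* copy each bit to the odd index above */
}
```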

Power set generated by bits

我们两清 submitted on 2019-12-03 11:33:28
Question: I have this code, which generates the power set of an array of size 4 (the number is just an example; fewer combinations to write...).

```c
#define ARRAY_SIZE 4

unsigned int i, j, bits, i_max = 1U << ARRAY_SIZE;
int array[ARRAY_SIZE];
for (i = 0; i < i_max; ++i) {
    for (bits = i, j = 0; bits; bits >>= 1, ++j) {
        if (bits & 1)
            printf("%d", array[j]);
    }
}
```

Output (shown with array = {1, 2, 3, 4} and braces/separators added for readability; the code as written prints only bare digits):

{} {1} {2} {1, 2} {3} {1, 3} {2, 3} {1, 2, 3} {4} {1, 4} {2, 4} {1, 2, 4} {3, 4} {1, 3, 4} {2, 3, 4} {1, 2, 3, 4}

I need that output to be like this one
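A runnable variant of the same loop that actually produces the braced output above; the helper name format_subset is hypothetical, added here so each subset can be built as a string:

```c
#include <stdio.h>
#include <string.h>

#define ARRAY_SIZE 4

/* Format subset `i` of `array` (bit j selects array[j]) as e.g. "{1, 2}". */
void format_subset(unsigned int i, const int *array, char *out) {
    unsigned int bits, j;
    int first = 1;
    char *p = out;
    p += sprintf(p, "{");
    for (bits = i, j = 0; bits; bits >>= 1, ++j) {
        if (bits & 1) {
            p += sprintf(p, first ? "%d" : ", %d", array[j]);
            first = 0;
        }
    }
    sprintf(p, "}");
}
```

Looping i from 0 to (1U << ARRAY_SIZE) - 1 and printing format_subset(i, ...) reproduces the listing above, in the same counting order.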

Will bit-shift by zero bits work correctly?

那年仲夏 submitted on 2019-12-03 11:29:32
Question: Say I have a function like this:

```cpp
inline int shift( int what, int bitCount )
{
    return what >> bitCount;
}
```

It will be called from different sites; each time, bitCount will be non-negative and within the number of bits in int. I'm particularly concerned about a call with bitCount equal to zero: will it work correctly then? Also, is there a chance that a compiler, seeing the whole code of the function when compiling its call site, will reduce calls with bitCount equal to zero to a no-op?

Answer 1: It is
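For reference, shifting by zero bits is well-defined: the standard only requires the shift count to be non-negative and less than the width of the promoted left operand, and E >> 0 yields E unchanged. A quick sanity check:

```cpp
// Shift by zero is a no-op on the value; shifts by 0 <= n < width(int)
// are all well-defined.
inline int shift(int what, int bitCount) { return what >> bitCount; }
```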

How to create a mask with the least significant bits set to 1 in C

佐手、 submitted on 2019-12-03 11:23:05
Can someone please explain this function to me?

> A mask with the least significant n bits set to 1. Ex: n = 6 --> 0x2F, n = 17 --> 0x1FFFF

I don't get these at all, especially how n = 6 --> 0x2F. (Note: n = 6 should give 0x3F, i.e. binary 111111; 0x2F appears to be a typo in the quoted exercise.) Also, what is a mask?

The usual way is to take a 1 and shift it left n bits. That will give you something like: 00100000. Then subtract one from that, which will clear the bit that's set and set all the less significant bits, so in this case we'd get: 00011111. A mask is normally used with bitwise operations, especially and. You'd use the mask above to get the 5 least significant bits by
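The recipe above as a small function, with a guard for the edge case n == width, where the shift itself would be undefined behavior in C (note that n = 6 yields 0x3F, consistent with 0x2F being a typo in the question):

```c
#include <limits.h>

/* A mask with the least significant n bits set to 1: (1 << n) - 1. */
unsigned int lsb_mask(unsigned int n) {
    if (n >= sizeof(unsigned int) * CHAR_BIT)
        return ~0u;              /* all bits set; 1u << 32 would be UB */
    return (1u << n) - 1;
}
```

Typical use is then `value & lsb_mask(n)` to keep only the low n bits of a value.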