bit-manipulation

Why result of unsigned char << unsigned char is not unsigned char

我怕爱的太早我们不能终老 submitted on 2019-12-05 05:45:21
I'm getting results from a left shift that I can't explain.

    unsigned char value = 0xff; // 1111 1111
    unsigned char shift = 0x01; // 0000 0001
    std::cout << "SIZEOF value " << sizeof(value) << "\n"; // prints 1 as expected
    std::cout << "SIZEOF shift " << sizeof(shift) << "\n"; // prints 1 as expected
    std::cout << "result " << (value << shift) << "\n"; // prints 510 ???
    std::cout << "SIZEOF result " << sizeof(value << shift) << "\n"; // prints 4 ???

I was expecting the result to be 1111 1110, but instead I get an int (?) with the value 1 1111 1110. How can the bits of an unsigned char be …

Is it possible to do bitwise operations on a string in Python?

随声附和 submitted on 2019-12-05 05:37:23
This fails, not surprisingly:

    >>> 'abc' << 8
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: unsupported operand type(s) for <<: 'str' and 'int'
    >>>

With ASCII abc being equal to 011000010110001001100011, or 6382179, is there a way to shift it some arbitrary amount, so that 'abc' << 8 would be 01100001011000100110001100000000? What about other bitwise operations, e.g. 'abc' & 63 = 100011? What you probably want is the bitstring module (see http://code.google.com/p/python-bitstring/ ). It seems to support bitwise operations as well as a bunch of other manipulations …

How does ConstantTimeByteEq work?

…衆ロ難τιáo~ submitted on 2019-12-05 05:14:05
In Go's cryptography library, I found the function ConstantTimeByteEq. What does it do, and how does it work?

    // ConstantTimeByteEq returns 1 if x == y and 0 otherwise.
    func ConstantTimeByteEq(x, y uint8) int {
        z := ^(x ^ y)
        z &= z >> 4
        z &= z >> 2
        z &= z >> 1
        return int(z)
    }

x ^ y is x XOR y: the result has a 1 in each bit where x and y differ, and a 0 in each bit where they are the same:

    x     = 01010011
    y     = 00010011
    x ^ y = 01000000

^(x ^ y) negates this, i.e. you get 0 for the bits where they differ and 1 otherwise:

    ^(x ^ y) = 10111111 => z

Then we start shifting z right, masking its bits by themselves. …

bitwise indexing in C?

大憨熊 submitted on 2019-12-05 04:50:00
I'm trying to implement a data compression idea I've had, and since I imagine running it against a large corpus of test data, I thought I'd code it in C (I mostly have experience in scripting languages like Ruby and Tcl). Looking through the O'Reilly 'cow' books on C, I realize that I can't simply index the bits of a plain char or int variable as I'd like to, in order to do bitwise comparisons and operations. Am I correct in this perception? Is it reasonable for me to use an enumerated type for representing a bit (and make an array of these, and write functions to convert to and from …

Multiplying using Bitwise Operators

南楼画角 submitted on 2019-12-05 04:45:34
I was wondering how I could go about multiplying a series of binary bits using bitwise operators. Specifically, I'm interested in doing this to find the decimal fraction value of a binary value. Here's an example of what I'm trying to do: given, say, 1010010, I want to use each individual bit so that it is computed as:

    1*(2^-1) + 0*(2^-2) + 1*(2^-3) + 0*(2^-4) ...

Though I'm interested in doing this in ARM assembly, an example in C/C++ would help as well. I was thinking of …

How to work with bitfields longer than 64 bits?

懵懂的女人 submitted on 2019-12-05 04:33:01
Question says it all. If I have this for a 96-bit field:

    uint32_t flags[3]; // (thanks @jalf!)

how do I best go about accessing it, given that my subfields may lie across the 32-bit boundaries (e.g. a field that runs from bit 29 to 35)? I need my accesses to be as fast as possible, so I'd rather not iterate over them as 32-bit elements of an array. [This answer is valid for C (and by extension, for C++ as well).] The platform-independent way is to apply bit-masks and bit-shifts as appropriate. So to get the field from bit 29 to 35 (inclusive):

    (flags[1] & 0xF) << 3 | (flags[0] & 0xE0000000) …

Bit parity code for odd number of bits

落爺英雄遲暮 submitted on 2019-12-05 04:31:15
I am trying to find the parity of a bitstring, so that it returns 1 if x has an odd number of 0's. I can only use basic bitwise operations, and what I have so far passes most of the tests, but I'm wondering two things:

Why does x ^ (x + ~1) work? I stumbled upon this, and it seems to give 1 if there is an odd number of bits and something else if even. Like 7^6 = 1 because 7 = 0b0111.

Is this the right direction of problem solving for this? I'm assuming my problem stems from the first operation, specifically (x + ~1), because it would overflow certain 2's complement numbers. Thanks. Code:

    int …

Is there a bit-wise trick for checking the divisibility of a number by 2 or 3?

左心房为你撑大大i submitted on 2019-12-05 04:17:41
I am looking for a bit-wise test equivalent to (num%2) == 0 || (num%3) == 0. I can replace num%2 with num&1, but I'm still stuck with num%3 and with the logical OR. This expression is also equivalent to (num%2)*(num%3) == 0, but I'm not sure how that helps. Yes, though it's not very pretty, you can do something analogous to the old "sum all the decimal digits until you have only one left" trick used to test whether a number is divisible by 9, except in binary and with divisibility by 3. You can use the same principle for other numbers as well, but many combinations of base/divisor introduce annoying …

Bitwise Operations on short

爷,独闯天下 submitted on 2019-12-05 04:11:12
I am using a technology called DDS whose IDL does not support int, so I figured I would just use short; I don't need that many bits. However, when I do this:

    short bit = 0;
    System.out.println(bit);
    bit = bit | 0x00000001;
    System.out.println(bit);
    bit = bit & ~0x00000001;
    bit = bit | 0x00000002;
    System.out.println(bit);

it says "Type mismatch: Cannot convert from int to short". When I change short to long, it works fine. Is it possible to perform bitwise operations like this on a short in Java? Nayuki: When doing any arithmetic on byte, short, or char, the numbers are promoted to …
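The truncated answer's point, sketched: Java promotes both operands of | to int, and unlike with long (where the result type matches the variable), it will not implicitly narrow the int result back to short, so an explicit cast is required on each assignment:

```java
public class ShortBits {
    public static void main(String[] args) {
        short bit = 0;
        // bit | 0x0001 is an int (both operands are promoted),
        // so assigning it back to a short needs an explicit cast.
        bit = (short) (bit | 0x0001);
        assert bit == 1;
        bit = (short) (bit & ~0x0001);
        bit = (short) (bit | 0x0002);
        assert bit == 2;
        System.out.println(bit); // prints 2
    }
}
```

Alternatively, the compound-assignment forms (bit |= 0x0001; bit &= ~0x0001;) compile without a cast, because Java defines them to include an implicit narrowing conversion back to the variable's type.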

# of bits needed to represent a number x

对着背影说爱祢 submitted on 2019-12-05 04:01:54
I am currently trying to write an algorithm that determines how many bits are necessary to represent a number x. My implementation will be in C. There are a few catches, though: I am restricted to pretty much just the bitwise operators {~, &, ^, |, +, <<, >>}, and I cannot use any control flow (if, while, for). My original approach was to examine the number in binary from left to right and look for the position of the first '1'. I am not sure how to approach this …