bit-manipulation

How to rotate the bits in a word

最后都变了 · submitted on 2019-12-02 03:33:18
Question: I'm using a dsPIC33F and GCC. I want to rotate the bits in a word once left or right, like this:

        MSB              LSB
input:  0101 1101 0101 1101
right:  1010 1110 1010 1110
left :  1011 1010 1011 1010

(In case it's not clear, the LSB moves into the MSB's position for the right rotate and vice versa.) My processor already has rotate right (rrnc, rrc) and rotate left (rlnc, rlc) instructions, so I'm hoping the compiler will optimise this in. If not, I might have to use inline assembly.

Answer 1: You may write
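The excerpt cuts off after "You may write". As a minimal sketch (the helper names are mine, and whether GCC actually emits rrnc/rlnc is the hope expressed in the question, not something verified here), the usual portable idiom for a one-bit rotate of a 16-bit word looks like this:

#include <stdint.h>

/* Rotate a 16-bit word by one position. The (x >> 1) | (x << 15) pattern is
 * the shift-and-OR idiom compilers commonly recognize as a rotate. */
static inline uint16_t rotate_right_1(uint16_t x)
{
    return (uint16_t)((x >> 1) | (x << 15));
}

static inline uint16_t rotate_left_1(uint16_t x)
{
    return (uint16_t)((x << 1) | (x >> 15));
}

With the example above, rotate_right_1(0x5D5D) gives 0xAEAE and rotate_left_1(0x5D5D) gives 0xBABA.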

Explain the following C++ method

偶尔善良 · submitted on 2019-12-02 03:28:30
#define XL  33
#define OR  113
#define NOR 313
#define TN  344

int to_bits(int critn, char *mask)
{
    unsigned int x;
    int begin;

    if (critn < XL)       begin = 1;
    else if (critn < OR)  begin = XL;
    else if (critn < NOR) begin = OR;
    else if (critn <= TN) begin = NOR;
    else                  begin = 0;

    x = critn - begin;
    *mask = (char)(0x80 >> (x % 8));
    return (int)(x >> 3); // fast divide by 8
}

I don't have any knowledge of C++. Can anyone explain what this method is doing in the last two lines? Thanks.

In C++, just like most programming languages, you can only return one value. To "return" two values, it's a common C/C++
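To make the last two lines concrete (this usage example is mine, not from the post): the return value is the byte offset of bit number x (x >> 3 is x / 8), and *mask receives a single-bit mask selecting that bit within the byte (0x80 >> (x % 8)), so the caller effectively gets two results from one call.

#include <stdio.h>

int main(void)
{
    unsigned int x = 10;                                  /* bit number 10 */
    unsigned char mask = (unsigned char)(0x80u >> (x % 8));
    unsigned int byte_index = x >> 3;                     /* same as x / 8 */

    /* bit 10 lives in byte 1, at the position selected by mask 0x20 */
    printf("bit %u -> byte %u, mask 0x%02X\n", x, byte_index, mask);
    return 0;
}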

How to convert from sign-magnitude to two's complement

£可爱£侵袭症+ · submitted on 2019-12-02 03:27:16
How would I convert from sign-magnitude to two's complement? I don't know where to start. Any help would be appreciated. I can only use the following operations: !, ~, |, &, ^, +, >>, <<.

/*
 * sm2tc - Convert from sign-magnitude to two's complement
 *   where the MSB is the sign bit
 *   Example: sm2tc(0x80000005) = -5.
 */
int sm2tc(int x) {
    return 2;
}

You can convert sign-magnitude to two's complement by subtracting the number from 0x80000000 if the number is negative. This will work for a 32-bit integer on a machine using two's complement to represent negative values, but if the value is positive
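One common solution to this classic puzzle, using only the allowed operators (this is my sketch, not the truncated answer's approach, and it assumes a 32-bit int with arithmetic right shift of negative values, which C leaves implementation-defined):

int sm2tc(int x) {
    int sign = x >> 31;              /* all ones if the sign bit is set, else zero */
    int magnitude = x & ~(1 << 31);  /* clear the sign bit to keep the magnitude */
    /* negative: ~magnitude + 1 (two's complement negate); positive: unchanged */
    return (magnitude ^ sign) + (sign & 1);
}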

Bitstream of variable-length Huffman codes - How to write to file?

Deadly · submitted on 2019-12-02 03:04:19
Question: I'm working on a Huffman coding/decoding project in C and have a good understanding of how the algorithm should store information about the Huffman tree, re-build the tree during decoding, and decompress to the original input file using variable-length codes. When writing to my compressed file, I will output a table of 256 4-byte integers containing unique frequencies, and I know I will also have to figure out a way to handle EOF - worrying about that later. My question is how should I
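The question is cut off, but the usual answer to "how do I write variable-length codes to a file" is a small bit buffer that packs code bits into bytes. A sketch (the names and details are mine, not from the post):

#include <stdio.h>
#include <stdint.h>

typedef struct {
    FILE    *fp;
    uint8_t  buffer;  /* bits accumulated so far, MSB first */
    int      count;   /* how many bits are currently in buffer */
} BitWriter;

/* Append the low `length` bits of `code` to the stream, most significant first. */
static void put_bits(BitWriter *bw, uint32_t code, int length)
{
    for (int i = length - 1; i >= 0; i--) {
        bw->buffer = (uint8_t)((bw->buffer << 1) | ((code >> i) & 1u));
        if (++bw->count == 8) {       /* a full byte: flush it to the file */
            fputc(bw->buffer, bw->fp);
            bw->buffer = 0;
            bw->count = 0;
        }
    }
}

/* Pad the final partial byte with zero bits and write it out. */
static void flush_bits(BitWriter *bw)
{
    if (bw->count > 0) {
        fputc((uint8_t)(bw->buffer << (8 - bw->count)), bw->fp);
        bw->buffer = 0;
        bw->count = 0;
    }
}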

Best way to store / retrieve bits C# [duplicate]

旧时模样 · submitted on 2019-12-02 03:00:32
This question already has an answer here: Best way to store long binary (up to 512 bit) in C# (5 answers). I am modifying an existing C# solution, wherein data is validated and status is stored as below: a) A given record is validated against a certain number of conditions (say 5). Failed/passed status is represented by a bit value (0 - passed; 1 - failed). b) So, if a record failed all 5 validations, the value will be 11111. This will be converted to a decimal and stored in a DB. Once again, this decimal value will be converted back to binary (using the bitwise & operator) which will be used to show the
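The excerpt stops mid-sentence, but the flag pattern it describes is straightforward. A small illustration in C (the question is about C#, but the bit logic is identical; the names are mine):

#include <stdio.h>

enum {
    CHECK_1 = 1 << 0,
    CHECK_2 = 1 << 1,
    CHECK_3 = 1 << 2,
    CHECK_4 = 1 << 3,
    CHECK_5 = 1 << 4
};

int main(void)
{
    unsigned status = 0;

    status |= CHECK_2;                      /* validation 2 failed */
    status |= CHECK_5;                      /* validation 5 failed */

    printf("stored value: %u\n", status);   /* 18: the decimal that goes in the DB */
    printf("check 2 failed: %s\n", (status & CHECK_2) ? "yes" : "no");
    printf("check 3 failed: %s\n", (status & CHECK_3) ? "yes" : "no");
    return 0;
}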

Extracting bits using bit manipulation

梦想的初衷 · submitted on 2019-12-02 02:57:17
I have a 32-bit unsigned int and I need to extract bits at given positions and make a new number out of those bits. For example, if I have 0xFFFFFFFF and want bits 0, 10, and 11, my result will be 7 (111b). This is my attempt; it extracts the bits correctly but doesn't create the correct result. I'm shifting the result one place left and ANDing it with my extracted bit; apparently this is incorrect though? I'm also sure there is probably a much more elegant way to do this?

#define TEST 0xFFFFFFFF

unsigned int extractBits(unsigned short positions[], unsigned short count, unsigned int bytes)
{
    unsigned
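A working version of the loop (my sketch, not the poster's fixed code): the fix is to OR the extracted bit into the shifted result instead of ANDing, since ANDing a freshly shifted result with a single bit zeroes it out.

#include <stdio.h>

unsigned int extractBits(const unsigned short positions[], unsigned short count,
                         unsigned int value)
{
    unsigned int result = 0;
    for (unsigned short i = 0; i < count; i++) {
        /* shift the result left and OR in the bit at the requested position */
        result = (result << 1) | ((value >> positions[i]) & 1u);
    }
    return result;
}

int main(void)
{
    unsigned short positions[] = { 11, 10, 0 };  /* highest-numbered position first */
    printf("%u\n", extractBits(positions, 3, 0xFFFFFFFFu));  /* prints 7 */
    return 0;
}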

How to set and clear different bits with a single line of code (C)

大兔子大兔子 · submitted on 2019-12-02 02:45:37
data |= (1 << 3) sets bit 3 without disrupting other bits. data &= ~(1 << 4) resets bit 4 without disrupting other bits. How can I accomplish both tasks in a single instruction? (As this is really only for readability, I plan on #define-ing this in a cute way like #define gpioHigh(x) <insert code>. The alternative is to figure out how to correctly pass a GPIO pointer into functions that I write expressly for this purpose, but eff that.) Thanks! Mike

It's not possible in a single instruction. This is because there are 3 possible operations you need to do on the different bits: Set them (bit
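For what the question calls readability, the two updates can at least be folded into one C statement behind a macro, even though (as the answer notes) that is still not a single machine instruction. A sketch, with names of my own invention:

#include <stdint.h>

/* One read-modify-write that sets one bit and clears another. */
#define SET_AND_CLEAR(data, set_bit, clear_bit) \
    ((data) = ((data) | (1u << (set_bit))) & ~(1u << (clear_bit)))

int main(void)
{
    uint16_t data = 0x0010;         /* bit 4 set, bit 3 clear */
    SET_AND_CLEAR(data, 3, 4);      /* set bit 3, clear bit 4 */
    return data == 0x0008 ? 0 : 1;  /* data is now 0x0008 */
}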

Why does << 32 not result in 0 in JavaScript?

牧云@^-^@ · submitted on 2019-12-02 02:25:44
This is false: (0xffffffff << 31 << 1) === (0xffffffff << 32). It seems like it should be true. Adding >>> 0 anywhere does not change this. Why is this, and how can I correctly write code that handles << 32? The shift operators always effectively have a right operand in the range 0-31. From the Mozilla docs: "Shift operators convert their operands to 32-bit integers in big-endian order and return a result of the same type as the left operand. The right operand should be less than 32, but if not only the low five bits will be used." Or from the ECMAScript 5 standard: The production
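The fix implied by the answer is to avoid relying on a shift count of 32 at all. The question is about JavaScript, where the count is taken modulo 32; the same trap exists in C, where shifting a 32-bit value by 32 or more is undefined behaviour. A C sketch of a guarded shift that yields the mathematically expected 0 (the helper name is mine):

#include <stdint.h>
#include <stdio.h>

/* Left shift that treats counts of 32 or more as "shift everything out". */
static uint32_t shl_safe(uint32_t value, unsigned count)
{
    return count >= 32 ? 0u : (value << count);
}

int main(void)
{
    printf("0x%08X\n", shl_safe(0xFFFFFFFFu, 32));  /* 0x00000000 */
    printf("0x%08X\n", shl_safe(0xFFFFFFFFu, 1));   /* 0xFFFFFFFE */
    return 0;
}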

Why is the output -33 for this code snippet

大兔子大兔子 · submitted on 2019-12-02 01:43:31
#include <stdio.h>

int main()
{
    int a = 32;
    printf("%d\n", ~a); // line 2
    return 0;
}

Output: -33

Actually, in the original snippet line 2 was printf("%x\n", ~a);. I solved that like this: 32 in hex is 0x20, i.e. 0000 0000 0010 0000; the tilde operator complements it to 1111 1111 1101 1111 = ffdf. I am confused about how to solve it when I have printf("%d\n", ~a); on line 2, i.e. %d, NOT %x.

In your C implementation, as in most modern implementations of any programming language, signed integers are represented with two's complement. In two's complement, the high bit indicates a negative number, and the values are
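To see where -33 comes from (my illustration, not part of the truncated answer): in two's complement, ~x equals -(x + 1), so ~32 is -33; printing the same bit pattern with %x just shows the raw hexadecimal instead.

#include <stdio.h>

int main(void)
{
    int a = 32;
    printf("%d\n", ~a);            /* -33 */
    printf("%d\n", -(a + 1));      /* -33, the identical value */
    printf("%x\n", (unsigned)~a);  /* ffffffdf on a 32-bit int */
    return 0;
}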

Fast way to remove bits from a ulong

ぃ、小莉子 · submitted on 2019-12-02 01:35:58
I want to remove bits from a 64-bit string (represented by an unsigned long). I could do this with a sequence of mask and shift operations, or iterate over each bit as in the code below. Is there some clever bit-twiddling method to make this perform quicker?

public ulong RemoveBits(ulong input, ulong mask)
{
    ulong result = 0;
    ulong readbit = 1;
    ulong writebit = 1;
    for (int i = 0; i < 64; i++)
    {
        if ((mask & readbit) == 0)  // 0 in the mask means retain that bit
        {
            if ((input & readbit) > 0)
            {
                result += writebit;
            }
            writebit *= 2;
        }
        readbit *= 2;
    }
    return result;
}

I need to perform RemoveBits millions
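This operation is exactly a bit "compress". On x86 processors with BMI2 it maps to a single PEXT instruction, exposed in C as the _pext_u64 intrinsic (compile with BMI2 enabled, e.g. -mbmi2 for GCC/Clang). A sketch in C rather than C#, with the mask inverted because the question uses 0 to mean "retain":

#include <stdint.h>
#include <immintrin.h>

/* Keep the bits where mask is 0, packed into the low end of the result. */
static uint64_t remove_bits(uint64_t input, uint64_t mask)
{
    return _pext_u64(input, ~mask);
}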