bit-manipulation

Type conversion warning after bitwise operations in C

Submitted by 荒凉一梦 on 2019-12-05 15:37:43
Question: How do you explain that line 7 gets a warning, but not line 5 or line 6?

    int main()
    {
        unsigned char a = 0xFF;
        unsigned char b = 0xFF;
        a = a | b;                          // 5: (no warning)
        a = (unsigned char)(b & 0xF);       // 6: (no warning)
        a = a | (unsigned char)(b & 0xF);   // 7: (warning)
        return 0;
    }

GCC 4.6.2 output when compiled on a 32-bit architecture (Windows PC):

    gcc -c main.c --std=c89 -Wall -Wextra -Wconversion -pedantic
    main.c: In function 'main':
    main.c:7:11: warning: conversion to 'unsigned char' from 'int'
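
Both operands of | undergo the integer promotions, so the right-hand side of each assignment has type int, and -Wconversion warns when GCC cannot prove the int value fits back into unsigned char. A minimal sketch of the usual workaround (assuming the goal is simply to silence the warning) is to cast the complete expression rather than a single operand:

    #include <stdio.h>

    int main(void)
    {
        unsigned char a = 0xFF;
        unsigned char b = 0xFF;

        /* Casting the whole expression back to unsigned char makes the
           narrowing explicit, so -Wconversion stays quiet. */
        a = (unsigned char)(a | (unsigned char)(b & 0xF));

        printf("a = %u\n", (unsigned)a);
        return 0;
    }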

Are enums the canonical way to implement bit flags?

Submitted by 放肆的年华 on 2019-12-05 15:30:25
Question: Currently I'm using enums to represent a state in a little game experiment. I declare them like so:

    namespace State {
        enum Value {
            MoveUp    = 1 << 0, // 00001 == 1
            MoveDown  = 1 << 1, // 00010 == 2
            MoveLeft  = 1 << 2, // 00100 == 4
            MoveRight = 1 << 3, // 01000 == 8
            Still     = 1 << 4, // 10000 == 16
            Jump      = 1 << 5
        };
    }

So that I can use them this way:

    State::Value state = State::Value(0);
    state = State::Value(state | State::MoveUp);
    if (mState & State::MoveUp)
        movement.y -= mPlayerSpeed;

But I'm
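
Enums are one common way, but not the only one. Another frequently seen pattern (a sketch built on the same enumerators, not necessarily "canonical") is to overload the bitwise operators for the enum so the explicit State::Value(...) casts disappear:

    namespace State {
        enum Value {
            MoveUp    = 1 << 0,
            MoveDown  = 1 << 1,
            MoveLeft  = 1 << 2,
            MoveRight = 1 << 3,
            Still     = 1 << 4,
            Jump      = 1 << 5
        };

        // Overloads so flag values combine without explicit casts.
        inline Value operator|(Value a, Value b) {
            return static_cast<Value>(static_cast<unsigned>(a) | static_cast<unsigned>(b));
        }
        inline Value operator&(Value a, Value b) {
            return static_cast<Value>(static_cast<unsigned>(a) & static_cast<unsigned>(b));
        }
        inline Value& operator|=(Value& a, Value b) { a = a | b; return a; }
    }

    // Usage sketch:
    //     State::Value state = State::Still;
    //     state |= State::MoveUp;
    //     if (state & State::MoveUp) { /* ... */ }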

Increase a double to the next closest value?

Submitted by 旧城冷巷雨未停 on 2019-12-05 14:20:13
This isn't a question for a real-life project; I'm only curious. We can increase an int using the increment operator (i++). You can define this operation as: it increases the variable to the closest value above i, which in this case is simply +1. But I was thinking of determining the number of double values available in a specific range according to the IEEE 754-2008 system. I would be able to set up a graph which demonstrates these amounts in some ranges and see how they decrease. I guess there should be a bitwise way of increasing a double to the closest value greater than the original
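
A sketch of both routes, assuming a positive, finite, non-zero double: std::nextafter is the portable way, and the bit-level increment works because doubles of one sign order the same way as their 64-bit patterns.

    #include <cmath>
    #include <cstdint>
    #include <cstring>
    #include <cstdio>

    int main() {
        double d = 1.0;

        // Portable: the next representable double toward +infinity.
        double up = std::nextafter(d, INFINITY);

        // Bit-level view hinted at in the question: for positive, finite,
        // non-zero doubles, adding 1 to the 64-bit representation yields
        // the same next value.
        std::uint64_t bits;
        std::memcpy(&bits, &d, sizeof bits);
        ++bits;
        double up2;
        std::memcpy(&up2, &bits, sizeof up2);

        std::printf("%.17g %.17g %d\n", up, up2, up == up2);  // the two agree
        return 0;
    }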

The structure of Deflate compressed block

Submitted by ☆樱花仙子☆ on 2019-12-05 14:13:08
I'm having trouble understanding the Deflate algorithm (RFC 1951).

TL;DR: How do I parse the Deflate compressed block 4be4 0200?

I created a file containing a letter and a newline (a\n) and ran gzip a.txt. The resulting a.txt.gz:

    1f8b 0808 fe8b eb55 0003 612e 7478 7400
    4be4 0200 07a1 eadd 0200 0000

I understand that the first line is the header with additional information, and the last line is the CRC32 plus the size of the input (RFC 1952). These two give me no trouble. But how do I interpret the compressed block itself (the middle part)? Here are its hexadecimal and binary representations:

    4be4 0200
    0100 1011 1110
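
Deflate is consumed bit by bit, least significant bit of each byte first. As a rough sketch (the get_bits helper is my own illustration, not taken from the RFC), the first three bits of 0x4b form the block header: BFINAL = 1 (last block) and BTYPE = 01 (fixed Huffman codes); the remaining bits hold the fixed-Huffman codes for the literal 'a', the literal '\n', and the end-of-block symbol, followed by zero padding.

    #include <cstdio>

    // Hypothetical helper: pull 'count' bits from the buffer, least significant
    // bit of each byte first, assembling them LSB-first into an integer.
    static unsigned get_bits(const unsigned char* buf, unsigned* bitpos, unsigned count) {
        unsigned value = 0;
        for (unsigned i = 0; i < count; ++i) {
            unsigned byte = *bitpos / 8, bit = *bitpos % 8;
            value |= ((buf[byte] >> bit) & 1u) << i;
            ++*bitpos;
        }
        return value;
    }

    int main() {
        const unsigned char block[] = {0x4b, 0xe4, 0x02, 0x00};
        unsigned bitpos = 0;
        unsigned bfinal = get_bits(block, &bitpos, 1);  // 1  -> this is the last block
        unsigned btype  = get_bits(block, &bitpos, 2);  // 01 -> fixed Huffman codes
        std::printf("BFINAL=%u BTYPE=%u\n", bfinal, btype);
        return 0;
    }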

Bitwise operation on a floating point usefulness

Submitted by 拜拜、爱过 on 2019-12-05 13:03:06
I noticed that an SSE instruction exists for floating-point AND, which got me wondering. You can do the same thing with scalars using an fp/integer union. The idea struck me that, if you bitwise OR the components of an array of floats, you can quickly determine whether any of them are negative by looking at the sign bit of the result. What other uses exist for bitwise operations on floating-point values?

A lot. For example, when you only need to do bitwise operations on a floating-point-only instruction set like AVX, those become very handy. Another application: making constants. You can see a lot of
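
A sketch of the sign-bit trick described in the question (a rough illustration, using memcpy for well-defined type punning; note that -0.0f and negative NaNs also set the sign bit):

    #include <cstdint>
    #include <cstddef>
    #include <cstring>
    #include <cstdio>

    // OR together the bit patterns of the floats; if the sign bit of the
    // accumulated result is set, at least one element had its sign bit set.
    static bool any_negative(const float* v, std::size_t n) {
        std::uint32_t acc = 0;
        for (std::size_t i = 0; i < n; ++i) {
            std::uint32_t bits;
            std::memcpy(&bits, &v[i], sizeof bits);
            acc |= bits;
        }
        return (acc & 0x80000000u) != 0;
    }

    int main() {
        float a[] = {1.0f, 2.5f, -3.0f};
        std::printf("%d\n", any_negative(a, 3));  // prints 1
        return 0;
    }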

Which is the most efficient way to extract an arbitrary range of bits from a contiguous sequence of words?

Submitted by 依然范特西╮ on 2019-12-05 12:39:39
Suppose we have an std::vector, or any other sequence container (sometimes it will be a deque), which stores uint64_t elements. Now let's view this vector as a sequence of size() * 64 contiguous bits. I need to find the word formed by the bits in a given [begin, end) range, given that end - begin <= 64, so it fits in a word. The solution I have right now finds the two words whose parts will form the result, and separately masks and combines them. Since I need this to be as efficient as possible, I've tried to code everything without any if branch so as not to cause branch mispredictions, so for
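
One possible branchless sketch (my own illustration, not the asker's code), under a few assumptions: bit i lives in element i / 64 at position i % 64 (LSB-first), 1 <= end - begin <= 64, and the container keeps one extra sentinel word at the end so the second load is always in range.

    #include <cstdint>
    #include <cstddef>
    #include <vector>

    std::uint64_t extract_bits(const std::vector<std::uint64_t>& words,
                               std::size_t begin, std::size_t end) {
        std::size_t len   = end - begin;   // number of bits wanted, 1..64
        std::size_t word  = begin / 64;
        std::size_t shift = begin % 64;

        std::uint64_t lo = words[word] >> shift;
        // (x << 1) << (63 - shift) equals x << (64 - shift) for shift in [1, 63]
        // and collapses to 0 when shift == 0, avoiding an undefined 64-bit shift.
        std::uint64_t hi = (words[word + 1] << 1) << (63 - shift);

        return (lo | hi) & (~0ull >> (64 - len));
    }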

What are good methods for hashing bits in an Int32 or UInt32?

Submitted by 狂风中的少年 on 2019-12-05 12:28:10
I have an implementation of a pseudo-random number generator, specifically George Marsaglia's XOR-Shift RNG. My implementation is here: FastRandom.cs

It turns out that the first random sample is very closely correlated with the seed, which is fairly obvious if you take a look at the Reinitialise(int seed) method. This is bad. My proposed solution is to mix up the bits of the seed as follows:

    _x = (uint)(  (seed * 2147483647)
                ^ ((seed << 16 | seed >> 48) * 28111)
                ^ ((seed << 32 | seed >> 32) * 69001)
                ^ ((seed << 48 | seed >> 16) * 45083));

So I have significantly weakened any correlation by
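
One widely used bit mixer that could serve the same purpose (a sketch, shown here in C++ rather than C#; the constants are the 32-bit finaliser used by MurmurHash3) is an avalanching xorshift-multiply chain, so that every output bit depends on every seed bit:

    #include <cstdint>

    std::uint32_t mix32(std::uint32_t h) {
        h ^= h >> 16;
        h *= 0x85ebca6bu;
        h ^= h >> 13;
        h *= 0xc2b2ae35u;
        h ^= h >> 16;
        return h;
    }

    // e.g. _x = mix32(static_cast<std::uint32_t>(seed));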

Convert 4 bytes to an unsigned 32-bit integer and storing it in a long

Submitted by 一个人想着一个人 on 2019-12-05 11:52:47
I'm trying to read a binary file in Java. I need methods to read unsigned 8-bit values, unsigned 16-bit values and unsigned 32-bit values. What would be the best way (fastest, nicest-looking code) to do this? I've done this in C++ with something like this:

    uint8_t *buffer;
    uint32_t value = buffer[0] | buffer[1] << 8 | buffer[2] << 16 | buffer[3] << 24;

But in Java this causes a problem if, for example, buffer[1] contains a value which has its sign bit set, as the result of a left-shift is an int (?). Instead of ORing in only 0xA5 at the specific place, it ORs in 0xFFFFA500 or something like that,
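
In Java the usual fix is to mask each byte with 0xFF before shifting (widening to int with the high bits cleared), and to mask the assembled int with 0xFFFFFFFFL when storing it in a long if the full unsigned 32-bit range is needed. The same mechanism, sketched here in C++ with signed bytes (Java's byte is always signed, so it behaves the same way):

    #include <cstdint>
    #include <cstdio>

    int main() {
        // 0xA5 stored in a signed byte is -91; promotion to int before the
        // shift sign-extends it, reproducing the 0xFFFFA500 effect.
        signed char buffer[4] = {0x00, (signed char)0xA5, 0x00, 0x00};

        std::uint32_t bad = buffer[0] | buffer[1] << 8 | buffer[2] << 16 | buffer[3] << 24;

        // Masking each byte with 0xFF keeps only the low 8 bits after promotion.
        std::uint32_t good = (std::uint32_t)(buffer[0] & 0xFF)
                           | (std::uint32_t)(buffer[1] & 0xFF) << 8
                           | (std::uint32_t)(buffer[2] & 0xFF) << 16
                           | (std::uint32_t)(buffer[3] & 0xFF) << 24;

        std::printf("bad  = %08x\n", bad);   // ffffa500
        std::printf("good = %08x\n", good);  // 0000a500
        return 0;
    }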

'memcpy'-like function that supports offsets by individual bits?

Submitted by 安稳与你 on 2019-12-05 11:51:10
I was thinking about solving this, but it's looking to be quite a task. If I take it on by myself, I'll likely write it several different ways and pick the best, so I thought I'd ask this question to see if there's a good library that solves this already, or if anyone has thoughts/advice.

    void OffsetMemCpy(u8* pDest, u8* pSrc, u8 srcBitOffset, size size)
    {
        // Or something along these lines. srcBitOffset is 0-7, so the pSrc buffer
        // needs to be up to one byte longer than it would need to be in memcpy.
        // Maybe explicitly providing the end of the buffer is best.
        // Also note that pSrc has NO
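
A naive byte-at-a-time sketch (correctness first, speed later), under assumptions of my own: srcBitOffset counts from the least significant bit, size is the number of whole bytes to produce, and pSrc is at least size + 1 bytes long whenever srcBitOffset != 0:

    #include <cstdint>
    #include <cstddef>

    void OffsetMemCpy(std::uint8_t* pDest, const std::uint8_t* pSrc,
                      std::uint8_t srcBitOffset, std::size_t size) {
        if (srcBitOffset == 0) {            // plain copy, no bit shifting needed
            for (std::size_t i = 0; i < size; ++i) pDest[i] = pSrc[i];
            return;
        }
        for (std::size_t i = 0; i < size; ++i) {
            // Each output byte straddles two adjacent input bytes.
            pDest[i] = (std::uint8_t)((pSrc[i] >> srcBitOffset) |
                                      (pSrc[i + 1] << (8 - srcBitOffset)));
        }
    }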

Do bitwise operations distribute over addition?

Submitted by 时间秒杀一切 on 2019-12-05 11:34:49
I'm looking at an algorithm I'm trying to optimize, and it's basically a lot of bit twiddling followed by some additions in a tight feedback loop. If I could use carry-save addition for the adders, it would really help me speed things up, but I'm not sure whether I can distribute the operations over the addition. Specifically, if I represent:

    a = sa + ca   (state + carry)
    b = sb + cb

can I represent (a >>> r) in terms of s and c? How about a | b and a & b?

Think about it...

    sa = 1; ca = 1; sb = 1; cb = 1
    a = sa + ca = 2
    b = sb + cb = 2
    (a | b) = 2
    (a & b) = 2
    (sa | sb) + (ca | cb) = (1 | 1) + (1 | 1) = 1 + 1 = 2
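
A quick way to probe an identity like (a | b) == (sa | sb) + (ca | cb) before relying on it is to brute-force small operands; a throwaway sketch of my own (not from the answer above):

    #include <cstdint>
    #include <cstdio>

    int main() {
        int mismatches = 0;
        for (std::uint32_t sa = 0; sa < 8; ++sa)
          for (std::uint32_t ca = 0; ca < 8; ++ca)
            for (std::uint32_t sb = 0; sb < 8; ++sb)
              for (std::uint32_t cb = 0; cb < 8; ++cb) {
                  std::uint32_t a = sa + ca, b = sb + cb;
                  // Count every pair of carry-save operands where OR-ing the
                  // components does not reproduce (a | b).
                  if ((a | b) != ((sa | sb) + (ca | cb)))
                      ++mismatches;
              }
        std::printf("mismatches: %d out of %d\n", mismatches, 8 * 8 * 8 * 8);
        return 0;
    }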