Is a bit field any more efficient (computationally) than masking bits and extracting the data by hand?

Submitted by 人盡茶涼 on 2019-12-03 07:00:47

The compiler generates the same instructions that you would explicitly write to access the bits. So don't expect it to be faster with bitfields.

In fact, strictly speaking, with bit fields you don't control how they are positioned within the word of data (the C99 standard leaves the layout implementation-defined, unless your compiler gives you additional guarantees). Masking by hand, you can at least place the two most often accessed fields first and last in the word, because in those two positions isolating the field takes one operation instead of two.

Those will probably compile to the same machine code, but if it really matters, benchmark it. Or, better yet, just use the bitfield because it's easier!

Quickly testing gcc yields:

shrq    $6, %rdi             ; using bit field
movl    %edi, %eax
andl    $31, %eax

vs.

andl    $130023424, %edi     ; by-hand
shrl    $21, %edi
movl    %edi, %eax

This is a little-endian machine, so the constants differ, but the three instructions are essentially the same.

In this example I would pack the bits manually, not because of access speed, but because of the ability to compare two dt values directly.
In the end the compiler will usually generate code at least as good as yours (and compilers only get better over time), but this code is simple enough that you will probably write optimal code anyway (and this is the kind of micro-optimization you should not be worrying about).

If your dt is an integer formatted as:

yyyyyyyyyyyy|mmmm|ddddd|hhhhh|mmmmmm

Then you can naturally compare them like this.

dt t1(getTimeStamp());
dt t2(getTimeStamp());

if (t1 < t2)
{
    std::cout << "T1 happened before T2\n";
}

By using a bit field structure the code looks like this:

dt t1(getTimeStamp());
dt t2(getTimeStamp());

if (convertToInt(t1) < convertToInt(t2))
{
    std::cout << "T1 happened before T2\n";
}
// or
if ((t1.year < t2.year)
    || ((t1.year == t2.year) && ((t1.month < t2.month)
      || ((t1.month == t2.month) && ((t1.day < t2.day)
        || ((t1.day == t2.day) && (t1.hour  etc.....

Of course you could get the best of both worlds by using a union that has the structure on one side and the int as the alternative. Obviously this depends on exactly how your compiler lays the fields out, and you will need to test that they land in the correct positions (which would be a perfect place to learn about TDD).

It is entirely platform and compiler dependent. Some processors, especially microcontrollers, have bit addressing instructions or bit addressable memory, and the compiler can use these directly if you use built-in language constructs. If you use bit-masking to operate on bits on such a processor, the compiler will have to be smarter to spot the potential optimisation.

On most desktop platforms I would suggest that you are sweating the small stuff, but if you need to know, you should test it by profiling or timing the code, or analyse the generated code. Note that you may get very different results depending on compiler optimisation options, and even different compilers.

Only if your architecture explicitly has a set of instructions for bit-wise manipulation and access.

Depends. If you use bit fields, then you let the compiler worry about how to store the data (bitfields are pretty much completely implementation-defined), which means that:

  • It might use more space than necessary, and
  • Accessing each member will be done efficiently.

The compiler will typically organize the layout of the struct so that the second assumption holds, at the cost of total size of the struct.

The compiler will probably insert padding between each member, to ease access to each field.

On the other hand, if you just store everything in a single unsigned long (or an array of chars), then it's up to you to implement efficient access, but you have a guarantee of the layout. It will take up a fixed size, and there will be no padding. And that means that copying the value around may get less expensive. And it'll be more portable (assuming you use a fixed-size int type instead of just unsigned int).

The compiler can sometimes combine accesses to bit fields in a non-intuitive manner. I once disassembled the code generated (gcc 3.4.6 for SPARC) when accessing 1-bit entries that were used in conditional expressions. The compiler fused the accesses to the bits and did the comparison with integers. I will try to reproduce the idea (I'm not at work and cannot access the source code that was involved):

struct bits {
  int b1:1;
  int b2:1;
  int b3:1;
  ...
} x;

if(x.b1 && x.b2 && !x.b3)
...
if(x.b1 && !x.b2 && x.b3)

was compiled to something equivalent to the following (I know the bit order in my example is the opposite; that is only to keep the example simple).

temp = (x & 7);
if( temp == 6)
...
if( temp == 5)

There is another point to consider if you want to use bit fields (they are often more readable than bit kung-fu): if you have some bits to spare, it can be useful to reserve a whole byte for certain fields, which simplifies access for the processor. An aligned 8-bit field can be loaded with a single byte-move instruction and needs no masking step.
