GCC optimization trick, does it really work?


The example given in that answer was not a very good one, because it calls an unknown function about which the compiler cannot reason much. Here's a better example:

void FillOneA(int *array, int length, int& startIndex)
{
    for (int i = 0; i < length; i++) array[startIndex + i] = 1;
}

void FillOneB(int *array, int length, int& startIndex)
{
    int localIndex = startIndex;
    for (int i = 0; i < length; i++) array[localIndex + i] = 1;
}

The first version optimizes poorly because it needs to protect against the possibility that somebody called it as

int array[10] = { 0 };
FillOneA(array, 5, array[1]);

resulting in { 1, 1, 0, 1, 1, 1, 0, 0, 0, 0 }, since the iteration with i=1 writes to array[1] and thereby modifies startIndex for the remaining iterations.

The second one doesn't need to worry about the possibility that the assignment array[localIndex + i] = 1 will modify localIndex, because localIndex is a local variable whose address has never been taken.
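
To see why the compiler cannot legally perform this transformation on its own, it helps to run both versions with the aliased call from above. This is a minimal sketch of my own (the main function is not part of the original answer; it assumes the two definitions above are linked in):

#include <cstdio>

void FillOneA(int *array, int length, int& startIndex);
void FillOneB(int *array, int length, int& startIndex);

int main()
{
    int a[10] = { 0 };
    int b[10] = { 0 };

    FillOneA(a, 5, a[1]); // a becomes { 1, 1, 0, 1, 1, 1, 0, 0, 0, 0 }
    FillOneB(b, 5, b[1]); // b becomes { 1, 1, 1, 1, 1, 0, 0, 0, 0, 0 }

    for (int i = 0; i < 10; i++) printf("%d ", a[i]);
    printf("\n");
    for (int i = 0; i < 10; i++) printf("%d ", b[i]);
    printf("\n");
}

The two functions are therefore not interchangeable when the arguments alias; by writing FillOneB, the programmer asserts that no such aliasing occurs, which is exactly the information the optimizer is missing in FillOneA.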

In assembly (Intel notation, because that's what I use):

FillOneA:
    mov     edx, [esp+8]     ;; length
    xor     eax, eax         ;; i = 0
    test    edx, edx
    jle     $b               ;; nothing to do if length <= 0
    push    esi
    mov     esi, [esp+16]    ;; &startIndex
    push    edi
    mov     edi, [esp+12]    ;; array
$a: mov     ecx, [esi]       ;; re-read startIndex on every iteration
    add     ecx, eax         ;; startIndex + i
    inc     eax
    mov     [edi+ecx*4], 1   ;; array[startIndex + i] = 1
    cmp     eax, edx
    jl      $a
    pop     edi
    pop     esi
$b: ret

FillOneB:
    mov     ecx, [esp+8]     ;; length
    mov     eax, [esp+12]    ;; &startIndex
    mov     edx, [eax]       ;; startIndex read exactly once
    test    ecx, ecx
    jle     $a
    mov     eax, [esp+4]     ;; array
    push    edi
    lea     edi, [eax+edx*4] ;; &array[startIndex]
    mov     eax, 1
    rep stosd                ;; fill ecx dwords with 1
    pop     edi
$a: ret

ADDED: Here's an example where the compiler's insight is into Bar, and not munge:

class Bar
{
public:
    float getValue() const
    {
        return valueBase * boost;
    }

private:
    float valueBase;
    float boost;
};

class Foo
{
public:
    void munge(float adjustment);
};

void Adjust10A(Foo& foo, const Bar& bar)
{
    for (int i = 0; i < 10; i++)
        foo.munge(bar.getValue());
}

void Adjust10B(Foo& foo, const Bar& bar)
{
    Bar localBar = bar;
    for (int i = 0; i < 10; i++)
        foo.munge(localBar.getValue());
}

The resulting code is

Adjust10A:
    push    ecx
    push    ebx
    mov     ebx, [esp+12] ;; foo
    push    esi
    mov     esi, [esp+20] ;; bar
    push    edi
    mov     edi, 10
$a: fld     [esi+4] ;; bar.boost
    push    ecx
    fmul    [esi] ;; valueBase * boost
    mov     ecx, ebx
    fstp    [esp+16]
    fld     [esp+16]
    fstp    [esp]
    call    Foo::munge
    dec     edi
    jne     $a
    pop     edi
    pop     esi
    pop     ebx
    pop     ecx
    ret     0

Adjust10B:
    sub     esp, 8
    mov     ecx, [esp+16] ;; bar
    mov     eax, [ecx] ;; bar.valueBase
    mov     [esp], eax ;; localBar.valueBase
    fld     [esp] ;; localBar.valueBase
    mov     eax, [ecx+4] ;; bar.boost
    mov     [esp+4], eax ;; localBar.boost
    fmul    [esp+4] ;; localBar.getValue()
    push    esi
    push    edi
    mov     edi, [esp+20] ;; foo
    fstp    [esp+24]
    fld     [esp+24] ;; cache localBar.getValue()
    mov     esi, 10 ;; loop counter
$a: push    ecx
    mov     ecx, edi ;; foo
    fstp    [esp] ;; use cached value
    call    Foo::munge
    fld     [esp]
    dec     esi
    jne     $a ;; loop
    pop     edi
    fstp    ST(0)
    pop     esi
    add     esp, 8
    ret     0

Observe that the inner loop in Adjust10A must recalculate the value since it must protect against the possibility that foo.munge changed bar.

That said, this style of optimization is not a slam dunk. (For example, we could've gotten the same effect by manually caching bar.getValue() into localValue, as sketched below.) It tends to be most helpful for vectorized operations, since those can be parallelized.
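
A sketch of that manual-caching alternative, using the Foo and Bar classes above (the name Adjust10C is mine, not from the original):

void Adjust10C(Foo& foo, const Bar& bar)
{
    float localValue = bar.getValue();  // hoisted out of the loop by hand
    for (int i = 0; i < 10; i++)
        foo.munge(localValue);          // no reload of bar needed inside the loop
}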

First, I'm going to assume munge() cannot be inlined - that is, its definition is not in the same translation unit; you haven't provided complete source, so I can't be entirely sure, but it would explain these results.

Since foo1 is passed to munge as a reference, at the implementation level the compiler just passes a pointer. If we simply forward our argument, this is nice and fast: any aliasing issues are munge()'s problem, and have to be, since munge() can't assume anything about its arguments, and we can't assume anything about what munge() might do with them (its definition is not available).

However, if we copy to a local variable, the compiler must actually materialize that copy and pass a pointer to it. This is because munge() could observe a difference in behavior: if it takes the address of its first argument, it can see that it's not equal to &foo1. Since munge()'s implementation is not in scope, the compiler can't assume it won't do this.
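
As an illustration of that observable difference (a sketch only; the question's exact types and signatures are not shown here, so the struct, the free-function signature, and the namespace-scope foo1 below are assumptions):

struct Foo { int value; };   // stand-in; the question's real Foo is not shown

Foo foo1;

void munge(Foo& f)
{
    if (&f == &foo1)
    {
        f.value = 1;   // caller passed foo1 itself: the write is visible through foo1
    }
    else
    {
        f.value = 2;   // caller passed a local copy: foo1 is untouched unless copied back
    }
}

Because this difference is observable, the compiler cannot quietly substitute the original object for the copy (or vice versa).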

The local-variable-copy trick thus ends up pessimizing rather than optimizing here: the optimizations it is meant to enable are not possible because munge() cannot be inlined, and for the same reason the extra copy actively hurts performance.

It would be instructive to try this again, making sure munge() is non-virtual and available as an inlinable function.
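
One way to rerun the experiment is to give munge() a body the compiler can see, for example by defining it in the header (the body below is an arbitrary placeholder, not the question's real munge()):

class Foo
{
public:
    void munge(float adjustment) { total += adjustment; }  // visible body => inlinable

private:
    float total = 0.0f;
};

With the definition visible and the call non-virtual, the compiler can inline munge(), see exactly which objects it touches, and often hoist the loop-invariant computation itself, with or without the local-copy trick.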
