C code loop performance

面向向阳花 2020-12-12 12:55

I have a multiply-add kernel inside my application and I want to increase its performance.

I use an Intel Core i7-960 (3.2 GHz clock) and have already manually impl

3 answers
  • 2020-12-12 13:34

    Thanks a lot for your answers, this explained a lot. Continuing on my question: when I use packed instructions instead of scalar instructions, the code using intrinsics looks very similar:

    for(int i=0; i<size; i+=16) {
        y1 = _mm_load_ps(&output[i]);
        …
        y4 = _mm_load_ps(&output[i+12]);

        for(k=0; k<ksize; k++){
            for(l=0; l<ksize; l++){
                w  = _mm_set_ps1(weight[i+k+l]);

                x1 = _mm_load_ps(&input[i+k+l]);
                y1 = _mm_add_ps(y1,_mm_mul_ps(w,x1));
                …
                x4 = _mm_load_ps(&input[i+k+l+12]);
                y4 = _mm_add_ps(y4,_mm_mul_ps(w,x4));
            }
        }
        _mm_store_ps(&output[i],y1);
        …
        _mm_store_ps(&output[i+12],y4);
    }
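    For completeness, the loop above can be fleshed out into a minimal self-contained sketch. The sizes, the buffers, and the scalar reference are hypothetical (the question does not give them), and the input loads here use `_mm_loadu_ps` because `&input[i+k+l]` is generally not 16-byte aligned:

    ```c
    #include <assert.h>
    #include <xmmintrin.h>

    /* Hypothetical sizes; the question does not state the real ones. */
    #define SIZE  16   /* must be a multiple of 16 for the unrolled loop */
    #define KSIZE 3

    /* Scalar reference: out[i+j] += wt[i+k+l] * in[i+k+l+j]. */
    static void kernel_scalar(float *out, const float *in, const float *wt)
    {
        for (int i = 0; i < SIZE; i++)
            for (int k = 0; k < KSIZE; k++)
                for (int l = 0; l < KSIZE; l++)
                    out[i] += wt[(i / 16) * 16 + k + l] * in[(i / 16) * 16 + k + l + (i % 16)];
    }

    /* Packed SSE version, with all four vector lanes written out. */
    static void kernel_sse(float *out, const float *in, const float *wt)
    {
        for (int i = 0; i < SIZE; i += 16) {
            __m128 y1 = _mm_load_ps(&out[i]);
            __m128 y2 = _mm_load_ps(&out[i + 4]);
            __m128 y3 = _mm_load_ps(&out[i + 8]);
            __m128 y4 = _mm_load_ps(&out[i + 12]);

            for (int k = 0; k < KSIZE; k++) {
                for (int l = 0; l < KSIZE; l++) {
                    __m128 w = _mm_set_ps1(wt[i + k + l]);

                    /* &in[i+k+l] is not 16-byte aligned in general. */
                    __m128 x1 = _mm_loadu_ps(&in[i + k + l]);
                    __m128 x2 = _mm_loadu_ps(&in[i + k + l + 4]);
                    __m128 x3 = _mm_loadu_ps(&in[i + k + l + 8]);
                    __m128 x4 = _mm_loadu_ps(&in[i + k + l + 12]);

                    y1 = _mm_add_ps(y1, _mm_mul_ps(w, x1));
                    y2 = _mm_add_ps(y2, _mm_mul_ps(w, x2));
                    y3 = _mm_add_ps(y3, _mm_mul_ps(w, x3));
                    y4 = _mm_add_ps(y4, _mm_mul_ps(w, x4));
                }
            }
            _mm_store_ps(&out[i],      y1);
            _mm_store_ps(&out[i + 4],  y2);
            _mm_store_ps(&out[i + 8],  y3);
            _mm_store_ps(&out[i + 12], y4);
        }
    }

    int main(void)
    {
        /* Extra tail so in[i+k+l+15] stays in bounds. */
        static float in[SIZE + 2 * KSIZE]  __attribute__((aligned(16)));
        static float wt[SIZE + 2 * KSIZE]  __attribute__((aligned(16)));
        static float out_a[SIZE]           __attribute__((aligned(16)));
        static float out_b[SIZE]           __attribute__((aligned(16)));

        for (int t = 0; t < SIZE + 2 * KSIZE; t++) {
            in[t] = (float)(t % 7);
            wt[t] = 0.5f * (float)(t % 5);
        }
        for (int t = 0; t < SIZE; t++)
            out_a[t] = out_b[t] = 1.0f;

        kernel_scalar(out_a, in, wt);
        kernel_sse(out_b, in, wt);

        /* Same single-precision ops in the same order per element,
           so the results match exactly. */
        for (int t = 0; t < SIZE; t++)
            assert(out_a[t] == out_b[t]);
        return 0;
    }
    ```

    Checking the vectorized result against a scalar reference like this is cheap insurance when hand-unrolling; any indexing slip in the unrolled body shows up immediately.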
    

    The measured performance of this kernel is about 5.6 FP operations per cycle, although I would expect it to be exactly 4x the performance of the scalar version, i.e. 4 × 1.6 = 6.4 FP ops per cycle.

    Taking the move of the weight factor into account (thanks for pointing that out), the schedule looks like:

    [schedule diagram]

    It looks like the schedule doesn't change, although there is one extra instruction after the movss operation: the movss moves the scalar weight value into the XMM register, and a shufps then copies this scalar across the entire vector. Taking the bypass latency from the load domain to the floating-point domain into account, the weight vector still seems to be ready in time for the mulps, so this shouldn't incur any extra latency.
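    As a quick sanity check (a sketch, not the asker's actual assembly), the movss + shufps pair that compilers emit for `_mm_set_ps1` can be reproduced with intrinsics and compared against `_mm_set_ps1` directly:

    ```c
    #include <assert.h>
    #include <xmmintrin.h>

    int main(void)
    {
        float w = 3.5f;

        /* The two-instruction lowering: movss, then shufps. */
        __m128 v = _mm_load_ss(&w);        /* movss: w into lane 0, rest zeroed */
        v = _mm_shuffle_ps(v, v, 0x00);    /* shufps: broadcast lane 0 to all lanes */

        __m128 r = _mm_set_ps1(w);         /* the intrinsic form */

        float a[4], b[4];
        _mm_storeu_ps(a, v);
        _mm_storeu_ps(b, r);
        for (int i = 0; i < 4; i++)
            assert(a[i] == 3.5f && b[i] == 3.5f);
        return 0;
    }
    ```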

    The movaps (aligned packed move), addps and mulps instructions used in this kernel (checked against the assembly) have the same latency and throughput as their scalar versions, so this shouldn't incur any extra latency either.

    Does anybody have an idea where this extra cycle per 8 cycles is spent, given that the maximum performance this kernel can reach is 6.4 FP ops per cycle and it is running at 5.6 FP ops per cycle?
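    For what it's worth, the numbers quoted above are self-consistent; 5.6/6.4 is exactly 7/8, i.e. one in every eight cycles does no FP work. A small arithmetic check:

    ```c
    #include <assert.h>
    #include <math.h>

    int main(void)
    {
        /* Per unrolled iteration: 4 mulps + 4 addps, 4 floats per vector. */
        double flops_per_iter = (4 + 4) * 4;           /* 32 FLOPs */

        double peak = 6.4, measured = 5.6;             /* FP ops per cycle */

        double cycles_peak = flops_per_iter / peak;    /* 5 cycles/iteration */
        assert(fabs(cycles_peak - 5.0) < 1e-9);

        /* 5.6 / 6.4 == 7/8: one stall cycle in every eight. */
        assert(fabs(measured / peak - 7.0 / 8.0) < 1e-12);
        return 0;
    }
    ```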

    Thanks again for all of your help!

  • 2020-12-12 13:35

    I noticed in the comments that:

    • The loop takes 5 cycles to execute.
    • It's "supposed" to take 4 cycles (since there are 4 adds and 4 multiplies).

    However, your assembly shows 5 SSE movssl instructions. According to Agner Fog's tables, all floating-point SSE move instructions have a reciprocal throughput of at least 1 cycle on Nehalem.

    Since you have 5 of them, you can't do better than 5 cycles/iteration.


    So in order to get to peak performance, you need to reduce the number of loads that you have. How you can do that, I can't see immediately in this particular case, but it might be possible.

    One common approach is to use tiling, where you add nesting levels to improve locality. Although it's mostly used to improve cache access, it can also be applied at the register level to reduce the number of loads/stores that are needed.

    Ultimately, your goal is to reduce the number of loads to fewer than the number of adds/muls. So this might be the way to go.
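    To illustrate the idea with a hypothetical 1-D convolution (not the asker's kernel): by computing two outputs per pass and carrying one input value in a register, each pass does roughly k+1 loads for 2k multiplies and 2k adds, instead of 2k loads:

    ```c
    #include <assert.h>

    /* Naive: each output does k loads for k multiplies and k adds. */
    static void conv_naive(float *out, const float *in, const float *wt,
                           int n, int k)
    {
        for (int i = 0; i < n; i++) {
            float acc = 0.0f;
            for (int j = 0; j < k; j++)
                acc += wt[j] * in[i + j];
            out[i] = acc;
        }
    }

    /* Register-tiled: two outputs per pass share loads through a rolling
       register, so a pass does k + 1 loads for 2k mul + 2k add. */
    static void conv_tiled(float *out, const float *in, const float *wt,
                           int n, int k)        /* n must be even */
    {
        for (int i = 0; i < n; i += 2) {
            float acc0 = 0.0f, acc1 = 0.0f;
            float x = in[i];                    /* one initial load */
            for (int j = 0; j < k; j++) {
                float xn = in[i + j + 1];       /* one load per j */
                acc0 += wt[j] * x;              /* x  == in[i + j]     */
                acc1 += wt[j] * xn;             /* xn == in[i + 1 + j] */
                x = xn;                         /* reuse, no reload */
            }
            out[i]     = acc0;
            out[i + 1] = acc1;
        }
    }

    int main(void)
    {
        enum { N = 8, K = 3 };
        float in[N + K], wt[K] = { 0.5f, -1.0f, 2.0f };
        float a[N], b[N];

        for (int t = 0; t < N + K; t++)
            in[t] = (float)(t * t % 11);

        conv_naive(a, in, wt, N, K);
        conv_tiled(b, in, wt, N, K);

        /* Same ops in the same order per output => identical results. */
        for (int t = 0; t < N; t++)
            assert(a[t] == b[t]);
        return 0;
    }
    ```

    The same trick scales to larger tiles (4 or more outputs per pass) until you run out of architectural registers; past that point the compiler starts spilling and the load count climbs back up.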

  • 2020-12-12 13:45

    Making this an answer from my comment.

    On a non-server Linux distro, I believe the interrupt timer is usually set to 250 Hz by default. Though that varies by distro, it's almost always over 150 Hz, since that rate is needed to provide a 30+ fps interactive GUI. That interrupt timer is used to preempt code: 150+ times per second your code is interrupted, the scheduler runs, and it decides what to give more time to. It sounds like you're doing great to simply get 80% of max speed; no problems there. If you need better, install, say, Ubuntu Server (100 Hz default) and tweak the kernel a bit (preemption off).

    EDIT: On a 2+ core system this has much less impact, as your process will almost certainly be put on its own core and more-or-less left to do its own thing.
