I have a multiply-add kernel inside my application and I want to increase its performance.
I use an Intel Core i7-960 (3.2 GHz clock) and have already manually implemented the scalar version of the kernel using SSE intrinsics.
Thanks a lot for your answers, they explained a lot. Continuing on my question: when I use packed instructions instead of scalar instructions, the code using intrinsics looks very similar:
for(int i = 0; i < size; i += 4) {
    __m128 val = _mm_load_ps(&buffer[i]);                 /* movaps */
    result = _mm_add_ps(result, _mm_mul_ps(val, weight)); /* mulps + addps */
}
The measured performance of this kernel is about 5.6 FP operations per cycle, although I would expect it to be exactly 4x the performance of the scalar version, i.e. 4 × 1.6 = 6.4 FP ops per cycle.
Taking the move of the weight factor into account (thanks for pointing that out), the schedule looks like:
It looks like the schedule doesn't change, although there is an extra instruction: a movss that moves the scalar weight value into an XMM register, followed by a shufps that copies this scalar value across the entire vector. The weight vector seems to be ready in time for the mulps, even taking the bypass latency for switching from the load domain to the floating-point domain into account, so this shouldn't incur any extra latency.
The movaps (aligned, packed move), addps, and mulps instructions used in this kernel (checked against the generated assembly) have the same latency and throughput as their scalar versions, so this shouldn't incur any extra latency either.
Does anybody have an idea where this extra cycle per 8 cycles is spent, assuming the maximum performance this kernel can get is 6.4 FP ops per cycle while it is running at 5.6 FP ops per cycle?
Thanks again for all of your help!