Which has better performance: multiplication or division?

Rahul Tripathi

Well, if it is a single calculation you will hardly notice any difference, but if you are talking about millions of operations then division is definitely costlier than multiplication. That said, you can always use whichever is the clearest and most readable.

Please refer to this link: Should I use multiplication or division?
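
If you want a feel for the gap yourself, a minimal timing sketch along these lines will usually show division trailing multiplication once millions of operations are involved (the iteration count and operand values below are my own arbitrary choices, not taken from the answer):

#include <stdio.h>
#include <time.h>

/* Rough sketch: time a large number of multiplications vs divisions.
   'volatile' only keeps the loops from being folded away; a serious
   benchmark would repeat the runs and control for warm-up. */
int main(void) {
    const long N = 100000000L;              /* 10^8 iterations */
    volatile double m = 1.0, d = 1.0;
    clock_t t0 = clock();
    for (long i = 0; i < N; i++) m = m * 1.0000001;
    clock_t t1 = clock();
    for (long i = 0; i < N; i++) d = d / 1.0000001;
    clock_t t2 = clock();
    printf("mul: %.3fs  div: %.3fs\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC);
    return 0;
}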

Usually division is a lot more expensive than multiplication, but a smart compiler will often convert division by a compile-time constant to a multiplication anyway. If your compiler is not smart enough though, or if there are floating point accuracy issues, then you can always do the optimisation explicitly, e.g. change:

 float x = y / 2.5f;

to:

 const float k = 1.0f / 2.5f;

 ...

 float x = y * k;

Note that this is most likely a case of premature optimisation - you should only do this kind of thing if you have profiled your code and positively identified division as a performance bottleneck.
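
Where this trick typically pays off is when the same divisor is reused many times, for example inside a loop; here is a small sketch of that case (the function and parameter names are invented for illustration, and the results can differ from true division in the last bit):

/* One division up front replaces n divisions in the hot path. */
void scale_down(float *data, int n, float divisor) {
    const float inv = 1.0f / divisor;   /* single division */
    for (int i = 0; i < n; i++)
        data[i] *= inv;                 /* multiplications only */
}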

Division by a compile-time constant that's a power of 2 is quite fast (comparable to multiplication by a compile-time constant) for both integers and floats (it's basically convertible into a bit shift).
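
For the integer case, a small illustrative sketch of the shift equivalence (not from the original answer):

#include <stdio.h>

/* For an unsigned integer, division by a compile-time power of two is
   equivalent to a right shift, and an optimising compiler emits the shift
   itself (signed types need a small fixup for negative values). */
unsigned div_by_8(unsigned x)   { return x / 8;  }
unsigned shift_by_3(unsigned x) { return x >> 3; }

int main(void) {
    printf("%u %u\n", div_by_8(100), shift_by_3(100));   /* both print 12 */
    return 0;
}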

For floats, even dynamic division by powers of two is much faster than regular (dynamic or static) division, as it basically turns into a subtraction on the exponent.
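
One way to see the exponent trick is the standard ldexpf function, which scales a float by a power of two by adjusting the exponent directly (a small illustrative sketch, not part of the original answer):

#include <math.h>
#include <stdio.h>

/* Dividing a float by 2^k only has to adjust its exponent;
   ldexpf(y, -3) scales y by 2^-3 and matches y / 8.0f. */
int main(void) {
    float y = 100.0f;
    printf("%f %f\n", y / 8.0f, ldexpf(y, -3));   /* both print 12.500000 */
    return 0;
}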

In all other cases, division appears to be several times slower than multiplication.

For a dynamic divisor the slowdown factor on my Intel(R) Core(TM) i5 CPU M 430 @ 2.27GHz appears to be about 8; for static ones it is about 2.

The results are from a little benchmark of mine, which I made because I was somewhat curious about this (notice the aberrations at powers of two). In the benchmark chart:

  • ulong -- 64 bit unsigned
  • 1 in the label means dynamic argument
  • 0 in the label means statically known argument

The results were generated from the following C template (the $-variables were filled in by a shell script):

#include <stdio.h>
#include <stdlib.h>
typedef unsigned long ulong;
int main(int argc, char** argv){
    $TYPE arg = atoi(argv[1]);   /* dynamic operand taken from the command line */
    $TYPE i = 0, res = 0;
    for (i = 0; i < $IT; i++)
        res += i $OP $ARG;       /* $OP is * or /; $ARG is either 'arg' or a literal constant */
    printf($FMT, res);
    return 0;
}

with the $-variables substituted and the resulting program compiled with -O3 and run (dynamic values were passed on the command line, as the C code shows).
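
For example, one hypothetical instantiation of the template (ulong, dynamic division, and an iteration count I picked myself; the exact values behind the chart are not given) would look like this:

#include <stdio.h>
#include <stdlib.h>
typedef unsigned long ulong;
/* Hypothetical substitution: $TYPE = ulong, $OP = /, $ARG = arg (dynamic),
   $IT = 1000000000UL, $FMT = "%lu\n". */
int main(int argc, char** argv){
    ulong arg = atoi(argv[1]);
    ulong i = 0, res = 0;
    for (i = 0; i < 1000000000UL; i++)
        res += i / arg;
    printf("%lu\n", res);
    return 0;
}

This would be built with something like gcc -O3 bench.c and run as ./a.out 7 to supply the dynamic divisor.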

That will likely depend on your specific CPU and the types of your arguments. For instance, in your example you're doing a floating-point multiplication but an integer division. (Probably, at least, in most languages I know of that use C syntax.)

If you are doing work in assembler, you can look up the specific instructions you are using and see how long they take.

If you are not doing work in assembler, you probably don't need to care. Any modern compiler with optimization enabled will already turn your operations into the most appropriate instructions.

Your big wins on optimization will not be from twiddling the arithmetic like this. Instead, focus on how well you are using your cache. Consider whether there are algorithm changes that might speed things up.
