What is the time complexity of this multiplication algorithm?

Submitted by 与世无争的帅哥 on 2019-12-08 02:36:25

Question


For the classic interview question "How do you perform integer multiplication without the multiplication operator?", the easiest answer is, of course, the following linear-time algorithm in C:

int mult(int multiplicand, int multiplier)
{
    int result = multiplicand;

    /* Add the multiplicand (multiplier - 1) more times; assumes multiplier >= 1. */
    for (int i = 1; i < multiplier; i++)
    {
        result += multiplicand;
    }

    return result;
}
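
As a quick sanity check (a driver of my own, not part of the original question, and assuming the mult() above is in scope), the loop body executes multiplier - 1 times, so the work grows linearly with the multiplier:

#include <stdio.h>

int main(void)
{
    printf("%d\n", mult(6, 4));   /* 3 loop iterations, prints 24 */
    printf("%d\n", mult(6, 100)); /* 99 loop iterations, prints 600 */
    return 0;
}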

Of course, there is a faster algorithm. If we take advantage of the property that bit shifting to the left is equivalent to multiplying by 2 to the power of the number of bits shifted, we can bit-shift up to the nearest power of 2, and use our previous algorithm to add up from there. So, our code would now look something like this:

#include <math.h>

/* floor(log2(n)), computed from natural logarithms. */
int floor_log2(double n)
{
    return (int)floor(log(n) / log(2));
}

int mult(int multiplicand, int multiplier)
{
    int exponent = floor_log2(multiplier);  /* largest k with 2^k <= multiplier */
    int nearest_power = 1 << exponent;      /* 2^k itself */
    int result = multiplicand << exponent;  /* covers the first 2^k additions in one step */

    for (int i = nearest_power; i < multiplier; i++)
    {
        result += multiplicand;
    }

    return result;
}
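
To make the saving concrete (numbers chosen purely for illustration, using the corrected code above): with multiplier = 13 we get floor_log2(13) = 3 and nearest_power = 8, so the shift covers 8 of the 13 additions at once and the loop performs only the remaining 13 - 8 = 5:

    multiplier         = 13   (binary 1101)
    exponent           = 3
    nearest_power      = 8
    result after shift = multiplicand * 8
    loop iterations    = 13 - 8 = 5   (instead of 12)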

I'm having trouble determining what the time complexity of this algorithm is. I don't believe that O(n - 2^(floor(log2(n)))) is the correct way to express this, although (I think?) it's technically correct. Can anyone provide some insight on this?


Answer 1:


multiplier - nearest_power can be as large as half of multiplier (for example, with multiplier = 15 the nearest power of 2 is 8, leaving 7 iterations), and as multiplier tends towards infinity the constant 1/2 doesn't matter (not to mention we drop constants in Big O anyway). The loop is therefore O(multiplier). I'm not sure about the bit-shifting.

Edit: I looked into the bit-shifting a bit more. As gbulmer says, it can be O(n), where n is the number of bits shifted. However, it can also be O(1) on certain architectures. See: Is bit shifting O(1) or O(n)?

However, it doesn't matter in this case: the shift amount is floor(log2(multiplier)), and log2(n) < n for all valid n. So the total cost is at most O(log2(multiplier)) + O(multiplier), which is bounded by O(2 * multiplier), and thus the whole algorithm is O(multiplier).




Answer 2:


The point of finding the nearest power is that, when the multiplier is close to a power of 2, hardly any additions remain after the shift and the runtime gets close to O(1).

Behind the scenes the whole "to the power of 2" is done with bit shifting.

So, to answer your question, the second version of your code is still worst-case linear time: O(multiplier).
Your answer, O(n - 2^(floor(log2(n)))), is not incorrect either; it's just more precise than it needs to be, and harder to reason about in your head when you want a quick bound.




Answer 3:


Edit

Let's look at the second posted algorithm, starting with:

int exponent = floor_log2(multiplier);
int nearest_power = 1 << exponent;

I believe calculating log2 is, rather pleasingly, O(log2(multiplier)).

nearest_power then lands in the interval [multiplier/2, multiplier], so the distance left to cover can be as large as multiplier/2. Computing it is the same as finding the highest set bit of a positive number.
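
A minimal sketch of that equivalence (my own illustration, not from the answer): for a positive n, the position of the highest set bit is exactly floor(log2(n)), and it can be found with plain shifts in O(number of bits):

int highest_set_bit(unsigned int n)
{
    int pos = -1;

    /* Shift right until the number is exhausted; the number of shifts
       is the index of the highest set bit, i.e. floor(log2(n)).
       Returns -1 for n == 0. */
    while (n) {
        n >>= 1;
        pos++;
    }

    return pos;
}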

So the for loop is O(multiplier/2) in the worst case; the constant 1/2 comes out, so it is O(multiplier).

On average, the multiplier sits about halfway into that interval, which would be O(multiplier/4). But that is just the constant 1/4 times multiplier, so the constant is smaller and it is still O(multiplier).

A faster algorithm.

Our intuition is that we can multiply by an n-digit number in n steps.

In binary, this uses a 1-bit shift, a 1-bit test, and a binary add to construct the whole answer. Each of those operations is O(1). This is long multiplication, one digit at a time.

If each of those per-bit operations is O(1), then for n, an x-bit number, the whole thing is O(x), i.e. O(log2(n)), where x is the number of bits in the number.

This is an O(log2(n)) algorithm:

int mult(int multiplicand, int multiplier) {
    int product = 0;

    while (multiplier) {
        if (multiplier & 1) product += multiplicand;  /* add when the current bit is set */
        multiplicand <<= 1;                           /* next bit of the multiplier is worth twice as much */
        multiplier >>= 1;                             /* move on to the next bit */
    }

    return product;
}

It is essentially how we do long multiplication.
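
As a worked trace of the loop above (numbers chosen only for illustration), take multiplicand = 6 and multiplier = 13 (binary 1101):

    bit 0 = 1: product += 6          -> product = 6
    bit 1 = 0: skip                  -> product = 6
    bit 2 = 1: product += 6 << 2     -> product = 6 + 24 = 30
    bit 3 = 1: product += 6 << 3     -> product = 30 + 48 = 78

After 4 iterations, one per bit of 13, the loop ends with product = 78 = 6 * 13.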

Of course, the wise thing to do is use the smaller number as the multiplier. (I'll leave that as an exercise for the reader :-)

This only works for positive values, but by testing and remembering the signs of the input, operating on positive values, and then adjusting the sign, it works for all numbers.
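
A possible sketch of that sign handling (and of the "use the smaller number as the multiplier" exercise above), written as a standalone variant of the shift-and-add routine rather than a wrapper; the name mult_signed is mine and not part of the answer:

int mult_signed(int a, int b)
{
    /* Work on magnitudes, and remember whether the signs differ. */
    int negative = (a < 0) != (b < 0);
    unsigned int ua = a < 0 ? -(unsigned int)a : (unsigned int)a;
    unsigned int ub = b < 0 ? -(unsigned int)b : (unsigned int)b;

    /* Use the smaller magnitude as the multiplier so the loop runs over fewer bits. */
    if (ua < ub) { unsigned int t = ua; ua = ub; ub = t; }

    unsigned int product = 0;
    while (ub) {
        if (ub & 1) product += ua;
        ua <<= 1;
        ub >>= 1;
    }

    return negative ? -(int)product : (int)product;
}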



Source: https://stackoverflow.com/questions/9844282/what-is-the-time-complexity-of-this-multiplication-algorithm
