Can someone explain how Big-Oh works with Summations?

Answer by 假装没事ソ, posted 2019-12-01 03:42:40

My guess is that the question means you're summing the results of some calculation whose running time is proportional to i² in the first case and to log₂ i in the second. In both cases, the running time of the overall summation is dominated by the larger values of i, so the big-O bound for the whole sum is N*O(f), where f is the function being summed.
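To make that concrete, here's a quick numeric sanity check (a sketch of mine, not a proof; the `ratio` helper and the sample sizes are arbitrary choices): for a non-decreasing f, the sum of f(1)..f(N) never exceeds N*f(N).

```python
import math

# Sanity check (not a proof): for a non-decreasing f, every term f(i) is at
# most f(N), so the sum of f(1)..f(N) is at most N * f(N).
def ratio(f, n):
    total = sum(f(i) for i in range(1, n + 1))
    return total / (n * f(n))  # stays <= 1 for non-decreasing f

for n in (10, 1000, 100000):
    print(n, ratio(lambda i: i * i, n), ratio(math.log2, n))
```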

http://en.wikipedia.org/wiki/Big_O_notation

N repetitions of a step costing g(m) = O(f(m)) cost O(N*f(m)) in total.

More to the point here: the sum for i=1..N of g(i) is O(N*f(N)) if g(n) = O(f(n)) and f is monotonic (non-decreasing).

Definition: g(n) = O(f(n)) if some c, m exist so that for all n > m, g(n) <= c*f(n).

The sum is, for i=1..N, of g(i). If f is monotonic in i, then every term satisfies g(i) <= c*f(i) <= c*f(N), so the sum is less than N*c*f(N), and therefore the sum is O(N*f(N)) (witnessed by the same c, m that make g(n) = O(f(n))).

Of course, log_2(x) and x^2 are both monotonic.
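As a concrete instantiation of the argument (my own check, using g(i) = f(i) = i², so that c = 1 and m = 0 serve as witnesses):

```python
# Instantiating the argument with g(i) = f(i) = i**2, so c = 1, m = 0 are
# valid witnesses: each term i**2 <= N**2, hence the sum is <= N * N**2.
for N in (10, 100, 1000):
    s = sum(i * i for i in range(1, N + 1))
    assert s <= N ** 3           # the O(N * f(N)) upper bound
    assert s >= N ** 3 / 8       # the top N/2 terms alone give Omega(N**3)
    print(N, s, N ** 3)
```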

  • Σ (i=1 to n) i² = n(n+1)(2n+1)/6, which is O(n³).

  • Note that (n!)² = (1·n)·(2·(n−1))·(3·(n−2))·…·((n−1)·2)·(n·1)
    = Π (i=1 to n) i·(n+1−i)
    >= Π (i=1 to n) n
        [e.g., because for each i = 1 to n, i·(n+1−i) − n = (i−1)·(n−i) >= 0. Compare Graham/Knuth/Patashnik, section 4.4]
    = n^n.
    Thus n! >= n^(n/2), and therefore
    Σ (i=1 to n) log i = log Π (i=1 to n) i = log n! >= log n^(n/2) = (n/2)·log n, which is Ω(n log n).

(Both bullets are spot-checked numerically in the sketch below.)
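A minimal sketch (the sampled values of n are arbitrary choices of mine):

```python
import math

# Spot-check of both bullet points for a few concrete n (not a proof):
for n in (5, 50, 500):
    # 1) the closed form for the sum of squares
    assert sum(i * i for i in range(1, n + 1)) == n * (n + 1) * (2 * n + 1) // 6
    # 2) n! >= n**(n/2), checked in log form to avoid enormous numbers:
    #    log(n!) = sum of log(i) >= (n/2) * log(n)
    log_factorial = sum(math.log(i) for i in range(1, n + 1))
    assert log_factorial >= (n / 2) * math.log(n)
print("both bullet points hold for the sampled n")
```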

JSchlather

The simplest approach that jumps out to me is a proof by induction.

For the first one, essentially you need to show that

sum (i=1 to n) i^2 < k*n^3 for some constant k and all n > 0.

Using the generalized principle of induction, take a base case of n = 1 with k = 2:

we get 1 < 2*1.

Now take the inductive hypothesis: we know that

sum(i=1 to n) i^2 < k*n^3, and adding (n+1)^2 to both sides gives

sum(i=1 to n+1) i^2 = sum(i=1 to n) i^2 + (n+1)^2 < k*n^3 + (n+1)^2.

  • Now show k*n^3 + (n+1)^2 < k*(n+1)^3:

  • k*n^3 + n^2 + 2n + 1 < k*n^3 + k*(3n^2 + 3n + 1)

  • k*n^3 < k*n^3 + (3k−1)*n^2 + (3k−2)*n + (k−1), which holds whenever k >= 1.

Therefore the bound holds for n+1 as well, and by induction our original result is correct.
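A quick empirical companion to the proof, using the k = 2 chosen above (the cutoff is an arbitrary choice of mine):

```python
# Check sum_{i=1..n} i**2 < 2 * n**3 for every n up to an arbitrary cutoff,
# maintaining the partial sum incrementally so the whole check is O(n).
running_sum = 0
for n in range(1, 100001):
    running_sum += n * n
    assert running_sum < 2 * n ** 3
print("sum of squares < 2 * n**3 for all n <= 100000")
```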

For the second proof you need to show that sum(i=1 to n) log_2(i) >= k*n*log(n), which I'll leave as an exercise for the reader ;).

The main inductive step, though, is showing sum(i=1 to n) log_2(i) + log_2(n+1) >= k*(n+1)*log(n+1). Note that k = 1 is too strong, since log_2(n!) < n*log_2(n) for all n >= 2; a constant such as k = 1/2 works, matching the (n/2)*log n bound derived above.
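A numeric check of that lower bound with k = 1/2 (a sketch; the constant and the cutoff are my choices):

```python
import math

# Check sum_{i=1..n} log2(i) >= 0.5 * n * log2(n) up to an arbitrary cutoff.
# (k = 1 would fail, since log2(n!) < n * log2(n) for all n >= 2.)
running_sum = 0.0
for n in range(2, 100001):
    running_sum += math.log2(n)
    assert running_sum >= 0.5 * n * math.log2(n)
print("sum of log2(i) >= (n/2) * log2(n) for 2 <= n <= 100000")
```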

Probably, your CPU will multiply 32-bit integers in constant time. But big-Oh doesn't care about "less than four billion", so maybe you have to look at multiplication algorithms?

According to Wolfram, the "traditional" multiplication algorithm is O(d²), where d is the number of digits, and the digit count is really log(n) for the actual number n. So I should be able to square the integers 1..n in time O(n·log²(n)). Each addition is O(log(n²)), which is obviously O(log(n)), so the additions are absorbed for an overall complexity of O(n·log²(n)).

So I can quite understand your confusion.
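To illustrate the digits-versus-magnitude distinction that answer leans on (purely illustrative; Python's arbitrary-precision ints make the digit count easy to inspect):

```python
# The cost parameter in the O(d**2) schoolbook algorithm is the digit
# count d ~ log10(i), not the magnitude i itself.
for i in (10, 10**6, 10**12):
    d = len(str(i))  # number of decimal digits, roughly log10(i) + 1
    print(f"i = {i}: {d} digits, so one schoolbook squaring is ~{d*d} digit ops")
```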
