Okay, so I have this project I have to do, but I just don't understand it. The thing is, I have two algorithms: O(n^2) and O(n*log2n).
Good question. Actually, I always show three pictures, plotting the growth curves over three ranges:

n = [0; 10]
n = [0; 100]
n = [0; 1000]
So, O(N*log(N)) is far better than O(N^2). It is much closer to O(N) than to O(N^2).
But your O(N^2) algorithm is faster for N < 100 in real life. There are a lot of reasons why it can be faster: maybe better memory allocation or other "non-algorithmic" effects, maybe the O(N*log(N)) algorithm requires a data-preparation phase, or maybe the O(N^2) iterations are shorter. In any case, Big-O notation is only appropriate for large enough N.
If you want to demonstrate why one algorithm is faster for small N, you can measure the execution time of one iteration and the constant overhead for both algorithms, then use those measurements to correct the theoretical plot. Or just measure the execution time of both algorithms for different values of N and plot the empirical data.
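For instance, here is a minimal Python sketch of the empirical approach. The two functions are hypothetical stand-ins for your actual O(N^2) and O(N*log(N)) implementations:

```python
import timeit

def quadratic_algo(data):
    # Hypothetical O(N^2) stand-in: compares every pair of elements.
    count = 0
    for x in data:
        for y in data:
            if x < y:
                count += 1
    return count

def linearithmic_algo(data):
    # Hypothetical O(N*log(N)) stand-in: sorting dominates the cost.
    return sorted(data)

for n in (10, 100, 1000, 2000):
    data = list(range(n, 0, -1))  # reversed input
    t_quad = timeit.timeit(lambda: quadratic_algo(data), number=3)
    t_nlogn = timeit.timeit(lambda: linearithmic_algo(data), number=3)
    print(f"N={n:>5}: O(N^2)={t_quad:.5f}s  O(N*log(N))={t_nlogn:.5f}s")
```

Plot the measured times against N (e.g. with matplotlib) and you will see where, and whether, the two curves cross.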
Big-O notation describes asymptotic complexity: it characterizes the cost as N grows arbitrarily large.
For small N, a lot of other factors come in. It's possible that one algorithm has O(n^2) loop iterations where each iteration is very short, while another algorithm has O(n) iterations that are each very long. For large N, the linear algorithm will be faster; for small N, the quadratic algorithm will be faster.
So, for small N, just measure the two and see which one is faster. There is no need to go into asymptotic complexity.
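A minimal sketch of that effect; the per-iteration costs below are artificial, chosen only to make the crossover visible:

```python
import timeit

def quadratic_cheap(n):
    # O(n^2) iterations, each doing almost nothing.
    total = 0
    for i in range(n):
        for j in range(n):
            total += 1
    return total

def linear_expensive(n):
    # O(n) iterations, each carrying a heavy constant
    # (simulated by an inner loop of fixed length 1000).
    total = 0
    for i in range(n):
        for _ in range(1000):
            total += 1
    return total

for n in (10, 100, 1000, 3000):
    t_quad = timeit.timeit(lambda: quadratic_cheap(n), number=3)
    t_lin = timeit.timeit(lambda: linear_expensive(n), number=3)
    print(f"n={n:>5}: quadratic={t_quad:.5f}s  linear={t_lin:.5f}s")
```

With these made-up constants the quadratic version wins below roughly n = 1000 (where n^2 = 1000 * n) and loses above it.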
Incidentally, don't write the base of the log. Big-O notation ignores constant factors: O(17 * N) is the same as O(N). Since log2(N) is just ln(N) / ln(2), the base of the logarithm is just another constant factor and is ignored.
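Spelled out, the change-of-base step looks like this:

```latex
\log_2 N = \frac{\ln N}{\ln 2}
\quad\Longrightarrow\quad
O(N \log_2 N) = O\!\left(\tfrac{1}{\ln 2}\, N \ln N\right) = O(N \ln N) = O(N \log N)
```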
First, it is not quite correct to compare asymptotic complexity mixed with a constraint on N. I can state that O(n^2) is slower than O(n * log(n)), because the definition of Big-O notation assumes that n grows infinitely.
For a particular N it is possible to say which algorithm is faster by simply comparing N^2 * ALGORITHM_CONSTANT and N * log(N) * ALGORITHM_CONSTANT, where ALGORITHM_CONSTANT depends on the algorithm. For example, if we traverse the array twice to do our job, the asymptotic complexity will be O(N) and ALGORITHM_CONSTANT will be 2.
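A small sketch of that comparison; the constants C1 and C2 below are made-up placeholders you would have to measure for your real algorithms:

```python
import math

C1 = 1.0   # assumed per-step cost of the O(N^2) algorithm
C2 = 10.0  # assumed per-step cost of the O(N*log(N)) algorithm

def cost_quadratic(n):
    return C1 * n * n

def cost_linearithmic(n):
    return C2 * n * math.log(n)

# Find where the quadratic cost model overtakes the linearithmic one.
for n in range(2, 200):
    if cost_quadratic(n) > cost_linearithmic(n):
        print(f"With C1={C1}, C2={C2}, O(N*log(N)) wins from N = {n}")
        break
```

With these example constants the break-even point lands around N = 36; with your measured constants it will land somewhere else, which is exactly the point.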
Also, I'd like to mention that O(N * log2(N)), where I assume log2 means the logarithm to base 2, is actually the same as O(N * log(N)) because of the properties of logarithms.
Just ask WolframAlpha if you have doubts.
In this case, it says

$$\lim_{n \to \infty} \frac{n \log(n)}{n^2} = 0$$
Or you can calculate the limit yourself:

$$\lim_{n \to \infty} \frac{n \log(n)}{n^2} = \lim_{n \to \infty} \frac{\log(n)}{n} \overset{\text{L'Hôpital}}{=} \lim_{n \to \infty} \frac{1/n}{1} = \lim_{n \to \infty} \frac{1}{n} = 0$$
That means n^2 grows faster, so n log(n) is smaller (better) when n is high enough.
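If you prefer code to WolframAlpha, sympy can evaluate the same limit (a sketch, assuming sympy is installed):

```python
import sympy

n = sympy.symbols('n', positive=True)

# lim (n * log(n)) / n^2 as n -> infinity
ratio = (n * sympy.log(n)) / n**2
print(sympy.limit(ratio, n, sympy.oo))  # prints 0
```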
Let's compare them.

On one hand, we have:

n^2 = n * n

On the other hand, we have:

n log(n) = n * log(n)

Putting them side by side:

n * n versus n * log(n)

Let's divide by n, which is a common term, to get:

n versus log(n)
Let's compare values (using the natural logarithm):

n = 10         log(n) ~ 2.3
n = 100        log(n) ~ 4.6
n = 1,000      log(n) ~ 6.9
n = 10,000     log(n) ~ 9.21
n = 100,000    log(n) ~ 11.5
n = 1,000,000  log(n) ~ 13.8
So we have:
n >> log(n) for n > 1
n^2 >> n log(n) for n > 1
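A short sketch to reproduce the table of values above (math.log is the natural logarithm, which matches the numbers shown):

```python
import math

for n in (10, 100, 1_000, 10_000, 100_000, 1_000_000):
    print(f"n = {n:>9,}   log(n) ~ {math.log(n):.2f}")
```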
We have two ways to compare two algorithms. The first way is very simple: take the limit of the ratio of their running times (see the sketch after this list). Let T1(n) be the running time of Algo1 and T2(n) the running time of Algo2, and compute

lim (n -> infinity) T1(n) / T2(n) = m

(i) if m = 0, Algo1 is faster than Algo2;
(ii) if m = k, a finite nonzero constant, both are the same;
(iii) if m = infinity, Algo2 is faster.
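Here is a sketch of this ratio test using sympy; the two running-time functions passed in at the end are just example inputs:

```python
import sympy

n = sympy.symbols('n', positive=True)

def compare(T1, T2):
    """Classify two running-time functions by lim T1/T2 as n -> infinity."""
    m = sympy.limit(T1 / T2, n, sympy.oo)
    if m == 0:
        return "Algo1 is asymptotically faster"
    if m == sympy.oo:
        return "Algo2 is asymptotically faster"
    return f"both have the same growth rate (ratio -> {m})"

print(compare(n * sympy.log(n), n**2))  # Algo1 is asymptotically faster
```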
The second way is also pretty simple compared to the first: take the logarithm of both functions, but do not neglect the multiplicative constants. For example, take Algo1 = log(n) and Algo2 = sqrt(n). Taking logs gives log(log(n)) versus log(sqrt(n)) = (1/2) * log(n). Now substitute log(n) = x, so we are comparing log(x) with x/2. Any polynomial grows faster than its logarithm, so:

O(sqrt(n)) > O(log(n))

i.e. sqrt(n) grows faster, and the O(log(n)) algorithm is the faster one.
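Written out, that substitution looks like this:

```latex
% Take logs of both functions:
\log(\log n) \quad\text{vs}\quad \log(\sqrt{n}) = \tfrac{1}{2}\log n
% Substitute x = \log n:
\log x \quad\text{vs}\quad \tfrac{1}{2} x
% Since \log x = o(x), the right side wins: \log n = o(\sqrt{n})
```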