Why do we prefer not to specify the constant factor in Big-O notation?


Question


Let's consider the classic big-O notation definition (proof link):

O(f(n)) is the set of all functions g(n) such that there exist positive constants C and n0 with |g(n)| ≤ C * f(n) for all n ≥ n0.
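For instance (a minimal sketch in Python; the witness constants C = 10000 and n0 = 1 are just one valid choice, not the only one), we can spot-check this inequality for g(n) = 9999 * n^2 + n against f(n) = n^2:

```python
# Spot-check the big-O definition: |g(n)| <= C * f(n) for all n >= n0.
# Here g(n) = 9999*n^2 + n and f(n) = n^2; since n <= n^2 for n >= 1,
# the pair C = 10000, n0 = 1 works as a witness.
def g(n):
    return 9999 * n**2 + n

def f(n):
    return n**2

C, n0 = 10000, 1
assert all(abs(g(n)) <= C * f(n) for n in range(n0, 50_000))
print("definition holds for all sampled n >= n0")
```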

According to this definition, it is legal to write the following (g1 and g2 are functions describing the complexity of two algorithms):

g1(n) = 9999 * n^2 + n ∈ O(9999 * n^2)

g2(n) = 5 * n^2 + n ∈ O(5 * n^2)

And it is also legal to write them as:

g1(n) = 9999 * n^2 + n ∈ O(n^2)

g2(n) = 5 * n^2 + n ∈ O(n^2)

As you can see, the first variant, O(9999 * n^2) vs O(5 * n^2), is much more precise and makes it clear which algorithm is faster. The second variant tells us nothing of the sort.

The question is: why does nobody use the first variant?


Answer 1:


The use of the O() notation is, from the get-go, the opposite of noting something "precisely". The very idea is to mask "precise" differences between algorithms, and to be able to ignore the effects of specific hardware and the choice of compiler or programming language. Indeed, g1(n) and g2(n) are both in the same class (or set) of functions of n - the class O(n^2). They differ in specifics, but they are similar enough.
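To make that concrete, here is a rough timing sketch (the two functions below are hypothetical stand-ins for g1 and g2, not real-world algorithms): both loops are quadratic and differ only in the constant amount of work per iteration, and it is exactly that constant which hardware, compiler, and language choices can shift around:

```python
import timeit

# Two O(n^2) "algorithms" that differ only by a constant factor of work
# per inner-loop step; big-O deliberately hides that factor.
def heavy_quadratic(n):   # plays the g1 role: bigger hidden constant
    total = 0
    for i in range(n):
        for j in range(n):
            total += (i * j) % 7 + (i + j) % 5
    return total

def light_quadratic(n):   # plays the g2 role: smaller hidden constant
    total = 0
    for i in range(n):
        for j in range(n):
            total += i * j
    return total

for n in (200, 400, 800):
    t_heavy = timeit.timeit(lambda: heavy_quadratic(n), number=3)
    t_light = timeit.timeit(lambda: light_quadratic(n), number=3)
    # Doubling n roughly quadruples both times (the O(n^2) shape),
    # while the heavy/light ratio stays roughly constant (the hidden C).
    print(f"n={n}: heavy={t_heavy:.3f}s light={t_light:.3f}s "
          f"ratio={t_heavy/t_light:.2f}")
```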

The fact that it's a class is why I edited your question and corrected the notation from = O(9999 * n^2) to ∈ O(9999 * n^2).

By the way - I believe your question would have been a better fit on cs.stackexchange.com.



Source: https://stackoverflow.com/questions/52952051/why-do-we-prefer-not-to-specify-the-constant-factor-in-big-o-notation
