Algorithms in O(n^2) vs O(n) [duplicate]

Submitted by ╄→尐↘猪︶ㄣ on 2021-02-07 09:22:40

Question


I'm new to computer science and have just started with pseudocode. It's my third week of the semester and most of the material is self-study. I have some questions:

What is the difference between an O(n^2) and an O(n) algorithm? Similarly, what is O(n log n)? And what is Ω(n^2)?

So far, I have written:

// Horner's rule for evaluating the polynomial a[0] + a[1]x + ... + a[n]x^n
horner = 0;
for (i = n; i >= 0; i--)
    horner = x * horner + a[i];

But I found out it is O(n). How would I transform it?

What is the run time? I know the assignment on the first line counts as 1 operation.

And what does it look like in an actual algorithm, say in C#?


Answer 1:


What you are asking about is a topic in computer science known as algorithm complexity analysis. It is a very important topic to consider when writing algorithms, because it relates to run time, that is, how fast your computations will run.

Big-Oh notation, written O(...), describes an upper bound on an algorithm's run time. In this case, O(n) means that all n elements must be considered for the computation to complete, so the run time grows linearly with the input. Big-Oh complexities range from constant time, O(1), up to O(n^n) for very large, very slow computations. Also, consider equations such as the following:

y=10n+1
y=5n+10

These are both O(n), because as the number of elements grows, the value of each equation grows in proportion to it. We neglect the constants because the growth comes from the variable term, not from the constant values, which never change. Whereas, with an equation like the following:

y=10n^2+5n+5 

the complexity is O(n^2), because 10n^2 is the fastest-growing term in the equation. To evaluate complexity, we drop the constants and keep only the fastest-growing component, as the sketch below illustrates.
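
To see what those equations correspond to in code, here is a minimal Java sketch (my own illustration; the class and method names are not from the original answer). The first method does constant work per element, so its operation count has the shape c*n + d and it is O(n); the second does constant work per pair of elements, so the n^2 term dominates and it is O(n^2):

public class GrowthDemo {
    // Constant work per element: operation count ~ c*n + d, so O(n)
    static long linear(int[] a) {
        long sum = 0;                        // one setup operation
        for (int i = 0; i < a.length; i++) {
            sum += a[i];                     // constant work per element
        }
        return sum;
    }

    // Constant work per pair of elements: count ~ c*n^2 + ..., so O(n^2)
    static long quadratic(int[] a) {
        long total = 0;
        for (int i = 0; i < a.length; i++) {
            for (int j = 0; j < a.length; j++) {
                total += (long) a[i] * a[j]; // constant work per (i, j) pair
            }
        }
        return total;
    }
}

Doubling n roughly doubles the work done by linear but roughly quadruples the work done by quadratic; the constants never change that shape.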

Big-Omega, Ω, is the lower bound in algorithm complexity analysis. For example, an algorithm may run as fast as Ω(n) in its best case but take as long as O(n^2) in its worst case; this is common when analyzing sorting and searching algorithms. (The O(n log n) you ask about sits between these: efficient comparison sorts such as merge sort run in O(n log n).) Insertion sort, sketched below, shows this best-case/worst-case gap.
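
Insertion sort is a standard example of that gap (my example, not from the original answer). On an already-sorted array the inner loop exits immediately for every element, giving Ω(n); on a reverse-sorted array every element is shifted past all earlier ones, giving O(n^2). A minimal Java sketch:

// Insertion sort: Omega(n) best case (sorted input),
// O(n^2) worst case (reverse-sorted input).
static void insertionSort(int[] a) {
    for (int i = 1; i < a.length; i++) {
        int key = a[i];
        int j = i - 1;
        // Shift elements larger than key one slot to the right.
        // On sorted input this loop never runs, so each pass is O(1).
        while (j >= 0 && a[j] > key) {
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = key;
    }
}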

In some cases we want to write our programs with the more efficient algorithm, especially when we need quicker solutions or shorter run times.

The code example you provided is O(n) because it uses a single for loop to iterate over n elements. Now consider a double for loop, where a second loop runs inside your current one. That is O(n^2), because in the worst case it iterates over n*n elements.

Java code with an O(n^2) run time (taking m = n; strictly O(n*m) in general), initializing an empty matrix:

int[][] result = new int[n][m];   // n rows, m columns
for (int i = 0; i < n; ++i) {
    for (int j = 0; j < m; ++j) {
        result[i][j] = 0;         // constant work per cell
    }
}

Notice that it uses two nested loops, therefore inducing an O(n^2) run time.

Here is a graph showing how these functions grow (and how fast they grow) as n increases: [graph]




Answer 2:


O(n) denotes the maximum number of iterations, calculations, or steps the algorithm needs at most (worst case) to reach its end state, where n is the number of objects given at the start of the algorithm.

Suppose you've got an array of 5 elements and a sorting algorithm with O(n^2) complexity. If you apply the sort to the array, you know it will need at most on the order of 5^2 = 25 steps to reach its end state.
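
To make the step counting concrete, here is a bubble sort instrumented with a comparison counter (my own sketch, not from the original answer); the count it returns is always within the n^2 bound:

// Bubble sort that counts comparisons, to illustrate the O(n^2) bound.
static int bubbleSortCountingSteps(int[] a) {
    int comparisons = 0;
    for (int i = 0; i < a.length - 1; i++) {
        for (int j = 0; j < a.length - 1 - i; j++) {
            comparisons++;                   // one step per comparison
            if (a[j] > a[j + 1]) {
                int tmp = a[j];              // swap the out-of-order pair
                a[j] = a[j + 1];
                a[j + 1] = tmp;
            }
        }
    }
    return comparisons;                      // n*(n-1)/2, which is O(n^2)
}

For the 5-element array above this returns 5*4/2 = 10 comparisons, within the 25-step bound.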

Also read: What is the difference between O, Ω, and Θ?



Source: https://stackoverflow.com/questions/27750064/algorithms-in-on2-vs-on
