Merge Sort Time and Space Complexity

Backend · unresolved · 7 answers · 1479 views
慢半拍i 2020-12-04 14:47

Let's take this implementation of Merge Sort as an example:

void mergesort(Item a[], int l, int r) {
    if (r <= l) return;
    int m = (r+l)/2;
    mergesort(a, l, m);
    mergesort(a, m+1, r);
    merge(a, l, m, r);
}

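For reference, a complete, compilable version of the snippet might look like this. The question's code is cut off, so the `merge` implementation and the shared auxiliary buffer `aux` below are assumptions in the usual Sedgewick style, not part of the original post:

```c
#include <stdlib.h>

typedef int Item;

/* Assumed: one shared O(n) auxiliary buffer, reused by every merge.
   The caller allocates it once before sorting. */
static Item *aux;

void merge(Item a[], int l, int m, int r) {
    int i, j, k;
    for (k = l; k <= r; k++) aux[k] = a[k];      /* copy the run out */
    i = l; j = m + 1;
    for (k = l; k <= r; k++) {                   /* merge back into a[] */
        if      (i > m)           a[k] = aux[j++];
        else if (j > r)           a[k] = aux[i++];
        else if (aux[j] < aux[i]) a[k] = aux[j++];
        else                      a[k] = aux[i++];
    }
}

void mergesort(Item a[], int l, int r) {
    if (r <= l) return;
    int m = (r + l) / 2;
    mergesort(a, l, m);
    mergesort(a, m + 1, r);
    merge(a, l, m, r);
}
```

Typical usage: allocate `aux` with `malloc(n * sizeof(Item))`, call `mergesort(a, 0, n - 1)`, then free `aux`.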
7 Answers
  • 2020-12-04 14:57

    merge sort's space complexity is O(n log n). This is quite obvious considering that it can go at most O(log n) recursions deep, and each recursion allocates additional O(n) space for storing the merged array that needs to be reassigned. For those who are saying O(n), please don't forget that it is O(n) for each stack-frame depth.

  • 2020-12-04 14:59

    For both the best and the worst case the complexity is O(n log n). An extra array of size n is needed at each merge step, so the space complexity is O(n + n) = O(2n); since we remove constant factors when calculating complexity, it is O(n).
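    The extra-array point can be sketched as follows (a variant where each merge allocates its own temporary buffer; all names are illustrative, not from the answer). Merges run one at a time, so at most one buffer of at most n elements is ever live, which is why the peak extra space stays O(n):

```c
#include <stdlib.h>

/* Sketch: each merge allocates (and frees) a temporary array of the
   run's length. Only one merge is active at any moment, so the peak
   extra space is a single buffer of at most n elements: O(n). */
void merge_alloc(int a[], int l, int m, int r) {
    int len = r - l + 1;
    int *tmp = malloc(len * sizeof(int));  /* O(len) extra, freed below */
    int i = l, j = m + 1, k = 0;
    while (i <= m && j <= r) tmp[k++] = (a[j] < a[i]) ? a[j++] : a[i++];
    while (i <= m) tmp[k++] = a[i++];
    while (j <= r) tmp[k++] = a[j++];
    for (k = 0; k < len; k++) a[l + k] = tmp[k];  /* copy back */
    free(tmp);
}

void msort(int a[], int l, int r) {
    if (r <= l) return;
    int m = (l + r) / 2;
    msort(a, l, m);
    msort(a, m + 1, r);
    merge_alloc(a, l, m, r);  /* children are done; no nested buffers */
}
```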

  • 2020-12-04 15:10

    a) Yes - in a perfect world you'd have to do log n merges of size n, n/2, n/4, ... (or, better said, of size 1, 2, 4, ..., n/4, n/2, n - they can't be parallelized), which gives O(n). It is still O(n log n) work in total. In a not-so-perfect world you don't have an infinite number of processors, and context switching and synchronization offset any potential gains.

    b) Space complexity is always Ω(n), as you have to store the elements somewhere. The additional space complexity can be O(n) in an implementation using arrays and O(1) in linked-list implementations. In practice, list implementations need additional space for the list pointers, so unless you already have the list in memory this shouldn't matter.

    edit: if you count stack frames, then it's O(n) + O(log n), so still O(n) in the case of arrays. In the case of lists it's O(log n) additional memory.

    c) Lists only need some pointers changed during the merge process. That requires constant additional memory.

    d) That's why in merge-sort complexity analysis people mention 'additional space requirement' or things like that. It's obvious that you have to store the elements somewhere, but it's always better to mention 'additional memory' to keep purists at bay.
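    Points (b) and (c) can be sketched with a linked-list merge sort that only rewires `next` pointers; `Node`, `split`, and `merge_lists` are illustrative names, not from the answer. Besides the O(log n) recursion stack, no extra element storage is needed:

```c
#include <stddef.h>

typedef struct Node { int val; struct Node *next; } Node;

/* Split the list into two halves with slow/fast pointers. */
static Node *split(Node *head) {
    Node *slow = head, *fast = head->next;
    while (fast && fast->next) { slow = slow->next; fast = fast->next->next; }
    Node *second = slow->next;
    slow->next = NULL;               /* terminate the first half */
    return second;
}

/* Merge two sorted lists by relinking nodes: O(1) extra memory. */
static Node *merge_lists(Node *a, Node *b) {
    Node dummy = {0, NULL}, *tail = &dummy;
    while (a && b) {
        if (b->val < a->val) { tail->next = b; b = b->next; }
        else                 { tail->next = a; a = a->next; }
        tail = tail->next;
    }
    tail->next = a ? a : b;          /* append the leftover run */
    return dummy.next;
}

Node *list_mergesort(Node *head) {
    if (!head || !head->next) return head;
    Node *second = split(head);
    return merge_lists(list_mergesort(head), list_mergesort(second));
}
```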

  • 2020-12-04 15:13

    Worst-case performance of merge sort: O(n log n)

    Best-case performance of merge sort: O(n log n) typically, O(n) for the natural variant

    Average performance of merge sort: O(n log n)

    Worst-case space complexity of merge sort: O(n) total, O(n) auxiliary

  • 2020-12-04 15:14

    a) Yes, of course, parallelizing merge sort can be very beneficial. It remains O(n log n), but your constant factor should be significantly lower.

    b) Space complexity with a linked list should be O(n), or more specifically O(n) + O(log n). Note that that's a +, not a *. Don't concern yourself much with constants when doing asymptotic analysis.

    c) In asymptotic analysis, only the dominant term in the expression matters, so the fact that we have a + and not a * makes it O(n). If we were duplicating the sublists all over, I believe that would be O(n log n) space - but a smart linked-list-based merge sort can share regions of the lists.

  • 2020-12-04 15:20

    Simple and smart thinking.

    Total levels: L = log2(N). At the last level the number of nodes = N.

    Step 1: assume every level i has x(i) nodes.

    Step 2: so time complexity = x1 + x2 + x3 + ... + x(L-1) + N (for i = L).

    Step 3: we know that x1, x2, x3, ..., x(L-1) < N.

    Step 4: so take x1 = x2 = x3 = ... = x(L-1) = N as an upper bound.

    Step 5: then time complexity = (N + N + N + ... L times).

    Time complexity = O(N*L); put L = log(N):

    Time complexity = O(N*log(N))

    We use an extra array while merging, so:

    Space complexity: O(N).

    Hint: Big O(x) time means x is an upper bound: we can say with proof that the running time will never exceed a constant multiple of x, even in the worst case.
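    The level argument in steps 1-5 can be checked numerically with a small sketch (an illustration, not part of the answer) that recurses on subproblem sizes only, counting recursion depth and total merge work; for N = 1024, a power of two, the depth is log2(1024) = 10 and the total merge work is exactly N*log2(N) = 10240:

```c
/* Sketch: instrument the recursion tree without sorting any data.
   `depth` tracks the current recursion level, `max_depth` its peak,
   and `total_merged` sums the n elements each merge would touch. */
static int depth = 0, max_depth = 0;
static long total_merged = 0;

void msort_count(int n) {
    if (n <= 1) return;            /* leaf: nothing to merge */
    depth++;
    if (depth > max_depth) max_depth = depth;
    msort_count(n / 2);
    msort_count(n - n / 2);
    total_merged += n;             /* the merge at this node touches n elements */
    depth--;
}
```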
