Pseudo-quicksort time complexity

不思量自难忘° 2020-12-03 14:31

I know that quicksort has O(n log n) average time complexity. A pseudo-quicksort (which is only a quicksort when you look at it from far enough away, with a sui

6 answers
  •  予麋鹿 (OP)
     2020-12-03 14:56

    I agree with your assumption that the average time complexity is still O(n log n). I'm not an expert and not 100% sure, but these are my thoughts:

    Here is pseudocode for the in-place quicksort (call Quicksort with l=1 and r=the length of the array):

    Quicksort(l, r)
    --------------
    IF r - l >= 1 THEN
        choose a pivot element x from {x_l, x_{l+1}, ..., x_{r-1}, x_r}
        reorder the array segment x_l, ..., x_r so that
            all elements < x are on the left side of x   // line 6
            all elements > x are on the right side of x  // line 7
        let m be the final position of x in the partitioned segment
        Quicksort(l, m-1)
        Quicksort(m+1, r)
    FI
    
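    A direct Python sketch of this pseudocode (the function names are my own; a Lomuto-style partition with a random pivot is one possible reading of the "choose pivot" step):

```python
import random

def quicksort(a, l, r):
    # Sort a[l..r] in place (inclusive bounds), mirroring the pseudocode.
    if r - l >= 1:
        m = partition(a, l, r)      # order the segment around a pivot x
        quicksort(a, l, m - 1)
        quicksort(a, m + 1, r)

def partition(a, l, r):
    # Choose a pivot (here: a uniformly random index, one of the choices
    # the average-case analysis considers) and move it to the segment end.
    p = random.randrange(l, r + 1)
    a[p], a[r] = a[r], a[p]
    x = a[r]
    m = l
    for i in range(l, r):           # the "<"-comparisons of lines 6 and 7
        if a[i] < x:
            a[i], a[m] = a[m], a[i]
            m += 1
    a[m], a[r] = a[r], a[m]         # put the pivot into its final position
    return m
```

    Calling quicksort(a, 0, len(a) - 1) sorts the whole list; the 1-based indices of the pseudocode become 0-based here.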

    The average-case analysis then picks the "<"-comparisons in lines 6 and 7 as the dominant operation of the algorithm and concludes that the average time complexity is O(n log n). The cost of "reorder the array segment x_l, ..., x_r so that ..." is not counted separately (only the dominant operation matters when you are after asymptotic bounds), so I don't think "because it has to do two passes of the list when it partitions it" is a problem: your Haskell version simply takes roughly twice as long in that step, which changes the constant factor but not the bound. The same holds for the append operation, and I agree with you that it adds nothing to the asymptotic cost:

    Because the appends and the partition still have linear time complexity, even if they are inefficient.
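    To make that concrete, here is a Python rendering of the kind of pseudo-quicksort the question is about (not the original Haskell code, but it mirrors its two filtering passes and the appends):

```python
def pseudo_quicksort(xs):
    # Functional-style "pseudo-quicksort": the partition makes two passes
    # over the list (one per comprehension), and the appends are a third
    # linear step -- all still O(n) work per level of recursion.
    if not xs:
        return []
    x, rest = xs[0], xs[1:]
    smaller = [y for y in rest if y < x]     # first pass
    larger  = [y for y in rest if y >= x]    # second pass
    return pseudo_quicksort(smaller) + [x] + pseudo_quicksort(larger)
```

    Per recursion level this does about three linear traversals where the in-place version does one, so only the constant in front of n changes.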

    For convenience, let's say these extra linear passes add "n" to the cost, so that we have O(n log n + n). Since there exists a natural number o such that n log n > n holds for all natural numbers greater than o, you can bound n log n + n from above by 2 n log n and from below by n log n; therefore n log n + n = O(n log n).
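    That sandwich bound is easy to check numerically; a small sketch (the threshold o = 2 is my choice for base-2 logarithms, since log2(n) > 1 once n > 2):

```python
import math

# Verify n log n <= n log n + n <= 2 n log n for all n beyond the
# threshold o; a different logarithm base only changes constant factors.
o = 2
for n in range(o + 1, 10000):
    nlogn = n * math.log2(n)
    assert nlogn <= nlogn + n <= 2 * nlogn
```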

    Further, the choice of the first element as the pivot is not the best choice.

    I think the choice of the pivot element is irrelevant here, because the average-case analysis assumes a uniform distribution of the elements in the array. You can't know in advance which position of the array you should pick from, so you have to consider all the cases in which your pivot element (independently of which position of the list you take it from) is the i-th smallest element of the list, for i = 1, ..., r.
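    This can be illustrated empirically; a small sketch (helper names are mine) that averages the comparison count over all permutations of six distinct keys suggests that, on uniformly random input, any fixed pivot position gives the same average:

```python
import itertools

def comparisons(xs, pick):
    # Count the "<"-comparisons quicksort performs on xs when the pivot
    # index in each segment is chosen by the rule `pick`.
    if len(xs) <= 1:
        return 0
    p = pick(xs)
    x, rest = xs[p], xs[:p] + xs[p + 1:]
    smaller = [y for y in rest if y < x]
    larger = [y for y in rest if y >= x]
    return len(rest) + comparisons(smaller, pick) + comparisons(larger, pick)

# Average over all 720 permutations of 6 keys: always taking the first
# element does as well on average as always taking the middle one.
perms = list(itertools.permutations(range(6)))
avg_first = sum(comparisons(list(p), lambda xs: 0) for p in perms) / len(perms)
avg_mid = sum(comparisons(list(p), lambda xs: len(xs) // 2) for p in perms) / len(perms)
```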
