quicksort

Can we do Quick sort with O(n log n) worst-case complexity?

我与影子孤独终老i submitted on 2019-12-01 01:11:56

Question: I was wondering whether we can somehow modify the Quick sort algorithm to achieve a worst-case time complexity of O(n log n). This can be done by permuting the data and then assuming we will get the average-case complexity rather than the worst case, but that is not a foolproof solution, since we can land in the worst case again after permuting. Is there any other way around this that you can suggest?

Answer 1: Well, yes, we can bring it down to O(n log n). All the algorithms I have seen that try to bring…
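The answer above is cut off, but one well-known way to get the guarantee is introsort: run quicksort with a recursion-depth budget of about 2·log2(n) and fall back to heapsort once the budget is exhausted. A minimal C++ sketch of that idea (my illustration, not the answer's code):

    #include <algorithm>
    #include <cmath>
    #include <vector>

    // Introsort sketch: quicksort with a depth budget; once the budget
    // runs out, heapsort the range so the worst case stays O(n log n).
    template <typename It>
    void introsort_impl(It first, It last, int depth) {
        if (last - first <= 1) return;
        if (depth == 0) {
            std::make_heap(first, last);   // heapsort fallback: O(n log n)
            std::sort_heap(first, last);
            return;
        }
        auto pivot = *(first + (last - first) / 2);
        // Three-way split: elements equal to the pivot are excluded from
        // both recursive calls, which guarantees progress on duplicates.
        It m1 = std::partition(first, last,
                               [&](const auto& x) { return x < pivot; });
        It m2 = std::partition(m1, last,
                               [&](const auto& x) { return !(pivot < x); });
        introsort_impl(first, m1, depth - 1);
        introsort_impl(m2, last, depth - 1);
    }

    template <typename It>
    void introsort(It first, It last) {
        int budget = 2 * static_cast<int>(std::log2(last - first + 1));
        introsort_impl(first, last, budget);
    }

This is essentially what std::sort does in the major C++ standard library implementations, so in practice the O(n log n) worst-case bound comes for free.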

Python quicksort - List comprehension vs Recursion (partition routine)

有些话、适合烂在心里 submitted on 2019-11-30 20:11:10

I watched the talk Three Beautiful Quicksorts and was messing around with quicksort. My implementation in Python was very similar to C (select a pivot, partition around it, and recurse over the smaller and larger partitions), which I thought wasn't Pythonic. So this is the implementation using list comprehensions in Python:

    def qsort(list):
        if list == []:
            return []
        pivot = list[0]
        l = qsort([x for x in list[1:] if x < pivot])
        u = qsort([x for x in list[1:] if x >= pivot])
        return l + [pivot] + u

Let's call the recursive (partitioning) method qsortR. Now I noticed that qsortR runs much slower than qsort for large(r)…

How does the compare function in qsort work?

点点圈 submitted on 2019-11-30 10:00:46

I found this sample code online, which explains how the qsort function works, but I could not understand what the compare function returns.

    #include <stdio.h>
    #include <stdlib.h>

    int values[] = { 88, 56, 100, 2, 25 };

    int cmpfunc (const void * a, const void * b)  // what is it returning?
    {
        return ( *(int*)a - *(int*)b );           // what are a and b?
    }

    int main(int argc, char* argv[])
    {
        int n;
        printf("Before sorting the list is: \n");
        for (n = 0; n < 5; n++) {
            printf("%d ", values[n]);
        }
        qsort(values, 5, sizeof(int), cmpfunc);
        printf("\nAfter sorting the list is: \n");
        for (n = 0; n < 5; n++) {
            printf("%d ", values[n]);
        }
        return 0;
    }
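To answer the inline questions briefly (the scraped answer is missing): qsort hands cmpfunc pointers a and b to two elements of the array; the function must return a negative number if *a should sort before *b, zero if they compare equal, and a positive number otherwise. Subtraction works for these small values but can overflow for ints of opposite sign, so a safer comparator (my variant, not from the original post) is:

    int cmpfunc_safe(const void *a, const void *b)
    {
        int x = *(const int *)a;
        int y = *(const int *)b;
        // (x > y) - (x < y) yields -1, 0, or +1 and, unlike x - y,
        // cannot overflow.
        return (x > y) - (x < y);
    }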

True QuickSort in Standard ML

断了今生、忘了曾经 submitted on 2019-11-30 09:55:24

Question: Since RosettaCode's Standard ML solution is a very slow version of quicksort according to the question (and discussion) "Why is the minimalist, example Haskell quicksort not a 'true' quicksort?", what would a functional quicksort look like in Standard ML if it behaved according to the complexity of Hoare's algorithm?

    fun quicksort [] = []
      | quicksort (x::xs) =
          let val (left, right) = List.partition (fn y => y < x) xs
          in quicksort left @ [x] @ quicksort right end

That is, one that employs some…
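For reference, Hoare's scheme partitions in place with two indices scanning toward each other, which is exactly what the list-based version above cannot do. A C++ sketch of the in-place algorithm the question alludes to (my illustration; an SML answer would need mutable arrays to match it):

    #include <utility>
    #include <vector>

    // Hoare partition: scan inward from both ends, swapping elements
    // that sit on the wrong side; O(n) time, O(1) extra space.
    int hoare_partition(std::vector<int>& v, int lo, int hi) {
        int pivot = v[lo + (hi - lo) / 2];
        int i = lo - 1, j = hi + 1;
        while (true) {
            do { ++i; } while (v[i] < pivot);
            do { --j; } while (v[j] > pivot);
            if (i >= j) return j;
            std::swap(v[i], v[j]);
        }
    }

    void quicksort(std::vector<int>& v, int lo, int hi) {  // inclusive bounds
        if (lo >= hi) return;
        int p = hoare_partition(v, lo, hi);
        quicksort(v, lo, p);       // note: p stays on the left side
        quicksort(v, p + 1, hi);
    }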

C OpenMP parallel quickSort

我怕爱的太早我们不能终老 submitted on 2019-11-30 09:19:23

Once again I'm stuck when using OpenMP in C++. This time I'm trying to implement a parallel quicksort. Code:

    #include <iostream>
    #include <vector>
    #include <stack>
    #include <utility>
    #include <omp.h>
    #include <stdio.h>

    #define SWITCH_LIMIT 1000

    using namespace std;

    template <typename T>
    void insertionSort(std::vector<T> &v, int q, int r)
    {
        T key;
        int i;
        for (int j = q + 1; j <= r; ++j) {
            key = v[j];
            i = j - 1;
            while (i >= q && v[i] > key) {
                v[i + 1] = v[i];
                --i;
            }
            v[i + 1] = key;
        }
    }

    stack<pair<int,int> > s;

    template <typename T>
    void qs(vector<T> &v, int q, int r)
    {
        T pivot;
        int i = q - 1, j = r;
        // …
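The excerpt cuts off before the parallel part, but for comparison, the usual modern pattern uses OpenMP tasks (3.0+) instead of a hand-rolled shared stack. A sketch under that assumption (the names and the serial cutoff of 1000, echoing SWITCH_LIMIT above, are mine):

    #include <algorithm>
    #include <vector>
    #include <omp.h>

    // Task-based parallel quicksort sketch. hi is one past the last
    // index; small ranges are sorted serially to limit task overhead.
    template <typename T>
    void pqsort(std::vector<T>& v, int lo, int hi) {
        if (hi - lo < 1000) {
            std::sort(v.begin() + lo, v.begin() + hi);
            return;
        }
        T pivot = v[lo + (hi - lo) / 2];
        // Three-way split so runs of equal keys cannot recurse forever.
        auto it1 = std::partition(v.begin() + lo, v.begin() + hi,
                                  [&](const T& x) { return x < pivot; });
        auto it2 = std::partition(it1, v.begin() + hi,
                                  [&](const T& x) { return !(pivot < x); });
        int m1 = static_cast<int>(it1 - v.begin());
        int m2 = static_cast<int>(it2 - v.begin());
        #pragma omp task shared(v)
        pqsort(v, lo, m1);
        #pragma omp task shared(v)
        pqsort(v, m2, hi);
        #pragma omp taskwait
    }

    template <typename T>
    void parallelQuicksort(std::vector<T>& v) {
        #pragma omp parallel
        {
            #pragma omp single nowait
            pqsort(v, 0, static_cast<int>(v.size()));
        }
    }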

Quick Sort in Ruby language

妖精的绣舞 submitted on 2019-11-30 08:37:36

Question: I am trying to implement Quick sort in Ruby but am stuck on how to recurse after the first partition around the pivot. Please help me understand how to proceed, and also let me know whether my coding style is good so far.

    class QuickSort
      $array = Array.new
      $count = 0

      def add(val)                 # read values to sort
        i = 0
        while val != '000'.to_i
          $array[i] = val.to_i
          i = i + 1
          val = gets.to_i
        end
      end

      def firstsort_aka_divide(val1, val2, val3)   # first partition
        $count = $count + 1
        @pivot = val1
        @left = val2
        @right…

Explanation of the Median of Medians algorithm

。_饼干妹妹 submitted on 2019-11-30 07:26:14

Question: The median-of-medians approach is very popular in quicksort-type partitioning algorithms for yielding a fairly good pivot, one that partitions the array roughly evenly. Its logic is given on Wikipedia as: "The chosen pivot is both less than and greater than half of the elements in the list of medians, which is around n/10 elements (1/2 × (n/5)) for each half. Each of these elements is a median of 5, making it less than 2 other elements and greater than 2 other elements outside the block. Hence, the…"
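Finishing the counting argument from the quote (a standard derivation, since the rest of the entry is cut off): the pivot beats the median in about n/10 of the n/5 groups, and each of those medians itself beats 2 more elements of its group of 5, so at least roughly 3·(n/10) = 3n/10 elements are smaller than the pivot, and symmetrically at least 3n/10 are larger. Each recursive call therefore gets at most 7n/10 elements, and computing the median of medians itself costs a recursive call on the n/5 group medians, giving

    T(n) <= T(n/5) + T(7n/10) + c*n

Since 1/5 + 7/10 = 9/10 < 1, the work shrinks geometrically and T(n) = O(n); this linear-time selection is what turns quicksort's worst case into O(n log n).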

Using red black trees for sorting

人盡茶涼 submitted on 2019-11-30 06:49:45

The worst-case running time of an insertion into a red-black tree is O(lg n), and an in-order walk visits every node, so the total worst-case runtime to print the sorted collection is O(n lg n). I am curious why red-black trees are not preferred for sorting over quicksort, whose average-case running time is O(n lg n). I see that maybe it is because red-black trees do not sort in place, but I am not sure, so maybe someone could help.

Answer 1: Which sorting algorithm performs better really depends on your data and situation. If you are talking in general/practical…
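As a concrete point of comparison (my illustration, not part of the answer): tree-sort is easy to express with std::multiset, which the major C++ standard libraries implement as a red-black tree, but it pays for one heap allocation per element and pointer-chasing traversals; quicksort's in-place, sequential partitioning is far friendlier to the cache, which is where its practical edge comes from.

    #include <set>
    #include <vector>

    // Tree-sort via std::multiset (a red-black tree in common library
    // implementations): n inserts at O(log n) each, then an in-order
    // walk -- O(n log n) overall, but with one node allocation per
    // element and poor cache locality compared to quicksort.
    std::vector<int> tree_sort(const std::vector<int>& input) {
        std::multiset<int> tree(input.begin(), input.end());
        return std::vector<int>(tree.begin(), tree.end());  // in-order walk
    }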

Why should Insertion Sort be used after threshold crossover in Merge Sort

旧街凉风 submitted on 2019-11-30 04:58:08

Question: I have read everywhere that for divide-and-conquer sorting algorithms like Merge-Sort and Quicksort, instead of recursing until only a single element is left, it is better to switch to Insertion-Sort once a certain threshold, say 30 elements, is reached. That is fine, but why only Insertion-Sort? Why not Bubble-Sort or Selection-Sort, both of which have similar O(N^2) performance? Insertion-Sort should come in handy only when many elements are pre-sorted (although that advantage should also…
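The short answer (well established, though the scraped answer is missing): insertion sort is adaptive, costing roughly one comparison and one shift per inversion, so the nearly-sorted small runs that appear near the bottom of the recursion finish in close to linear time, and its inner loop is tiny and cache-friendly; selection sort always performs Θ(N^2) comparisons, and bubble sort does strictly more work for no benefit. A sketch of merge sort with the cutoff (the threshold of 30 just echoes the question; the best value is machine-dependent):

    #include <vector>

    const int CUTOFF = 30;  // hand ranges this small to insertion sort

    void insertion_sort(std::vector<int>& v, int lo, int hi) {  // [lo, hi)
        for (int j = lo + 1; j < hi; ++j) {
            int key = v[j];
            int i = j - 1;
            while (i >= lo && v[i] > key) { v[i + 1] = v[i]; --i; }
            v[i + 1] = key;
        }
    }

    void merge_sort(std::vector<int>& v, std::vector<int>& buf, int lo, int hi) {
        if (hi - lo <= CUTOFF) { insertion_sort(v, lo, hi); return; }
        int mid = lo + (hi - lo) / 2;
        merge_sort(v, buf, lo, mid);
        merge_sort(v, buf, mid, hi);
        // Standard stable merge of the two sorted halves via the buffer.
        int i = lo, j = mid, k = lo;
        while (i < mid && j < hi) buf[k++] = (v[j] < v[i]) ? v[j++] : v[i++];
        while (i < mid) buf[k++] = v[i++];
        while (j < hi)  buf[k++] = v[j++];
        for (int t = lo; t < hi; ++t) v[t] = buf[t];
    }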

Why is merge sort preferred over quick sort for sorting linked lists?

大兔子大兔子 submitted on 2019-11-29 18:59:30

I read the following in a forum:

"Merge sort is very efficient for immutable data structures like linked lists, and quick sort is typically faster than merge sort when the data is stored in memory. However, when the data set is huge and is stored on external devices such as a hard drive, merge sort is the clear winner in terms of speed: it minimizes the expensive reads from the external drive, and when operating on linked lists it requires only a small, constant amount of auxiliary storage."

Can someone help me understand the above argument? Why is merge sort preferred for sorting huge linked…
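Part of the answer is structural: merge sort on a linked list never needs random access; splitting and merging just relink the next pointers, so there is no O(n) auxiliary array as in the array version, whereas quicksort's partition step relies on cheap indexed access that a list cannot provide. A C++ sketch of the pointer-relinking version (my illustration, not from the thread):

    struct Node {
        int val;
        Node* next;
    };

    // Merge two sorted lists by relinking nodes; no allocation at all.
    Node* merge(Node* a, Node* b) {
        Node dummy{0, nullptr};
        Node* tail = &dummy;
        while (a && b) {
            Node*& smaller = (b->val < a->val) ? b : a;
            tail->next = smaller;
            smaller = smaller->next;
            tail = tail->next;
        }
        tail->next = a ? a : b;
        return dummy.next;
    }

    // Split at the middle with the slow/fast-pointer trick, sort each
    // half, then merge: O(n log n) time, O(log n) stack, O(1) extra data.
    Node* merge_sort(Node* head) {
        if (!head || !head->next) return head;
        Node* slow = head;
        Node* fast = head->next;
        while (fast && fast->next) { slow = slow->next; fast = fast->next->next; }
        Node* second = slow->next;
        slow->next = nullptr;                  // cut the list in two
        return merge(merge_sort(head), merge_sort(second));
    }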