quicksort

Writing a parallel quick sort in C

南楼画角 submitted on 2019-12-08 06:09:48
Question: I need to write a parallel quicksort in C using pthreads. This is what I did so far.

    #include <stdio.h>
    #include <stdlib.h>    // EXIT_SUCCESS
    #include <string.h>    // strerror()
    #include <pthread.h>
    #include <unistd.h>    // sleep()
    #include <errno.h>

    #define SIZE_OF_DATASET 6

    void* quickSort(void* data);
    int partition(int* a, int, int);

    struct info {
        int start_index;
        int* data_set;
        int end_index;
    };

    int main(int argc, char **argv) {
        int
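The excerpt cuts off inside main(). For orientation, here is a minimal sketch of one common shape for such a program, assuming a fixed spawn depth below which the sort continues sequentially; all names below are illustrative, not the asker's:

    #include <pthread.h>
    #include <stdio.h>

    struct task { int *a; int lo; int hi; int depth; };

    static int partition(int *a, int lo, int hi) {      // Lomuto, pivot a[hi]
        int pivot = a[hi], i = lo - 1;
        for (int j = lo; j < hi; j++)
            if (a[j] <= pivot) { i++; int t = a[i]; a[i] = a[j]; a[j] = t; }
        int t = a[i + 1]; a[i + 1] = a[hi]; a[hi] = t;
        return i + 1;
    }

    static void *psort(void *arg) {
        struct task *t = arg;
        if (t->lo < t->hi) {
            int q = partition(t->a, t->lo, t->hi);
            struct task left  = { t->a, t->lo, q - 1, t->depth - 1 };
            struct task right = { t->a, q + 1, t->hi, t->depth - 1 };
            if (t->depth > 0) {
                pthread_t tid;                          // left half in a child thread
                pthread_create(&tid, NULL, psort, &left);
                psort(&right);                          // right half in this thread
                pthread_join(tid, NULL);                // left must finish before we return
            } else {
                psort(&left);                           // sequential below the cutoff
                psort(&right);
            }
        }
        return NULL;
    }

    int main(void) {
        int data[] = { 5, 2, 9, 1, 7, 3 };              // SIZE_OF_DATASET = 6
        struct task root = { data, 0, 5, 2 };           // depth 2: up to 4 concurrent tasks
        psort(&root);
        for (int i = 0; i < 6; i++) printf("%d ", data[i]);
        printf("\n");
        return 0;
    }

Compile with -pthread. The key invariant: left lives on the parent's stack, so the parent must pthread_join the child before returning.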

Why choose a random pivot in quicksort

萝らか妹 submitted on 2019-12-08 05:57:50
Question: Choosing a pivot at random gives O(n^2) worst-case running time, but when the pivot is chosen as the average of the list's min and max values, you get a worst case of O(n log n). Of course, each recursion then adds 2*O(n) work to find the min and max values, as opposed to the O(1) cost of the random generator. When implementing this as the pivot, the list gets sorted at the leaves of the recursion tree, whereas in the standard algorithm elements get sorted from the root
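For concreteness, the pivot rule the question describes, as a sketch (the function name is illustrative): one extra linear scan per partition step, hence O(n), versus O(1) for picking a random index:

    #include <stdio.h>

    // Midrange pivot: the midpoint of the current range's min and max values,
    // found by a single linear scan. May overflow for extreme int values.
    static int midrange_pivot(const int *a, int lo, int hi) {
        int min = a[lo], max = a[lo];
        for (int i = lo + 1; i <= hi; i++) {
            if (a[i] < min) min = a[i];
            if (a[i] > max) max = a[i];
        }
        return min + (max - min) / 2;
    }

    int main(void) {
        int a[] = { 8, 3, 14, 1, 9 };
        printf("pivot value: %d\n", midrange_pivot(a, 0, 4));  // (1 + 14) / 2 -> 7
        return 0;
    }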

3 partition mergesort

北战南征 submitted on 2019-12-08 05:11:20
Question: My professor assigned my class to implement mergesort on arrays with 3-part partitioning and merging; that was the exact question from the professor. The problem is that I have found no such thing as a 3-way mergesort, I only know of a 3-way quicksort, so I thought he probably meant to take an array, split it into 3 parts, and then mergesort those 3 parts together. I'm doing this by mergesorting the first 2 parts together and then mergesorting the combined part with the 3rd part. Did I think
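That interpretation can be made concrete. A minimal sketch of split-into-thirds mergesort, merging with two ordinary 2-way merges (first two parts together, then the result with the third); names here are illustrative:

    #include <stdio.h>
    #include <string.h>

    static void merge(int *a, int lo, int mid, int hi, int *tmp) {
        int i = lo, j = mid, k = lo;                    // merge a[lo..mid) with a[mid..hi)
        while (i < mid && j < hi) tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
        while (i < mid) tmp[k++] = a[i++];
        while (j < hi)  tmp[k++] = a[j++];
        memcpy(a + lo, tmp + lo, (size_t)(hi - lo) * sizeof *a);
    }

    static void msort3(int *a, int lo, int hi, int *tmp) {   // sorts a[lo..hi)
        int n = hi - lo;
        if (n < 2) return;
        int t1 = lo + n / 3, t2 = lo + 2 * n / 3;       // three roughly equal parts
        msort3(a, lo, t1, tmp);
        msort3(a, t1, t2, tmp);
        msort3(a, t2, hi, tmp);
        merge(a, lo, t1, t2, tmp);                      // combine first two thirds
        merge(a, lo, t2, hi, tmp);                      // then combine with the last third
    }

    int main(void) {
        int a[] = { 4, 9, 1, 7, 3, 8, 2 }, tmp[7];
        msort3(a, 0, 7, tmp);
        for (int i = 0; i < 7; i++) printf("%d ", a[i]);
        printf("\n");
        return 0;
    }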

Hoare's partition not working correctly (quicksort)

时间秒杀一切 submitted on 2019-12-08 04:57:19
Question: After following the quicksort and Hoare partition algorithm from Cormen, this is the code I was able to produce. The array comes out partly sorted, with uninitialized/garbage elements, and I can't for the life of me figure out why... I thought I followed the algorithm exactly as the book writes it. Here is the pseudocode straight from the book:

    HOARE-PARTITION(A, p, r)
    1  x = A[p]
    2  i = p - 1
    3  j = r + 1
    4  while TRUE
    5      repeat
    6          j = j - 1
    7      until A[j] <= x
    8      repeat
    9          i = i + 1
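The asker's C code is cut off, but a direct C transcription of the book's pseudocode looks like this (a sketch, not the asker's code). Two classic pitfalls are noted in the comments: Hoare's return value is not a final pivot position, and the 1-based pseudocode is easy to mis-translate into 0-based C, where an off-by-one scan reads out of bounds and produces exactly the kind of garbage values described.

    #include <stdio.h>

    static int hoare_partition(int *a, int p, int r) {
        int x = a[p];                        // pivot is the FIRST element
        int i = p - 1, j = r + 1;
        for (;;) {
            do { j--; } while (a[j] > x);    // "until A[j] <= x"
            do { i++; } while (a[i] < x);    // "until A[i] >= x"
            if (i < j) { int t = a[i]; a[i] = a[j]; a[j] = t; }
            else return j;                   // NOT a final pivot position
        }
    }

    static void quicksort(int *a, int p, int r) {
        if (p < r) {
            int q = hoare_partition(a, p, r);
            quicksort(a, p, q);              // NOT q - 1: pivot may still be in [p..q]
            quicksort(a, q + 1, r);
        }
    }

    int main(void) {
        int a[] = { 13, 19, 9, 5, 12, 8, 7, 4, 21, 2, 6, 11 };
        int n = (int)(sizeof a / sizeof *a);
        quicksort(a, 0, n - 1);
        for (int i = 0; i < n; i++) printf("%d ", a[i]);
        printf("\n");
        return 0;
    }

Recursing on [p..q-1] and [q+1..r], as one would with Lomuto's partition, is the usual mistake with this scheme and commonly leaves the array only partly sorted.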

Prove 3-Way Quicksort Big-O Bound

ε祈祈猫儿з submitted on 2019-12-07 19:39:27
Question: For 3-way Quicksort (dual-pivot quicksort), how would I go about finding the Big-O bound? Could anyone show me how to derive it?

Answer 1: There's a subtle difference between finding the complexity of an algorithm and proving it. To find the complexity of this algorithm, you can do as amit said in the other answer: you know that on average you split your problem of size n into three smaller problems of size n/3, so you will get, in log_3(n) steps on average, to problems of size 1. With experience, you will start getting the feeling of this approach and be able to deduce the complexity of
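For reference, the recurrence behind that argument, assuming an even three-way split and a linear-time partition pass (an average-case sketch, not a proof):

    T(n) = 3*T(n/3) + O(n),    T(1) = O(1)

Unrolling gives log_3(n) levels with O(n) total work per level, so T(n) = O(n log_3 n) = O(n log n), since log_3(n) and log_2(n) differ only by the constant factor log_2(3).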

Is Quick Sort a Divide & Conquer approach? [closed]

廉价感情. submitted on 2019-12-07 16:19:53
Question: Closed as opinion-based; it is not currently accepting answers. Closed 2 years ago.

I consider merge sort divide and conquer because:

Divide - the array is literally divided into subarrays without any processing (compare/swap), and the problem size is halved/quartered/...

Conquer - merge() those subarrays with processing (compare/swap)

Code gives an
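One way to make the contrast concrete is to look at where the compare/swap work sits in each skeleton. A compact, self-contained sketch (illustrative code, not from the question):

    #include <stdio.h>
    #include <string.h>

    static void merge(int *a, int lo, int mid, int hi) {    // combine step
        int tmp[64], i = lo, j = mid, k = 0;                // fixed buffer for the demo
        while (i < mid && j < hi) tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
        while (i < mid) tmp[k++] = a[i++];
        while (j < hi)  tmp[k++] = a[j++];
        memcpy(a + lo, tmp, (size_t)k * sizeof *a);
    }

    static void merge_sort(int *a, int lo, int hi) {        // a[lo..hi)
        if (hi - lo < 2) return;
        int mid = lo + (hi - lo) / 2;                       // divide: no comparisons at all
        merge_sort(a, lo, mid);
        merge_sort(a, mid, hi);
        merge(a, lo, mid, hi);                              // combine: all comparisons here
    }

    static void quick_sort(int *a, int lo, int hi) {        // a[lo..hi]
        if (lo >= hi) return;
        int x = a[hi], i = lo - 1;                          // divide: Lomuto partition,
        for (int j = lo; j < hi; j++)                       // all comparisons here
            if (a[j] <= x) { int t = a[++i]; a[i] = a[j]; a[j] = t; }
        int t = a[i + 1]; a[i + 1] = a[hi]; a[hi] = t;
        quick_sort(a, lo, i);
        quick_sort(a, i + 2, hi);                           // combine: nothing to do
    }

    int main(void) {
        int m[] = { 5, 1, 4, 2, 3 }, q[] = { 5, 1, 4, 2, 3 };
        merge_sort(m, 0, 5);
        quick_sort(q, 0, 4);
        for (int i = 0; i < 5; i++) printf("%d %d\n", m[i], q[i]);
        return 0;
    }

Merge sort's divide step is a single index computation and its combine step does the work; quicksort's divide step (partition) does all the comparisons and its combine step is empty, which is the usual basis for still calling both divide and conquer.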

Parallelizing quicksort makes it slower

风流意气都作罢 submitted on 2019-12-07 06:20:14
Question: I'm quicksorting a very large amount of data, and for fun am trying to parallelize it to speed up the sort. However, in its current form the multithreaded version is slower than the singlethreaded version due to synchronization chokepoints. Every time I spawn a thread, I get a lock on an int and increment it; every time a thread finishes, I again get the lock and decrement it, in addition to checking whether any threads are still running (int > 0). If not, I wake up my main thread and
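The excerpt cuts off, but the pattern it describes (take a lock and bump a shared counter on every spawn and every exit) serializes all threads on one mutex. One way to thin that out, as a sketch: keep the count in a C11 atomic and touch the mutex/condvar only once, when the last worker exits. All names here are illustrative, not the asker's:

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_int live_threads = 1;          // counts outstanding workers (+1 for main)
    static pthread_mutex_t done_mu = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  done_cv = PTHREAD_COND_INITIALIZER;

    static void worker_exit(void) {
        // fetch_sub returns the PREVIOUS value; 1 means we were the last worker
        if (atomic_fetch_sub(&live_threads, 1) == 1) {
            pthread_mutex_lock(&done_mu);
            pthread_cond_signal(&done_cv);       // wake main exactly once
            pthread_mutex_unlock(&done_mu);
        }
    }

    static void *worker(void *arg) {
        (void)arg;                               // real code would sort a subrange here
        worker_exit();
        return NULL;
    }

    int main(void) {
        for (int i = 0; i < 4; i++) {
            atomic_fetch_add(&live_threads, 1);  // register before spawning
            pthread_t t;
            pthread_create(&t, NULL, worker, NULL);
            pthread_detach(t);
        }
        worker_exit();                           // release main's own initial token
        pthread_mutex_lock(&done_mu);
        while (atomic_load(&live_threads) > 0)   // predicate loop: no lost wakeups
            pthread_cond_wait(&done_cv, &done_mu);
        pthread_mutex_unlock(&done_mu);
        printf("all workers finished\n");
        return 0;
    }

Joining children directly in their parent, with no shared counter at all, is another common fix.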

Estimating QuickSort's recursion depth

半腔热情 submitted on 2019-12-07 03:57:00
Question: The recursion depth is the maximum number of successive recursive calls before QuickSort hits its base case; note that it is a random variable, since it depends on the chosen pivot. What I want is to estimate the minimum-possible and maximum-possible recursion depth of QuickSort. The following procedure describes the way QuickSort is normally implemented:

    QUICKSORT(A, p, r)
        if p < r
            q ← PARTITION(A, p, r)
            QUICKSORT(A, p, q−1)
            QUICKSORT(A, q+1, r)
        return A

    PARTITION(A, p
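A quick way to get a feel for the bounds is to instrument the code: a perfectly balanced split at every level gives the minimum depth, about log2(n), while a maximally lopsided split (e.g. already-sorted input with a last-element pivot) gives a maximum that is linear in n. A sketch using Lomuto's partition (names illustrative, not the asker's):

    #include <stdio.h>

    static int max_depth = 0;                              // deepest call seen so far

    static int partition(int *a, int p, int r) {           // Lomuto, pivot a[r]
        int x = a[r], i = p - 1;
        for (int j = p; j < r; j++)
            if (a[j] <= x) { i++; int t = a[i]; a[i] = a[j]; a[j] = t; }
        int t = a[i + 1]; a[i + 1] = a[r]; a[r] = t;
        return i + 1;
    }

    static void quicksort(int *a, int p, int r, int depth) {
        if (depth > max_depth) max_depth = depth;          // record depth on entry
        if (p < r) {
            int q = partition(a, p, r);
            quicksort(a, p, q - 1, depth + 1);
            quicksort(a, q + 1, r, depth + 1);
        }
    }

    int main(void) {
        int n = 64, a[64];
        for (int i = 0; i < n; i++) a[i] = i;              // already sorted: worst case
        quicksort(a, 0, n - 1, 1);
        printf("n=%d, max recursion depth=%d\n", n, max_depth);  // prints depth 64 here
        return 0;
    }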

Haskell's quicksort - what is it really? [duplicate]

北城余情 submitted on 2019-12-07 00:34:55
Question: This question already has answers here: Pseudo-quicksort time complexity (6 answers). Closed 6 years ago.

As they say, "true quicksort sorts in-place". So the standard short Haskell code for quicksort,

    quicksort :: Ord a => [a] -> [a]
    quicksort [] = []
    quicksort (p:xs) = (quicksort lesser) ++ [p] ++ (quicksort greater)
      where lesser  = filter (< p) xs
            greater = filter (>= p) xs

what algorithm/computational process is it describing, after all? It surely isn't what Tony Hoare devised, lacking