I have code for mergesort using a linked list, and it works fine. My question: what is the complexity of this algorithm? Is it O(n log(n))? Also, is it stable? I am interested because, as I
You've got a typo in your code. With it corrected, it is indeed stable, and of O(n log n) complexity. To be safe, though, you really should reimplement your merge as a loop instead of recursion: the C standard doesn't guarantee tail-call optimization, so a recursive merge can overflow the stack on long lists. Here is the split, with the typo fixed:
struct node *mergesort(struct node *head) {
    struct node *head_one;
    struct node *head_two;

    if (head == NULL || head->next == NULL)
        return head;

    /* split the list in two with slow/fast pointers */
    head_one = head;
    head_two = head->next;
    while (head_two != NULL && head_two->next != NULL) {
        head = head->next;                 /* slow: one step   */
        // head_two = head->next->next;    // -- the typo, corrected:
        head_two = head_two->next->next;   /* fast: two steps  */
    }
    head_two = head->next;
    head->next = NULL;                     /* terminate the first half */

    return merge(mergesort(head_one), mergesort(head_two));
}
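For reference, an iterative merge might look like the sketch below. The int `data` field is an assumption about your node type; the `<=` comparison on ties is what preserves stability, since on equal keys the element from the first list wins.

```c
#include <stddef.h>

struct node { int data; struct node *next; };  /* assumed node layout */

/* Iterative, stable merge of two sorted lists: O(1) extra space,
   no recursion, so the call stack stays flat. */
struct node *merge(struct node *a, struct node *b)
{
    struct node dummy;           /* temporary head: no special case for the first link */
    struct node *tail = &dummy;
    dummy.next = NULL;

    while (a != NULL && b != NULL) {
        if (a->data <= b->data) {  /* <= (not <) keeps the sort stable */
            tail->next = a;
            a = a->next;
        } else {
            tail->next = b;
            b = b->next;
        }
        tail = tail->next;
    }
    tail->next = (a != NULL) ? a : b;  /* append whichever list remains */
    return dummy.next;
}
```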
And while we're at it, change your workflow from
return merge(mergesort(head_one),mergesort(head_two));
to
struct node *p1, *p2;
// ......
p1 = mergesort(head_one);
p2 = mergesort(head_two);
return merge(p1,p2);
it'll be much easier on the stack this way (it will use much less of it).
In general, what you have here is known as top-down mergesort. You could also do it in a bottom-up fashion: first sort consecutive chunks of two elements each, then merge those into (now sorted) chunks of 4 elements, then merge those pairwise into chunks of 8 elements, and so on, until only one chunk is left: the sorted list.
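A bottom-up version might look like this sketch. The node layout with an int `data` field is an assumption; `merge2` is a minimal stable merge included only so the example is self-contained (your existing `merge` does the same job). Note that the two `take` calls must be sequenced in separate statements, because C leaves the evaluation order of function arguments unspecified.

```c
#include <stddef.h>

struct node { int data; struct node *next; };  /* assumed node layout */

/* Minimal stable merge, just to make this sketch self-contained. */
static struct node *merge2(struct node *a, struct node *b)
{
    struct node dummy, *tail = &dummy;
    dummy.next = NULL;
    while (a != NULL && b != NULL) {
        if (a->data <= b->data) { tail->next = a; a = a->next; }
        else                    { tail->next = b; b = b->next; }
        tail = tail->next;
    }
    tail->next = (a != NULL) ? a : b;
    return dummy.next;
}

/* Detach the first (at most) n nodes of *list; advance *list past them
   and return the detached chunk, NULL-terminated. */
static struct node *take(struct node **list, size_t n)
{
    struct node *chunk = *list, *p = chunk;
    if (p == NULL) return NULL;
    while (--n > 0 && p->next != NULL)
        p = p->next;
    *list = p->next;
    p->next = NULL;
    return chunk;
}

/* Bottom-up mergesort: pass 1 merges chunks of width 1 into sorted
   chunks of 2, pass 2 merges those into chunks of 4, and so on.
   No recursion at all. */
struct node *mergesort_bottomup(struct node *head)
{
    for (size_t width = 1; ; width *= 2) {
        struct node *rest = head, *result = NULL, *tail = NULL;
        size_t chunks = 0;

        while (rest != NULL) {
            struct node *a = take(&rest, width);  /* sequenced: argument */
            struct node *b = take(&rest, width);  /* order is unspecified */
            struct node *m = merge2(a, b);
            chunks++;
            if (tail == NULL) result = m;
            else              tail->next = m;
            while (m->next != NULL) m = m->next;  /* walk to the new tail */
            tail = m;
        }
        head = result;
        if (chunks <= 1)
            return head;  /* a single chunk left: the list is sorted */
    }
}
```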
To get extra fancy (and efficient), instead of starting with the 2-chunks, start by splitting the list into monotonic runs, i.e. increasing sequences and decreasing sequences, re-linking the latter in reverse as you go. This segments the original list according to its innate order, so there will likely be fewer initial chunks to merge; then proceed merging those pairwise repeatedly, as before, until only one is left in the end.
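The run-splitting step is the only subtle part; here is a sketch, again assuming an int `data` field. Note the strict `<` in the decreasing branch: only strictly decreasing runs may be reversed, otherwise equal elements would swap places and the sort would lose its stability. Merging the extracted runs pairwise then proceeds exactly as in the bottom-up scheme.

```c
#include <stddef.h>

struct node { int data; struct node *next; };  /* assumed node layout */

/* Detach one maximal run from *list. A non-decreasing run is returned
   as-is; a strictly decreasing run is re-linked in reverse on the way,
   so every run comes out ascending. */
static struct node *next_run(struct node **list)
{
    struct node *p = *list, *head = p;
    if (p == NULL) return NULL;

    if (p->next == NULL || p->data <= p->next->data) {
        /* non-decreasing run: keep walking while the order holds */
        while (p->next != NULL && p->data <= p->next->data)
            p = p->next;
        *list = p->next;
        p->next = NULL;
        return head;
    }

    /* strictly decreasing run: pop nodes onto rev, reversing as we go;
       strict < keeps equal elements in original order (stability) */
    struct node *rev = NULL;
    while (p != NULL && (rev == NULL || p->data < rev->data)) {
        struct node *nxt = p->next;
        p->next = rev;
        rev = p;
        p = nxt;
    }
    *list = p;
    return rev;
}
```

For example, the list 1, 2, 5, 4, 3, 3 splits into the runs (1, 2, 5), (3, 4) and (3), so only three chunks need merging instead of six.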