Difference in Space Complexity of different sorting algorithms

Submitted by 痞子三分冷 on 2021-02-07 04:14:54

Question


I am trying to understand the space complexities of different sorting algorithms.

http://bigocheatsheet.com/?goback=.gde_98713_member_241501229

From the link above, I found that the space complexity of bubble sort, insertion sort, and selection sort is O(1), whereas quicksort is O(log(n)) and merge sort is O(n).

We don't actually allocate extra memory in any of these algorithms, and we sort using the same array in each case. Why, then, are the space complexities different?


Answer 1:


When you run code, memory is assigned in two ways:

  1. Implicitly, as you set up function calls.

  2. Explicitly, as you create chunks of memory.

Quicksort is a good example of implicit memory use. While running, quicksort calls itself recursively to a depth of O(n) in the worst case and O(log(n)) in the average case. Each of those recursive calls takes O(1) space to keep track of, leading to O(n) worst-case and O(log(n)) average-case space.
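As a minimal sketch (my own Python, not from the original answer), here is a recursive quicksort whose only extra memory is the call stack. Each frame stores a constant amount of data (the `lo`/`hi` bounds and loop variables), so the space cost is proportional to the recursion depth:

```python
def quicksort(a, lo=0, hi=None):
    """In-place quicksort; extra space = O(recursion depth)."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    # Lomuto partition around the last element as pivot
    pivot = a[hi]
    i = lo
    for j in range(lo, hi):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    # Each recursive call adds one O(1) stack frame:
    # O(n) frames in the worst case, O(log n) on average.
    quicksort(a, lo, i - 1)
    quicksort(a, i + 1, hi)
```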

Mergesort is a good example of explicit memory use. It takes two blocks of sorted data, creates a place to put the merged result, and then merges the two blocks into it. Creating that destination requires O(n) space.
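A sketch of that explicit allocation (illustrative Python, assuming the standard top-down formulation): the `out` buffer in `merge` is the "place to put the merge", and it is where the O(n) extra space comes from.

```python
def merge(left, right):
    out = []  # explicit auxiliary buffer: O(len(left) + len(right)) space
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    out.extend(left[i:])   # copy whichever side still has elements
    out.extend(right[j:])
    return out

def mergesort(a):
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    return merge(mergesort(a[:mid]), mergesort(a[mid:]))
```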

To get down to O(1) memory you must both avoid allocating memory AND avoid calling yourself recursively. This is true of all of bubble, insertion, and selection sort.
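Insertion sort makes a convenient example of this (a sketch I've added, not from the original answer): it uses plain loops and a handful of scalar variables, so the extra space is O(1) regardless of input size.

```python
def insertion_sort(a):
    """In-place insertion sort: no allocation, no recursion, O(1) extra space."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Shift larger elements one slot to the right
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
```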




Answer 2:


It's important to keep in mind that there are a lot of different ways to implement each of these algorithms, and each different implementation has a different associated space complexity.

Let's start with merge sort. The most common implementation of mergesort on arrays works by allocating an external buffer in which to perform the merges of the individual ranges. This requires space to hold all the elements of the array, which takes extra space Θ(n). However, you could alternatively use an in-place merge for each merge, which means that the only extra space you'd need would be space for the stack frames of the recursive calls, dropping the space complexity down to Θ(log n), but increasing the runtime of the algorithm by a large constant factor. You could alternatively do a bottom-up mergesort using in-place merging, which requires only O(1) extra space but with a higher constant factor.

On the other hand, if you're merge sorting linked lists, then the space complexity is going to be quite different. You can merge linked lists in space O(1) because the elements themselves can easily be rewired. This means that the space complexity of merge sorting linked lists is Θ(log n) from the space needed to store the stack frames for the recursive calls.
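To make the rewiring concrete, here is a sketch (my own Python, with a hypothetical `Node` class) of merging two sorted singly linked lists in O(1) extra space: only pointer fields are updated, and no buffer is allocated.

```python
class Node:
    def __init__(self, val, nxt=None):
        self.val = val
        self.next = nxt

def merge_lists(a, b):
    """Merge two sorted linked lists by rewiring next pointers: O(1) extra space."""
    dummy = Node(None)   # sentinel head; the only allocation, O(1)
    tail = dummy
    while a and b:
        if a.val <= b.val:
            tail.next, a = a, a.next
        else:
            tail.next, b = b, b.next
        tail = tail.next
    tail.next = a or b   # splice on whichever list still has nodes
    return dummy.next
```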

Let's look at quicksort as another example. Quicksort doesn't normally allocate any external memory, but it does need space for the stack frames it uses. A naive implementation of quicksort might need space Θ(n) in the worst case for stack frames if the pivots always end up being the largest or smallest element of the array, since in that case you keep recursively calling the function on arrays of size n - 1, n - 2, n - 3, etc. However, there's a standard optimization you can perform that's essentially tail-call elimination: you recursively invoke quicksort on the smaller of the two halves of the array, then reuse the stack space from the current call for the larger half. This means that you only allocate new memory for a recursive call on subarrays of size at most n / 2, then n / 4, then n / 8, etc. so the space usage drops to O(log n).
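The "recurse on the smaller half, loop on the larger half" optimization can be sketched like this (illustrative Python; the function name and `partition` helper are my own). Because each recursive call handles at most half of the current range, the stack depth, and hence the extra space, is O(log n) even in the worst case:

```python
def partition(a, lo, hi):
    """Lomuto partition; returns the final pivot index."""
    pivot = a[hi]
    i = lo
    for j in range(lo, hi):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    return i

def quicksort_small_first(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    while lo < hi:
        p = partition(a, lo, hi)
        if p - lo < hi - p:
            quicksort_small_first(a, lo, p - 1)  # recurse into the smaller side
            lo = p + 1                           # loop on the larger side
        else:
            quicksort_small_first(a, p + 1, hi)
            hi = p - 1
```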




Answer 3:


I'll assume the array we're sorting is passed by reference, and I'm assuming the space for the array does not count in the space complexity analysis.

The space complexity of quicksort can be made O(n) (and expected O(log n) for randomized quicksort) with a careful implementation: for example, don't copy the sub-arrays, but just pass indices into the original array.

The O(n) for quicksort comes from the fact that the number of "nested" recursive calls can be O(n): think of what happens if you keep making unlucky choices for the pivot. While each stack frame takes only O(1) space, there can be O(n) of them. The expected depth (i.e. expected stack space) is O(log n) for randomized quicksort.

For merge sort I'd expect the space complexity to be O(log n) because you make at most O(log n) "nested" recursive calls.

The results you're citing also count the space taken by the arrays: the space complexity of merge sort is then O(log n) for the stack plus O(n) for the auxiliary array, giving O(n) total. For quicksort it is O(n) + O(n) = O(n).



Source: https://stackoverflow.com/questions/36363550/difference-in-space-complexity-of-different-sorting-algorithms
