amortized-analysis

Efficiency of growing a dynamic array by a fixed constant each time?

送分小仙女 submitted on 2019-12-04 07:56:48
So when a dynamic array is doubled in size each time an element is added, I understand how the time complexity for expanding is O(n), n being the number of elements. What if, instead of doubling, the array is copied and moved to a new array that is only one size bigger when it is full? When we resize by some constant C, is the time complexity still O(n)? templatetypedef: If you grow by some fixed constant C, then no, the runtime will not be O(n). Instead, it will be Θ(n²). To see this, think about what happens if you do a sequence of C consecutive operations. Of those operations, C − 1 of them …
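To make the contrast concrete, here is a small counting sketch (illustrative code, not from the answer; `total_copies` and the growth lambdas are invented names). It tallies how many element copies n appends trigger under grow-by-C versus doubling:

```python
def total_copies(n, grow):
    """Count element copies made while appending n items to an array
    that starts empty and grows its capacity with grow(capacity)."""
    capacity, size, copies = 0, 0, 0
    for _ in range(n):
        if size == capacity:            # full: reallocate and copy everything
            copies += size
            capacity = grow(capacity)
        size += 1
    return copies

C = 4
print(total_copies(10_000, lambda cap: cap + C))          # ~ n^2 / (2C): quadratic
print(total_copies(10_000, lambda cap: max(1, 2 * cap)))  # < 2n: linear
```

With grow-by-C the copy costs form the arithmetic series C + 2C + 3C + …, which sums to about n²/(2C); with doubling they form the geometric series 1 + 2 + 4 + …, which stays below 2n.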

Haskell collections with guaranteed worst-case bounds for every single operation?

*爱你&永不变心* submitted on 2019-12-04 02:42:30
Such structures are necessary for real-time applications, for example user interfaces. (Users don't care if clicking a button takes 0.1s or 0.2s, but they do care if the 100th click forces an outstanding lazy computation and takes 10s to proceed.) I was reading Okasaki's thesis, Purely Functional Data Structures, and he describes an interesting general method for converting lazy data structures with amortized bounds into structures with the same worst-case bounds for every operation. The idea is to distribute computations so that at each update some portion of unevaluated thunks is forced. I …
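For contrast, here is a minimal Python sketch (invented names, not code from the thesis) of the classic two-stack queue whose bounds are only amortized. A single dequeue can pay for an O(n) reversal, which is exactly the kind of pause Okasaki's scheduling technique removes by forcing a small, fixed amount of the pending work at every operation:

```python
class TwoStackQueue:
    """FIFO queue with O(1) amortized enqueue/dequeue, but a worst-case
    O(n) dequeue when the whole back stack must be reversed at once."""

    def __init__(self):
        self.front, self.back = [], []

    def enqueue(self, x):
        self.back.append(x)            # O(1) always

    def dequeue(self):
        if not self.front:             # the occasional expensive step:
            while self.back:           # reverse back into front, O(n)
                self.front.append(self.back.pop())
        return self.front.pop()        # raises IndexError if empty
```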

Union/find algorithm without union by rank for disjoint-set forests data structure

佐手、 submitted on 2019-12-03 06:03:55
Here's a breakdown of the union/find algorithm for disjoint-set forests on Wikipedia: barebones disjoint-set forests (O(n)); with union by rank (now improved to O(log n)); with path compression (now improved to O(α(n)), effectively O(1)). Implementing union by rank necessitates that each node keep a rank field for comparison purposes. My question is: is union by rank worth this additional space? What happens if I skip union by rank and just do path compression instead? Is it good enough? What is the amortized complexity now? A comment is made that implies that union by rank …
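A path-compression-only implementation is short; here is a sketch in Python (the `find`/`union` names are the conventional ones, not taken from the question):

```python
def find(parent, x):
    """Find x's root, compressing the path so every visited node
    points directly at the root afterwards."""
    root = x
    while parent[root] != root:
        root = parent[root]
    while parent[x] != root:           # second pass: compress the path
        parent[x], x = root, parent[x]
    return root

def union(parent, a, b):
    """Naive linking (no rank): just hang one root under the other."""
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[rb] = ra

parent = list(range(10))               # ten singleton sets
union(parent, 1, 2)
union(parent, 2, 3)
assert find(parent, 3) == find(parent, 1)
```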

amortized analysis on min-heap?

南楼画角 submitted on 2019-11-30 18:24:51
Question: If we do n arbitrary insert and delete operations on an initially empty min-heap (with the location of each delete in the heap given), why is the amortized cost O(1) for insert and O(log n) for delete? a) insert O(log n), delete O(1); b) insert O(log n), delete O(log n); c) insert O(1), delete O(1); d) insert O(1), delete O(log n). Could anyone clarify this for me? Answer 1: Based on your question and responses to comments, I'm going to assume a binary heap. First, the worst case for insertion is O(log n) and the …
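For reference, the two operations under discussion look like this on an array-backed binary min-heap (a standard textbook sketch, not code from the question; Python's heapq module provides the same operations):

```python
def insert(heap, key):
    """Append and sift up: O(log n) swaps in the worst case,
    e.g. when keys arrive in decreasing order."""
    heap.append(key)
    i = len(heap) - 1
    while i > 0 and heap[(i - 1) // 2] > heap[i]:
        heap[i], heap[(i - 1) // 2] = heap[(i - 1) // 2], heap[i]
        i = (i - 1) // 2

def delete_min(heap):
    """Move the last leaf to the root and sift down: O(log n)."""
    root, last = heap[0], heap.pop()
    if heap:
        heap[0] = last
        i, n = 0, len(heap)
        while True:
            child = 2 * i + 1
            if child >= n:
                break
            if child + 1 < n and heap[child + 1] < heap[child]:
                child += 1             # pick the smaller child
            if heap[i] <= heap[child]:
                break
            heap[i], heap[child] = heap[child], heap[i]
            i = child
    return root
```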

Why is the time complexity of python's list.append() method O(1)?

前提是你 submitted on 2019-11-29 01:48:55
Question: As seen in the documentation for TimeComplexity, Python's list type is implemented using an array. So if an array is being used and we do a few appends, eventually you will have to reallocate space and copy all the information to the new space. After all that, how can it be O(1) worst case? Answer 1: If you look at the footnote in the document you linked, you can see that they include a caveat: these operations rely on the "Amortized" part of "Amortized Worst Case". Individual actions may take …
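You can watch CPython's over-allocation directly: the allocated size of a list jumps only occasionally, and most appends reuse spare capacity. (The exact growth pattern is an implementation detail and varies across versions; this loop just reports wherever the jumps happen.)

```python
import sys

lst, prev = [], sys.getsizeof([])
for i in range(64):
    lst.append(i)
    size = sys.getsizeof(lst)
    if size != prev:                   # a reallocation happened here
        print(f"len={len(lst):>3}  allocated={size} bytes")
        prev = size
```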

Amortized Analysis of Algorithms

我是研究僧i submitted on 2019-11-28 07:52:01
I am currently reading about amortized analysis. I am not able to fully understand how it is different from the normal analysis we perform to calculate the average or worst-case behaviour of algorithms. Can someone explain it with an example, such as sorting? Amortized analysis gives the average performance (over time) of each operation in the worst case. In a sequence of operations the worst case does not occur often: some operations may be cheap, some may be expensive. Therefore, a traditional worst-case per-operation analysis can give an overly pessimistic bound. For example, in a …
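The cut-off example is presumably the dynamic array. Assuming doubling growth, the aggregate method gives the bound in one line: each of the n appends writes one element, and the i-th resize copies 2^i elements, so

```latex
T(n) \;\le\; n + \sum_{i=0}^{\lfloor \lg n \rfloor} 2^{i} \;<\; n + 2n \;=\; 3n,
\qquad\text{hence}\qquad \frac{T(n)}{n} = O(1) \text{ amortized per append.}
```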

Amortized complexity in layman's terms?

末鹿安然 submitted on 2019-11-27 17:23:02
Can someone explain amortized complexity in layman's terms? I've been having a hard time finding a precise definition online and I don't know how exactly it relates to the analysis of algorithms. Anything useful, even if externally referenced, would be highly appreciated. Amortized complexity is the total expense per operation, evaluated over a sequence of operations. The idea is to guarantee the total expense of the entire sequence, while permitting individual operations to be much more expensive than the amortized cost. Example: the behavior of C++ std::vector<>. When push_back() increases …
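The same vector example can be phrased with the accounting (banker's) method: charge each push a constant 3 units, one for the write itself, one to copy the element at the next resize, and one toward copying an element from the older half of the array. A toy Python check (a hypothetical class, not std::vector) confirms the credit balance never goes negative:

```python
class Vec:
    CHARGE = 3                         # constant amortized price per push

    def __init__(self):
        self.data, self.size, self.cap, self.credits = [None], 0, 1, 0

    def push(self, x):
        self.credits += self.CHARGE
        if self.size == self.cap:      # full: double and pay per copy
            new = [None] * (2 * self.cap)
            for i in range(self.size):
                new[i] = self.data[i]
                self.credits -= 1      # one credit per element copied
            self.data, self.cap = new, 2 * self.cap
        self.data[self.size] = x
        self.size += 1
        self.credits -= 1              # one credit for the write itself
        assert self.credits >= 0       # a charge of 3 always suffices

v = Vec()
for i in range(1000):
    v.push(i)                          # no assertion fires
```

A charge of 2 is not enough: the balance would dip below zero at the third doubling, which is why the usual constant in this argument is 3.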

What is amortized analysis of algorithms? [closed]

孤者浪人 submitted on 2019-11-27 06:00:36
How is it different from asymptotic analysis? When do you use it, and why? I've read some articles that seem to be well written, like these: http://www.ugrad.cs.ubc.ca/~cs320/2010W2/handouts/aa-nutshell.pdf http://www.cs.princeton.edu/~fiebrink/423/AmortizedAnalysisExplained_Fiebrink.pdf but I still haven't fully understood these concepts. So, can anyone please simplify it for me? Amortized analysis doesn't naively multiply the number of invocations by the worst case for one invocation. For example, for a dynamic array that doubles in size when needed, normal asymptotic analysis would …
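For completeness, the potential method reaches the same conclusion for the doubling array (a standard textbook argument, assuming the array stays at least half full so the potential is nonnegative): define

```latex
\Phi(A) \;=\; 2\,\mathrm{size}(A) \;-\; \mathrm{capacity}(A) \;\ge\; 0 .
```

A cheap append has amortized cost 1 + ΔΦ = 1 + 2 = 3; an append that triggers a resize at size = capacity = m costs m + 1 but drops the potential from m to 2, for the same amortized (m + 1) + (2 − m) = 3. So every append is O(1) amortized even though some individual appends are Θ(n).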
