amortized-analysis

Amortized analysis on heaps

Submitted by 风格不统一 on 2019-12-24 14:27:55
Question: When I came to this topic, I read in this book, at the bottom of page 5-1, that binomial queues, Fibonacci heaps, and skew heaps have O(1) amortized cost for insert operations and O(log n) amortized cost for delete operations. Next, the authors write that the pairing heap also has O(1) amortized cost for insert operations and O(log n) amortized cost for delete operations. On this homework, however, the third (3) assignment and solution at this link, without specifying the type of heap, give O(log n) for insert and O…
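Pairing-heap insertion, for instance, is a single root-level comparison, which is why its cost is O(1) (even worst case); the O(log n) amortized bound is paid later, on delete-min, when the children are melded pairwise. A minimal sketch of that structure in Haskell (a min-heap; the names Heap, merge, insert, and deleteMin are my own, not from the book or the homework):

    -- A pairing heap: a root element plus a list of child heaps.
    data Heap a = Empty | Node a [Heap a]

    -- Merging is one comparison: the larger root becomes the first
    -- child of the smaller. O(1) worst case.
    merge :: Ord a => Heap a -> Heap a -> Heap a
    merge Empty h = h
    merge h Empty = h
    merge h1@(Node x hs1) h2@(Node y hs2)
      | x <= y    = Node x (h2 : hs1)
      | otherwise = Node y (h1 : hs2)

    -- Insert is just a merge with a singleton heap, hence O(1).
    insert :: Ord a => a -> Heap a -> Heap a
    insert x = merge (Node x [])

    -- Delete-min melds the children pairwise, then right to left;
    -- this is where the O(log n) amortized cost is incurred.
    deleteMin :: Ord a => Heap a -> Heap a
    deleteMin Empty       = Empty
    deleteMin (Node _ hs) = mergePairs hs
      where
        mergePairs []           = Empty
        mergePairs [h]          = h
        mergePairs (h1:h2:rest) = merge (merge h1 h2) (mergePairs rest)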

How can I ensure amortized O(n) concatenation from Data.Vector?

Submitted by 半世苍凉 on 2019-12-13 13:55:51
Question: I have an application where it is efficient to use vectors for one part of the code. However, during the computation I need to keep track of some of the elements. I have heard that you can get amortized O(n) concatenation from Data.Vector (by the usual array-growing trick), but I don't think I am doing it right. So let's say we have the following setup:

    import Data.Vector ((++), Vector)
    import Prelude hiding ((++))
    import Control.Monad.State.Strict

    data App = S (Vector Int)

    add :: Vector Int …
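The usual array-growing trick needs a mutable buffer whose capacity doubles whenever it fills: each doubling copies everything, but the doublings are geometrically spaced, so writing n elements costs O(n) in total. Immutable (++) alone recopies both vectors on every call, which is what makes naive repeated concatenation quadratic. A sketch of the doubling pattern with Data.Vector.Mutable (Buffer, newBuffer, and pushBuffer are illustrative names of mine, not part of the question's code):

    import Control.Monad.ST (ST)
    import Data.STRef
    import qualified Data.Vector.Mutable as M

    -- A growable buffer: a mutable vector plus a count of used slots.
    data Buffer s = Buffer (STRef s (M.MVector s Int)) (STRef s Int)

    newBuffer :: ST s (Buffer s)
    newBuffer = do
      v <- M.new 16
      Buffer <$> newSTRef v <*> newSTRef 0

    -- Append one element, doubling the capacity when full.
    pushBuffer :: Buffer s -> Int -> ST s ()
    pushBuffer (Buffer vref nref) x = do
      v <- readSTRef vref
      n <- readSTRef nref
      v' <- if n == M.length v
              then do v2 <- M.grow v (M.length v)  -- double the capacity
                      writeSTRef vref v2
                      return v2
              else return v
      M.write v' n x
      writeSTRef nref (n + 1)

The same argument covers appending whole vectors chunk by chunk: the total work over n appended elements stays O(n).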

Big-O: Getting all of the keys in a Java HashMap

Submitted by 我是研究僧i on 2019-12-13 06:25:20
Question: Does anyone know the amortized cost of keySet in a Java HashMap? Is it O(1)? Is iterating through the keys O(n)?

Answer 1: map.keySet() simply returns a reference to the key set that is already stored in the map, so it is clearly an O(1) operation. Iteration is then a loop over that set, which internally loops over the map's buckets, so it takes time proportional to n + m, where n is the number of keys and m is the capacity of the map. So if your map has a very large capacity…

Updating the maximum-sum subinterval of an array in sublinear time when an adjacent transposition is applied

Submitted by 白昼怎懂夜的黑 on 2019-12-13 04:02:28
Question: I asked this question for general transpositions and it seemed too hard; I got only one answer, which didn't seem to give a guaranteed asymptotic speed-up. So suppose we apply a sequence of adjacent transpositions to a numeric array (an adjacent transposition swaps two adjacent numbers), and we want to maintain the maximum-sum subinterval after each adjacent transposition. We could rerun Kadane's linear-time solution from scratch on the entire array after every adjacent…
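For reference, the linear-time baseline that would be rerun after every swap is Kadane's algorithm; a sketch (maxSubarraySum is my own name for it, using the convention that the empty array has sum 0):

    -- Kadane: maximum sum over all contiguous subintervals.
    -- best = best sum seen so far; cur = best sum ending here.
    maxSubarraySum :: [Int] -> Int
    maxSubarraySum []     = 0
    maxSubarraySum (x:xs) = go x x xs
      where
        go best _   []     = best
        go best cur (y:ys) =
          let cur'  = max y (cur + y)
              best' = max best cur'
          in  go best' cur' ys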

Is it more appropriate to say amortized O(1) or O(n) for insertion into an unsorted dynamic array?

Submitted by 余生颓废 on 2019-12-12 15:09:19
Question: This falls under "a software algorithm" from stackoverflow.com/help/on-topic; in this case, a software algorithm to add an item to a dynamic unsorted array. This is a chart we made in class about the runtimes of operations on different data structures. My question is about the runtime of inserting (or adding) a value into the dynamic unsorted array. Here is our code for doing this:

    public void insert(E value) {
        ensureCapacity(size + 1);
        elementData[size] = value;
        size++;
    }

    private void …
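The answer turns on the growth policy hidden inside ensureCapacity, which the snippet cuts off. Assuming the standard policy of doubling the capacity when full (an assumption, since that code isn't shown), the aggregate argument for $n$ insertions is

$$T(n) \;\le\; \underbrace{n}_{\text{writes}} \;+\; \underbrace{\sum_{k=0}^{\lfloor \log_2 n \rfloor} 2^k}_{\text{copies at each doubling}} \;\le\; n + 2n \;=\; 3n,$$

so the whole sequence costs O(n) and each insertion is O(1) amortized, even though a single insertion that triggers a resize costs O(n). Both phrasings in the title are therefore defensible; they answer different questions (per-operation worst case versus amortized cost per operation).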

Efficiency of growing a dynamic array by a fixed constant each time?

Submitted by 一个人想着一个人 on 2019-12-06 03:23:46
Question: So when a dynamic array is doubled in size each time it fills up, I understand how the total time complexity of expanding comes out to O(n), n being the number of elements. What if, instead of doubling, the array is copied and moved to a new array that is only one slot bigger when it is full? When we resize by some fixed constant C, is the time complexity always O(n)?

Answer 1: If you grow by some fixed constant C, then no, the runtime will not be O(n). Instead, it will be Θ(n²). To see this, think about what…
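The truncated answer is presumably heading for the standard aggregate count: growing by a fixed constant C forces a copy every C insertions, and the k-th copy moves about kC elements, so over n insertions the total copying work is

$$\sum_{k=1}^{n/C} kC \;=\; C \cdot \frac{\frac{n}{C}\left(\frac{n}{C}+1\right)}{2} \;=\; \Theta\!\left(\frac{n^2}{C}\right) \;=\; \Theta(n^2)$$

for constant C, i.e. Θ(n) amortized per insertion, whereas any geometric growth factor gives O(1) amortized.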

Haskell collections with guaranteed worst-case bounds for every single operation?

Submitted by 淺唱寂寞╮ on 2019-12-05 15:24:40
Question: Such structures are necessary for real-time applications, for example user interfaces. (Users don't care whether clicking a button takes 0.1 s or 0.2 s, but they do care if the 100th click forces an outstanding lazy computation and takes 10 s to proceed.) I was reading Okasaki's thesis, Purely Functional Data Structures, where he describes an interesting general method for converting lazy data structures with amortized bounds into structures with the same worst-case bounds for every operation. The idea…
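The method in question is scheduling: store, alongside the data, a pointer into the not-yet-evaluated suspension and force a small piece of it on every operation, so no single operation ever pays for a whole rotation at once. Okasaki's real-time queue is the canonical instance; a simplified sketch (the |rear| = |front| + 1 rotation invariant is only noted in comments, not enforced by types):

    -- Real-time queue: front stream, rear list, and a schedule that
    -- points into the unevaluated tail of the front.
    data Queue a = Queue [a] [a] [a]

    empty :: Queue a
    empty = Queue [] [] []

    -- Force one cell of the schedule per operation; when it runs out,
    -- |rear| = |front| + 1 and we start a new (lazy) rotation.
    exec :: [a] -> [a] -> [a] -> Queue a
    exec f r (_:s) = Queue f r s
    exec f r []    = let f' = rotate f r [] in Queue f' [] f'

    -- rotate f r a == f ++ reverse r ++ a, computed incrementally.
    rotate :: [a] -> [a] -> [a] -> [a]
    rotate []     [y]    a = y : a
    rotate (x:xs) (y:ys) a = x : rotate xs ys (y : a)
    rotate _      _      _ = error "rotate: invariant violated"

    snoc :: Queue a -> a -> Queue a
    snoc (Queue f r s) x = exec f (x : r) s

    uncons :: Queue a -> Maybe (a, Queue a)
    uncons (Queue []    _ _) = Nothing
    uncons (Queue (x:f) r s) = Just (x, exec f r s)

Because Haskell lists are lazy, each step of rotate is a suspension; forcing exactly one schedule cell per snoc or uncons spreads the O(n) rotation over n operations, turning the amortized bound into a worst-case one.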

Design a stack that can also dequeue in O(1) amortized time?

Submitted by 微笑、不失礼 on 2019-12-05 13:22:51
I have an abstract data type that can be viewed as a list stored left to right, with the following possible operations:

Push: add a new item to the left end of the list
Pop: remove the item at the left end of the list
Pull: remove the item at the right end of the list

Implement this using three stacks and constant additional memory, so that the amortized time for any push, pop, or pull operation is constant. The stacks have the basic operations isEmpty, Push, and Pop. Amortized time means: "If I spend this amount of time, I can spend another block of it and store it in a bank of time to be used…"
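For intuition about the time-banking argument (not the full three-stack, O(1)-extra-memory solution the exercise wants), here is the classic two-stack warm-up that handles only push and pull, using Haskell lists as stacks; supporting pop as well within the same bounds is exactly where the third stack comes in:

    -- Two-stack queue: push on the left, pull on the right. Each
    -- element is pushed, reversed, and popped at most once, so any
    -- n operations cost O(n) total: O(1) amortized apiece.
    data Q a = Q [a] [a]   -- left stack (newest first), right stack (oldest first)

    emptyQ :: Q a
    emptyQ = Q [] []

    push :: a -> Q a -> Q a
    push x (Q l r) = Q (x : l) r

    pull :: Q a -> Maybe (a, Q a)
    pull (Q [] [])   = Nothing
    pull (Q l  [])   = pull (Q [] (reverse l))  -- spend the banked time: one reversal
    pull (Q l (x:r)) = Just (x, Q l r)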

Need to find the amortized cost of a sequence using the potential function method

Submitted by ♀尐吖头ヾ on 2019-12-04 10:17:40
There is a sequence of n operations. The i-th operation costs 2i if i is an exact power of 2, costs 3i if i is an exact power of 3, and costs 1 for all other operations. Hi, first up I want to say that this is a homework problem and I don't want you to solve it for me. I have solved it using the aggregate method, for which I summed the series of powers of 2 and the series of powers of 3 and got an amortized cost of 10. I then checked it using the accounting method on really long sequences, and it did not fail. But my problem is how to prove that it would never fail; I can show for as long a sequence as I want…
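Since the asker explicitly doesn't want the solution, here is only the shape the potential-method proof takes (the standard CLRS framework, nothing problem-specific). Define a potential $\Phi$ on the state after each operation with $\Phi(D_0) = 0$ and $\Phi(D_i) \ge 0$ for all $i$, and set

$$\hat{c}_i \;=\; c_i + \Phi(D_i) - \Phi(D_{i-1}).$$

Then $\sum_{i=1}^{n} c_i \le \sum_{i=1}^{n} \hat{c}_i$, so exhibiting a $\Phi$ for which every $\hat{c}_i \le 10$ proves the bound for sequences of arbitrary length, which is exactly what checking long sequences one at a time cannot do. The work is in choosing $\Phi$, typically something that grows as $i$ approaches the next power of 2 and the next power of 3, so that enough credit is banked before each expensive operation.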