complexity-theory

Optimizing Worst Case Time complexity to O(1) for python dicts [closed]

こ雲淡風輕ζ · Submitted on 2019-11-29 08:44:34
I have to store 500M two-digit unicode characters in memory (RAM). The data structure I use should have: Worst Case Space Complexity: O(n); Worst Case Time Complexity: O(1) <-- insertion, read, update, deletion. I was thinking of choosing dict, which is Python's implementation of a hash table, but the problem is that it guarantees O(1) time for the required operations only in the average case, not the worst case. I heard that if the number of entries is known, worst-case O(1) time can be achieved. How do I do that? In case that's not possible in python, can I access memory addresses and
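One standard way to get worst-case O(1) when the key universe is fixed and known in advance is direct addressing: give every possible key its own slot, so there is no hashing and therefore no collisions and no average-vs-worst-case gap. A minimal sketch, assuming (purely for illustration) that keys are two-character strings over a small fixed alphabet; the cost is O(|alphabet|^2) space regardless of how many keys are actually stored:

```python
# Direct-address table sketch: worst-case O(1) insert/read/update/delete
# because every possible key maps to a unique slot. The digit alphabet is
# a hypothetical stand-in for whatever fixed character set the real keys use.

ALPHABET = "0123456789"              # hypothetical fixed alphabet
BASE = len(ALPHABET)
_OFFSET = {ch: i for i, ch in enumerate(ALPHABET)}

class DirectAddressTable:
    _MISSING = object()              # sentinel so None can be a stored value

    def __init__(self):
        self._slots = [self._MISSING] * (BASE * BASE)

    def _index(self, key):           # O(1): two-char key -> unique slot number
        return _OFFSET[key[0]] * BASE + _OFFSET[key[1]]

    def insert(self, key, value):    # also covers update
        self._slots[self._index(key)] = value

    def read(self, key):
        v = self._slots[self._index(key)]
        return None if v is self._MISSING else v

    def delete(self, key):
        self._slots[self._index(key)] = self._MISSING
```

The trade-off is that space is proportional to the size of the key universe, not to the number of stored entries, which is exactly why hash tables exist for sparse universes.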

2^n complexity algorithm

ぃ、小莉子 · Submitted on 2019-11-29 08:42:58
Question: I need to implement and test an algorithm with 2^n complexity. I have been trying to find one for a while. If there is any way I can achieve this by implementation -- with an exact complexity of 2^n -- that would be optimal. If anyone knows of a location where I can find an example, or could help me implement one, that would be awesome :-). The basic operation can be anything, but a single statement like i++; would be best. Answer 1: Generate all subsets of a set with n elements. Added. The simplest way of
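Answer 1's suggestion can be sketched directly: enumerating all subsets of an n-element set via bitmasks runs the outer loop exactly 2^n times, so any single statement inside it executes Θ(2^n) times.

```python
# Generate all 2^n subsets of `items`. Bit i of `mask` decides whether
# items[i] belongs to the current subset; the loop body runs exactly
# 2^n times, giving the required complexity.

def all_subsets(items):
    n = len(items)
    subsets = []
    for mask in range(2 ** n):          # exactly 2^n iterations
        subsets.append([items[i] for i in range(n) if mask & (1 << i)])
    return subsets
```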

The complexity of n choose 2 is in Theta (n^2)?

醉酒当歌 · Submitted on 2019-11-29 06:14:52
Question: I'm reading Introduction to Algorithms, 3rd Edition (Cormen et al.), and on page 69, in "A brute-force solution", they state that n choose 2 = Θ(n^2). I would have thought it would be in Θ(n!) instead. Why is n choose 2 tightly bound to n squared? Thanks! Answer 1: n choose 2 is n(n-1)/2. This is n^2/2 - n/2. We can see that n(n-1)/2 = Θ(n^2) by taking the limit of their ratio as n goes to infinity: lim_{n→∞} (n^2/2 - n/2) / n^2 = 1/2. Since this comes out to a finite, nonzero
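A quick numeric check of the limit argument in Answer 1: the ratio C(n, 2) / n^2 approaches 1/2 as n grows, and a finite nonzero limit is exactly what places n choose 2 in Θ(n^2).

```python
# Ratio of C(n, 2) to n^2; by the limit argument above it tends to 1/2.

from math import comb  # Python 3.8+

def ratio(n):
    return comb(n, 2) / n**2
```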

How many comparisons will binary search make in the worst case using this algorithm?

别说谁变了你拦得住时间么 · Submitted on 2019-11-29 06:12:12
Question: Hi there, below is the pseudocode for my binary search implementation:

Input: (A[0...n-1], K)
begin
    l ← 0; r ← n-1
    while l ≤ r do
        m ← floor((l+r)/2)
        if K > A[m] then l ← m+1
        else if K < A[m] then r ← m-1
        else return m
        end if
    end while
    return -1 // key not found
end

I was just wondering how to calculate the number of comparisons this implementation would make in the worst case for a sorted array of size n. Would the number of comparisons be lg n + 1, or something different? Answer 1: The worst-case
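The pseudocode above can be transcribed to Python with a counter to measure comparisons empirically. Note that each loop iteration makes up to two element comparisons (K > A[m], then K < A[m]), so the worst case is about 2(⌊lg n⌋ + 1) element comparisons; the familiar ⌊lg n⌋ + 1 figure counts one three-way comparison per iteration.

```python
# Binary search matching the pseudocode, returning (index, comparison count).
# Each iteration does 1 or 2 element comparisons, both counted.

def binary_search(A, K):
    comparisons = 0
    l, r = 0, len(A) - 1
    while l <= r:
        m = (l + r) // 2
        comparisons += 1                 # K > A[m] ?
        if K > A[m]:
            l = m + 1
        else:
            comparisons += 1             # K < A[m] ?
            if K < A[m]:
                r = m - 1
            else:
                return m, comparisons    # found
    return -1, comparisons               # not found
```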

Avoid O(n^2) complexity for collision detection

做~自己de王妃 · Submitted on 2019-11-29 02:59:56
Question: I am developing a simple tile-based 2D game. I have a level populated with objects that can interact with the tiles and with each other. Checking collision with the tilemap is rather easy, and it can be done for all objects in linear time. But now I have to detect collisions between the objects, and checking every object against every other object results in quadratic complexity. I would like to avoid quadratic complexity. Are there any well-known methods to reduce
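One well-known method for exactly this situation is spatial hashing: bucket each object by the grid cell its position falls in, then test an object only against objects in its own and neighbouring cells. With roughly uniform object spread this is close to O(n) candidate pairs instead of O(n^2). A sketch, assuming objects are (x, y, id) tuples purely for illustration:

```python
# Spatial-hash broad phase: returns the set of id pairs whose owners are
# in the same or adjacent grid cells, i.e. the only pairs that need a
# precise (narrow-phase) collision check.

from collections import defaultdict

def candidate_pairs(objects, cell_size):
    grid = defaultdict(list)
    for obj in objects:
        x, y = obj[0], obj[1]
        grid[(int(x // cell_size), int(y // cell_size))].append(obj)

    pairs = set()
    for (cx, cy), bucket in grid.items():
        # gather objects in this cell and its 8 neighbours
        nearby = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nearby.extend(grid.get((cx + dx, cy + dy), ()))
        for a in bucket:
            for b in nearby:
                if a[2] < b[2]:          # order by id to avoid duplicates
                    pairs.add((a[2], b[2]))
    return pairs
```

Choosing the cell size on the order of the largest object's bounding box keeps every genuine collision within the neighbouring-cell check.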

c++ practical computational complexity of <cmath> SQRT()

两盒软妹~` · Submitted on 2019-11-29 02:23:51
What is the difference in CPU cycles (or, in essence, in 'speed') between x /= y; and #include <cmath> x = sqrt(y);? EDIT: I know the operations aren't equivalent; I'm just arbitrarily proposing x /= y as a benchmark for x = sqrt(y). osgx: The answer to your question depends on your target platform. Assuming you are using the most common x86 CPUs, I can give you this link: http://instlatx64.atw.hu/ This is a collection of measured instruction latencies (how long it takes the CPU to produce a result after it receives its arguments) and how they are pipelined, for many x86 and x86_64 processors. If your target is not
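For what it's worth, the two operations can be timed from Python with timeit, though this is only a sketch of the benchmarking approach: interpreter overhead dwarfs the single-instruction cost the C++ question is really about, so for CPU-cycle answers the latency tables linked above (FDIV/DIVSD vs FSQRT/SQRTSD) remain the honest source.

```python
# Rough micro-benchmark sketch: wall-clock time of a float division vs a
# math.sqrt call over 100k iterations. Variables live in the setup so the
# expressions aren't constant-folded away at compile time.

import timeit

def time_div():
    return timeit.timeit("x = z / y",
                         setup="y = 3.14; z = 1234.5678",
                         number=100_000)

def time_sqrt():
    return timeit.timeit("x = sqrt(z)",
                         setup="from math import sqrt; z = 1234.5678",
                         number=100_000)
```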

How do I explain what a “naive implementation” is? [closed]

蹲街弑〆低调 · Submitted on 2019-11-29 02:14:46
Question: (Closed 7 years ago as not a good fit for the Q&A format.) What is the clearest explanation of what computer scientists mean by "the naive implementation"? I need a good clear example which
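The textbook example here is Fibonacci: transcribing the recursive definition directly is obviously correct but recomputes the same subproblems exponentially many times, which is precisely the straightforward-but-inefficient code the term "naive implementation" usually refers to.

```python
# Naive implementation: a direct transcription of F(n) = F(n-1) + F(n-2).
# Correct, but exponential time because subproblems are recomputed.
def fib_naive(n):
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

# A non-naive version: same results in linear time.
def fib_fast(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```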

How is the implementation of LinkedHashMap different from HashMap?

走远了吗. · Submitted on 2019-11-29 02:13:03
Question: If LinkedHashMap's time complexity is the same as HashMap's, why do we need HashMap? What is the extra overhead LinkedHashMap has compared to HashMap in Java? Answer 1: LinkedHashMap will take more memory. Each entry in a normal HashMap just has the key and the value. Each LinkedHashMap entry has those references plus references to the next and previous entries. There's also a little bit more housekeeping to do, although that's usually irrelevant. Answer 2: If LinkedHashMap's time
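Answer 1's point can be sketched in Python terms (not Java's actual implementation): each entry carries two extra references (prev/next) threading the entries into a doubly-linked list. That is the memory and housekeeping overhead over a plain hash map; lookup stays O(1), and iteration follows insertion order.

```python
# Minimal LinkedHashMap-style structure: a dict for O(1) lookup plus a
# doubly-linked list of entries for insertion-order iteration. The extra
# prev/next references per entry are the overhead being discussed.

class LinkedHashMap:
    class _Entry:
        __slots__ = ("key", "value", "prev", "next")
        def __init__(self, key, value):
            self.key, self.value = key, value
            self.prev = self.next = None

    def __init__(self):
        self._table = {}                  # key -> _Entry (the hash part)
        self._head = self._tail = None    # ends of the linked list

    def put(self, key, value):
        e = self._table.get(key)
        if e is not None:
            e.value = value               # update in place, order unchanged
            return
        e = self._Entry(key, value)
        self._table[key] = e
        if self._tail is None:
            self._head = self._tail = e
        else:                             # append: the extra housekeeping
            self._tail.next, e.prev = e, self._tail
            self._tail = e

    def get(self, key):
        e = self._table.get(key)
        return None if e is None else e.value

    def keys(self):                       # insertion-order iteration
        e = self._head
        while e is not None:
            yield e.key
            e = e.next
```

(Python's built-in dict has preserved insertion order since 3.7, so this class is illustrative rather than something you'd use in practice.)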

Why siftDown is better than siftUp in heapify?

烈酒焚心 · Submitted on 2019-11-29 01:43:23
To build a MAX heap tree, we can either siftDown or siftUp. By sifting down, we start from the root and compare it to its two children, then swap it with the larger of the two children; if both children are smaller we stop, otherwise we continue sifting that element down until we reach a leaf node (or, again, until that element is larger than both of its children). Now we only need to do that n/2 times, because the number of leaves is n/2, and the leaves already satisfy the heap property when we finish heapifying the last element on the level before the last
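The siftDown-based heapify described above looks like this for a max-heap stored in a list. Only the first n/2 positions are sifted (indices n//2-1 down to 0), since the rest are leaves; the per-node work shrinks toward the bottom of the tree, which is why the total comes out to O(n) rather than O(n log n).

```python
# Restore the max-heap property at index i by sifting the element down,
# repeatedly swapping it with its larger child.
def sift_down(a, i, n):
    while True:
        largest = i
        left, right = 2 * i + 1, 2 * i + 2
        if left < n and a[left] > a[largest]:
            largest = left
        if right < n and a[right] > a[largest]:
            largest = right
        if largest == i:                  # both children smaller: done
            return
        a[i], a[largest] = a[largest], a[i]
        i = largest

# Heapify in place: sift down every non-leaf node, bottom-up.
def build_max_heap(a):
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):   # leaves (the last n/2) are skipped
        sift_down(a, i, n)
    return a
```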

C++ set: counting elements less than a value

跟風遠走 · Submitted on 2019-11-29 00:39:34
Question: Assuming I have an STL set<int> s and an int x, how can I count the number of elements in s that are less than x? I'm seeking an O(log n) solution (or similar; anything reasonably better than O(n)). I already know about std::distance(s.begin(), s.lower_bound(x)), but that's O(n), I believe, because set iterators aren't random-access. Answer 1: What you need is an 'order-statistics tree'. It is essentially an augmented (binary search) tree that supports the additional operation rank(x), which
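The rank idea from Answer 1 can be sketched with a Fenwick (binary indexed) tree instead of an augmented BST, assuming the keys fall in a known range [0, size). Both insert and rank are O(log size); std::set itself cannot answer rank queries efficiently, which is why an order-statistics structure is needed at all.

```python
# Fenwick tree over value counts: insert(x) adds one occurrence of x, and
# count_less_than(x) returns the number of inserted values < x, both in
# O(log size). Duplicates are counted.

class FenwickRank:
    def __init__(self, size):
        self._n = size
        self._tree = [0] * (size + 1)    # 1-based internal indexing

    def insert(self, x):
        i = x + 1
        while i <= self._n:
            self._tree[i] += 1
            i += i & (-i)

    def count_less_than(self, x):        # rank(x): prefix sum over [0, x)
        i, total = x, 0
        while i > 0:
            total += self._tree[i]
            i -= i & (-i)
        return total
```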