complexity-theory

What is the lookup time complexity of HashSet<T>(IEqualityComparer<T>)?

非 Y 不嫁゛ submitted on 2019-11-27 02:38:57
Question: In C#.NET, I like using HashSets because of their supposed O(1) time complexity for lookups. If I have a large set of data that is going to be queried, I often prefer using a HashSet to a List, since it has this time complexity. What confuses me is the constructor for the HashSet, which takes IEqualityComparer as an argument: http://msdn.microsoft.com/en-us/library/bb359100.aspx In the link above, the remarks note that the "constructor is an O(1) operation," but if this is the case, I am
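A rough Python analogue (my own illustration, not from the post and not the C# API) of supplying a custom equality comparer: as long as the hash is consistent with the equality test and spreads keys well, membership checks stay O(1) on average. The CaseInsensitive wrapper below is purely hypothetical.

    class CaseInsensitive:
        # plays the role of an IEqualityComparer: hashing and equality
        # both ignore case, and they agree with each other
        def __init__(self, s):
            self.s = s
        def __hash__(self):
            return hash(self.s.lower())
        def __eq__(self, other):
            return self.s.lower() == other.s.lower()

    names = {CaseInsensitive("Alice"), CaseInsensitive("Bob")}
    print(CaseInsensitive("ALICE") in names)   # True: one hash, one comparison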

Efficiently find all connected induced subgraphs

可紊 submitted on 2019-11-27 02:19:48
Question: Is there an efficient(*) algorithm to find all the connected (induced) subgraphs of a connected undirected vertex-labelled graph? (*) I appreciate that, in the general case, any such algorithm may have O(2^n) complexity because, for a clique (Kn), there are 2^n connected subgraphs. However, the graphs I'm typically dealing with will have far fewer connected subgraphs, so I'm looking for a way to generate them without having to consider all 2^n subgraphs and throw away those that aren't
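A minimal Python sketch of one standard enumeration scheme (my own illustration, not taken from the thread): grow a connected vertex set one neighbour at a time, and after branching on a vertex mark it forbidden so no subgraph is produced twice. The work is proportional to the number of subgraphs emitted rather than to 2^n.

    def connected_induced_subgraphs(adj):
        # adj maps each vertex to the set of its neighbours (no self-loops)
        results = []

        def grow(current, extension, forbidden):
            results.append(frozenset(current))
            extension = set(extension)
            forbidden = set(forbidden)
            while extension:
                u = extension.pop()
                # branch 1: every connected subgraph extending `current` with u
                grow(current | {u},
                     extension | (adj[u] - current - forbidden - {u}),
                     forbidden)
                # branch 2: from here on u is excluded, so nothing repeats
                forbidden.add(u)

        done = set()
        for v in adj:                        # subgraphs whose first chosen vertex is v
            grow({v}, adj[v] - done, done)
            done.add(v)
        return results

    # the path 1-2-3 has six connected induced subgraphs
    print(connected_induced_subgraphs({1: {2}, 2: {1, 3}, 3: {2}}))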

Intersection complexity

こ雲淡風輕ζ submitted on 2019-11-27 02:13:33
Question: In Python you can get the intersection of two sets by doing:

    >>> s1 = {1, 2, 3, 4, 5, 6, 7, 8, 9}
    >>> s2 = {0, 3, 5, 6, 10}
    >>> s1 & s2
    set([3, 5, 6])
    >>> s1.intersection(s2)
    set([3, 5, 6])

Does anybody know the complexity of this intersection ( & ) algorithm? EDIT: In addition, does anyone know what data structure is behind a Python set? Answer 1: The answer appears to be a search engine query away. You can also use this direct link to the Time Complexity page at python.org. Quick summary: Average:
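A sketch of why the average case is O(min(len(s1), len(s2))) (my own illustration, not from the answer): the operation only has to walk the smaller set and probe the larger one, and each probe into a hash-based set is O(1) on average.

    def intersection(s1, s2):
        # walk the smaller set, test membership in the larger one;
        # ~min(len(s1), len(s2)) probes, each O(1) on average
        small, big = (s1, s2) if len(s1) <= len(s2) else (s2, s1)
        return {x for x in small if x in big}

    print(intersection({1, 2, 3, 4, 5, 6, 7, 8, 9}, {0, 3, 5, 6, 10}))  # {3, 5, 6}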

Finding the first n largest elements in an array

大城市里の小女人 submitted on 2019-11-27 02:13:08
Question: I have got an array containing unique elements. I need to find the first n largest elements in the array with the least complexity possible. The solution that I could think of so far has a complexity of O(n^2):

    int A[]={1,2,3,8,7,5,3,4,6};
    int max=0;
    int i,j;
    int B[4]={0,0,0,0};   // where n=4
    for(i=0;i<A.length();i++) {
        if(A[i]>max)
            max=A[i];
    }
    B[0]=max;
    for(i=1;i<n;i++){
        max=0;
        for(j=0;j<A.length();j++){
            if(A[j]>max && A[j]<B[i-1])
                max=A[j];
        }
        B[i]=max;
    }

Please, if anyone can come up with a
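A common O(N log n) alternative (my sketch, not an answer from the thread), shown in Python: keep a min-heap of the n largest values seen so far, so each of the N elements costs at most one O(log n) heap operation.

    import heapq

    def n_largest(values, n):
        heap = []                            # min-heap holding the n largest so far
        for x in values:
            if len(heap) < n:
                heapq.heappush(heap, x)
            elif x > heap[0]:
                heapq.heapreplace(heap, x)   # evict the current smallest
        return sorted(heap, reverse=True)

    print(n_largest([1, 2, 3, 8, 7, 5, 3, 4, 6], 4))   # [8, 7, 6, 5]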

algorithm to find longest non-overlapping sequences

爱⌒轻易说出口 submitted on 2019-11-27 01:41:29
Question: I am trying to find the best way to solve the following problem. By best I mean the least complex. As input, a list of tuples (start, length) such as: [(0,5),(0,1),(1,9),(5,5),(5,7),(10,1)]. Each element represents a sequence by its start and length; for example (5,7) is equivalent to the sequence (5,6,7,8,9,10,11) - a list of 7 elements starting at 5. One can assume that the tuples are sorted by the start element. The output should return a non-overlapping combination of tuples that represent
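The post is cut off, but under one common reading (pick pairwise non-overlapping tuples covering as many positions as possible) this is classic weighted interval scheduling, solvable in O(n log n). A Python sketch under that assumption, not the original poster's stated requirement:

    import bisect

    def max_covered_length(tuples):
        # model (start, length) as the half-open interval [start, start + length)
        items = sorted(((s, s + l) for s, l in tuples), key=lambda iv: iv[1])
        ends = [e for _, e in items]
        best = [0] * (len(items) + 1)   # best[i]: optimum over the first i intervals
        for i, (s, e) in enumerate(items, 1):
            j = bisect.bisect_right(ends, s, 0, i - 1)   # intervals ending at or before s
            best[i] = max(best[i - 1],                   # skip interval i
                          best[j] + (e - s))             # take interval i
        return best[-1]

    print(max_covered_length([(0, 5), (0, 1), (1, 9), (5, 5), (5, 7), (10, 1)]))  # 12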

Lower bound on heapsort?

旧巷老猫 submitted on 2019-11-27 01:38:14
Question: It's well-known that the worst-case runtime for heapsort is Ω(n lg n), but I'm having trouble seeing why this is. In particular, the first step of heapsort (making a max-heap) takes time Θ(n). This is then followed by n heap deletions. I understand why each heap deletion takes time O(lg n); rebalancing the heap involves a bubble-down operation that takes time O(h) in the height of the heap, and h = O(lg n). However, what I don't see is why this second step should take Ω(n lg n). It seems like
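A sketch of the standard counting argument (my addition, not part of the original post): heapsort is a comparison sort, and any comparison sort must be able to distinguish all n! input orderings, so its worst case is at least

    lg(n!) = lg n + lg(n-1) + ... + lg 1
           >= (n/2) * lg(n/2)
           =  Ω(n lg n)

Intuitively, roughly half of the n deletions happen while the heap still holds at least n/2 elements, so on bad inputs those deletions each bubble down through Ω(lg n) levels.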

What is the complexity of this simple piece of code?

删除回忆录丶 submitted on 2019-11-27 01:17:58
I'm pasting this text from an ebook I have. It says the complexity is O(n^2) and also gives an explanation for it, but I fail to see how. Question: What is the running time of this code?

    public String makeSentence(String[] words) {
        StringBuffer sentence = new StringBuffer();
        for (String w : words) sentence.append(w);
        return sentence.toString();
    }

The answer the book gave: O(n^2), where n is the number of letters in sentence. Here's why: each time you append a string to sentence, you create a copy of sentence and run through all the letters in sentence to copy them over. If you have to iterate
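A Python analogue of the book's point (my illustration, not from the book): with immutable strings, appending in a loop conceptually copies everything built so far on every step, giving O(n^2) total work in the number of letters, while a single join pass is O(n).

    def make_sentence_quadratic(words):
        sentence = ""
        for w in words:
            # strings are immutable, so each concatenation builds a new string
            # containing everything accumulated so far
            sentence = sentence + w
        return sentence

    def make_sentence_linear(words):
        # copies each letter exactly once
        return "".join(words)

    print(make_sentence_quadratic(["the ", "quick ", "fox"]))
    print(make_sentence_linear(["the ", "quick ", "fox"]))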

Constructing efficient monad instances on `Set` (and other containers with constraints) using the continuation monad

主宰稳场 submitted on 2019-11-27 01:09:30
Question: Set, similarly to [], has perfectly well-defined monadic operations. The problem is that they require that the values satisfy the Ord constraint, and so it's impossible to define return and >>= without any constraints. The same problem applies to many other data structures that require some kind of constraint on possible values. The standard trick (suggested to me in a haskell-cafe post) is to wrap Set into the continuation monad. ContT doesn't care if the underlying type functor has any

Intuitive explanation for why QuickSort is n log n?

陌路散爱 submitted on 2019-11-26 23:56:44
Question: Is anybody able to give a 'plain English', intuitive yet formal, explanation of what makes QuickSort n log n? From my understanding it has to make a pass over n items, and it does this log n times... I'm not sure how to put into words why it does this log n times. Answer 1: Each partitioning operation takes O(n) operations (one pass over the array). On average, each partitioning divides the array into two parts (which adds up to log n levels of partitioning). In total we have O(n * log n) operations. I.e. in
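A worked version of that recurrence (my addition, not part of the original answer), for the balanced case in which every partition splits its range roughly in half:

    T(n) = 2*T(n/2) + c*n        (c*n for the partition pass, then two halves)
         = 4*T(n/4) + 2*c*n
         = 8*T(n/8) + 3*c*n
         ...
         = n*T(1) + c*n*lg n     (after lg n halvings)
         = O(n lg n)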

multiset, map and hash map complexity

爱⌒轻易说出口 submitted on 2019-11-26 23:37:31
I would like to know the complexity in Big O notation of the STL multiset, map and hash map classes when:

- inserting entries
- accessing entries
- retrieving entries
- comparing entries

map, set, multimap, and multiset: these are implemented using a red-black tree, a type of balanced binary search tree. They have the following asymptotic run times:

- Insertion: O(log n)
- Lookup: O(log n)
- Deletion: O(log n)

hash_map, hash_set, hash_multimap, and hash_multiset: these are implemented using hash tables. They have the following runtimes:

- Insertion: O(1) expected, O(n) worst case
- Lookup: O(1) expected, O(n)
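A small Python demonstration of the hash-table worst case (my illustration, not from the answer): if every key lands in the same bucket, lookups degrade from the expected O(1) to an O(n) scan.

    class Collide:
        # every instance hashes to the same bucket on purpose
        def __init__(self, x):
            self.x = x
        def __hash__(self):
            return 0
        def __eq__(self, other):
            return self.x == other.x

    table = {Collide(i): i for i in range(1000)}   # building this is already quadratic
    print(table[Collide(999)])                     # 999, found only after probing ~n entries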