complexity-theory

How is the implementation of LinkedHashMap different from HashMap?

て烟熏妆下的殇ゞ submitted on 2019-11-30 04:14:44
If LinkedHashMap's time complexity is the same as HashMap's, why do we need HashMap at all? What extra overhead does LinkedHashMap have compared to HashMap in Java? LinkedHashMap uses more memory: each entry in a plain HashMap holds just the key and the value, while each LinkedHashMap entry also holds references to the next and previous entries. There is also a little more housekeeping to do, although that is usually irrelevant. As for why we still need HashMap despite equal complexity: you should not confuse complexity with performance.
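To see what the extra references buy, a quick sketch (class name and keys are made up for illustration): LinkedHashMap's next/previous links give it a predictable iteration order, which a plain HashMap does not guarantee, while both keep expected O(1) get/put.

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class OrderDemo {
    public static void main(String[] args) {
        Map<String, Integer> linked = new LinkedHashMap<>();
        linked.put("banana", 2);
        linked.put("apple", 1);
        linked.put("cherry", 3);
        // LinkedHashMap iterates in insertion order:
        System.out.println(linked.keySet()); // [banana, apple, cherry]

        // HashMap makes no ordering guarantee; the order depends on hashing.
        Map<String, Integer> plain = new HashMap<>(linked);
        System.out.println(plain.keySet());
    }
}
```

Only memory use and iteration behavior differ; the asymptotic complexity of the basic operations is the same.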

Complexity of factorial recursive algorithm

不问归期 submitted on 2019-11-30 03:17:58
Today in class my teacher wrote this recursive factorial algorithm on the blackboard: int factorial(int n) { if (n == 1) return 1; else return n * factorial(n-1); } She said that it has a cost of T(n-1) + 1. Then, with the iteration method, she wrote T(n) = T(n-1) + 1 = T(n-2) + 2 = T(n-3) + 3 = ... = T(n-j) + j, and the algorithm stops when n - j = 1, so j = n - 1. After that, she substituted j into T(n-j) + j and obtained T(1) + n - 1. She then said that n - 1 = 2^(log2(n-1)), so the cost of the algorithm is exponential. I really got lost in the last two steps. Can someone please explain them to me?
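For reference, the substitution itself only yields a linear bound; written out (assuming unit cost per call and T(1) constant), the unrolling goes:

```latex
\begin{aligned}
T(n) &= T(n-1) + 1 = T(n-2) + 2 = \cdots = T(n-j) + j,\\
n - j &= 1 \implies j = n - 1,\\
T(n) &= T(1) + (n - 1) = \Theta(n).
\end{aligned}
```

Rewriting n - 1 as 2^(log2(n-1)) is an identity, not evidence of exponential growth: the running time is linear in n, though it is exponential in the number of bits needed to write n down, which may be what the teacher meant.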

Quickly checking if set is superset of stored sets

狂风中的少年 submitted on 2019-11-30 02:31:24
The problem: I am given N arrays of C booleans. I want to organize these into a data structure that allows me to perform the following operation as fast as possible: given a new array, return true if this array is a "superset" of any of the stored arrays. By superset I mean: A is a superset of B if A[i] is true for every i where B[i] is true; if B[i] is false, then A[i] can be anything. Or, in terms of sets instead of arrays: store N sets (each with C possible elements) in a data structure so you can quickly look up whether a given set is a superset of any of the stored sets. Building the
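One simple baseline for the stored-sets structure (a sketch; the class and method names are made up, and it assumes C ≤ 64 so each array fits in one long): encode every array as a bitmask, then the query is a superset of a stored set exactly when the stored mask has no bit set outside the query mask.

```java
import java.util.ArrayList;
import java.util.List;

class SupersetIndex {
    private final List<Long> stored = new ArrayList<>();  // one bitmask per stored array

    static long toMask(boolean[] a) {
        long m = 0L;
        for (int i = 0; i < a.length; i++) {
            if (a[i]) m |= 1L << i;  // bit i set iff element i is present
        }
        return m;
    }

    void add(boolean[] a) {
        stored.add(toMask(a));
    }

    // True if the query contains every true position of at least one stored array.
    boolean isSupersetOfAny(boolean[] query) {
        long q = toMask(query);
        for (long s : stored) {
            if ((s & ~q) == 0) return true;  // no bit of s falls outside q
        }
        return false;
    }
}
```

This is still O(N) masks per query (each check is one AND), so it is only the starting point the question asks to beat, but it shows the subset test as a single bitwise operation.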

Fast weighted random selection from very large set of values

橙三吉。 submitted on 2019-11-30 02:00:35
I'm currently working on a problem that requires the random selection of an element from a set. Each element has a weight (selection probability) associated with it. My problem is that for sets with a small number of elements, say 5-10, the complexity (running time) of my solution is acceptable; however, as the number of elements increases, say to 1K or 10K, the running time becomes unacceptable. My current strategy is: select a random value X in the range [0,1); iterate over the elements, summing their weights, until the sum is greater than X. The element which caused the sum to exceed X is the one selected.
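A common way to speed up that linear scan (a sketch; the class name is made up): precompute the running sums once in O(n), then binary-search the cumulative array on each draw, so each selection drops from O(n) to O(log n).

```java
import java.util.Random;

class WeightedPicker {
    private final double[] cumulative;  // cumulative[i] = sum of weights[0..i]
    private final Random rng = new Random();

    WeightedPicker(double[] weights) {
        cumulative = new double[weights.length];
        double sum = 0;
        for (int i = 0; i < weights.length; i++) {
            sum += weights[i];
            cumulative[i] = sum;
        }
    }

    int pick() {
        // Scale the random draw by the total so weights need not sum to 1.
        double x = rng.nextDouble() * cumulative[cumulative.length - 1];
        // Binary search: first index whose cumulative weight exceeds x.
        int lo = 0, hi = cumulative.length - 1;
        while (lo < hi) {
            int mid = (lo + hi) >>> 1;
            if (cumulative[mid] <= x) lo = mid + 1;
            else hi = mid;
        }
        return lo;
    }
}
```

If the weights also change frequently, a Fenwick tree or the alias method (O(1) per draw after O(n) setup) are the usual next steps.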

What is the complexity of these Dictionary methods?

本秂侑毒 submitted on 2019-11-30 01:20:41
Question: Can anyone explain the complexity of the following Dictionary methods? ContainsKey(key) and Add(key, value). I'm trying to figure out the complexity of a method I wrote: public void DistinctWords(String s) { Dictionary<string,string> d = new Dictionary<string,string>(); String[] splitted = s.Split(' '); foreach (String ss in splitted) { if (!d.ContainsKey(ss)) d.Add(ss, null); } } I assumed that the 2 dictionary methods are of log(n) complexity, where n is the number of keys in the dictionary
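For comparison, the same loop in Java (a sketch, not the asker's code): hash-based lookup and insert are expected O(1), not O(log n), so the whole method is O(n) on average in the number of words.

```java
import java.util.LinkedHashSet;
import java.util.Set;

public class DistinctWords {
    // Each add() is expected O(1) for a hash-based set, so collecting
    // the distinct words of n words costs O(n) on average.
    static Set<String> distinctWords(String s) {
        Set<String> seen = new LinkedHashSet<>();
        for (String word : s.split(" ")) {
            seen.add(word);  // a duplicate word is simply ignored
        }
        return seen;
    }
}
```

The log(n) assumption would hold for a tree-based map (C++ std::map, Java TreeMap, .NET SortedDictionary), but Dictionary and HashMap are hash tables.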

Quicksort complexity when all the elements are same?

孤街醉人 submitted on 2019-11-29 23:11:19
I have an array of N numbers which are all the same. I am applying quicksort to it. What should be the time complexity of the sort in this case? I googled around for this question but did not find an exact explanation. Any help would be appreciated. This depends on the implementation of quicksort. The traditional implementation, which partitions into 2 sections (< and >=), will be O(n*n) on identical input. While no swaps will necessarily occur, it will cause n recursive calls to be made, each of which needs to compare the pivot against n-recursionDepth elements, i.e. O(n*n) comparisons
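The quadratic blow-up disappears with 3-way ("fat pivot") partitioning, which groups keys equal to the pivot in the middle and recurses only on the strictly-smaller and strictly-larger parts, so an all-equal array finishes after a single O(n) pass. A sketch:

```java
public class ThreeWayQuicksort {
    // Dutch-national-flag partitioning: after the loop,
    // a[lo..lt-1] < pivot, a[lt..gt] == pivot, a[gt+1..hi] > pivot.
    static void sort(int[] a, int lo, int hi) {
        if (lo >= hi) return;
        int lt = lo, gt = hi, i = lo + 1;
        int pivot = a[lo];
        while (i <= gt) {
            if (a[i] < pivot) swap(a, lt++, i++);
            else if (a[i] > pivot) swap(a, i, gt--);
            else i++;
        }
        sort(a, lo, lt - 1);  // elements strictly less than pivot
        sort(a, gt + 1, hi);  // elements strictly greater than pivot
    }

    static void swap(int[] a, int x, int y) {
        int t = a[x];
        a[x] = a[y];
        a[y] = t;
    }
}
```

On identical input the equal band covers the whole array, both recursive calls are empty, and the total work is O(n) instead of O(n*n).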

Understanding Ukkonen's algorithm for suffix trees [duplicate]

三世轮回 submitted on 2019-11-29 22:19:03
This question already has an answer here: Ukkonen's suffix tree algorithm in plain English (7 answers). I'm doing some work with Ukkonen's algorithm for building suffix trees, but I'm not understanding some parts of the author's explanation of its linear-time complexity. I have learned the algorithm and have coded it, but the paper I'm using as the main source of information (linked below) is rather confusing in places, so it's not really clear to me why the algorithm is linear. Any help? Thanks. Link to Ukkonen's paper: http://www.cs.helsinki.fi/u/ukkonen/SuffixT1withFigs.pdf Find a

What is the time complexity of Python list's count() function?

 ̄綄美尐妖づ submitted on 2019-11-29 22:13:08
Question: I'm trying to figure out the time complexity of the count() function. For example, if there is a list [1, 2, 2, 3] and [1, 2, 2, 3].count(2) is used. I've searched endlessly and looked at the Python wiki here, but it's not there. The closest I've come to finding an answer is here, but the field for complexity happens to be empty... Does anyone know what the answer is? Answer 1: Dig into the CPython source code and visit Objects/listobject.c; you will find the source code for the count() method there. It

Complexity of recursive factorial program

纵然是瞬间 submitted on 2019-11-29 21:32:41
What's the complexity of a recursive program to find the factorial of a number n? My hunch is that it might be O(n). Alex Martelli: If you take multiplication as O(1), then yes, O(N) is correct. However, note that multiplying two numbers of arbitrary length x is not O(1) on finite hardware: as x tends to infinity, the time needed for multiplication grows (e.g. if you use Karatsuba multiplication, it's O(x ** 1.585)). You can theoretically do better for sufficiently huge numbers with Schönhage-Strassen, but I confess I have no real-world experience with that one. x, the "length" or "number
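To make the caveat concrete, here is a sketch with arbitrary-precision integers (iterative for simplicity): the loop performs O(n) multiplications, but each multiply gets more expensive as the running product's bit length grows, so the total bit-level cost exceeds O(n) even though the multiplication count is linear.

```java
import java.math.BigInteger;

public class Factorial {
    // O(n) BigInteger multiplications; the running product's bit length
    // grows like Theta(n log n), so each multiply is no longer unit cost.
    static BigInteger factorial(int n) {
        BigInteger result = BigInteger.ONE;
        for (int i = 2; i <= n; i++) {
            result = result.multiply(BigInteger.valueOf(i));
        }
        return result;
    }
}
```

With fixed-width int or long, by contrast, each multiplication is genuinely O(1), which is the model under which the O(n) answer holds (and under which the result overflows past 20!).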

Learning efficient algorithms

怎甘沉沦 submitted on 2019-11-29 21:06:36
Up until now I've mostly concentrated on how to properly design code, making it as readable and as maintainable as possible. So I always chose to learn about the higher-level details of programming, such as class interactions, API design, etc. I never really found algorithms particularly interesting. As a result, even though I can come up with a good design for my programs, and even if I can come up with a solution to a given problem, it is rarely the most efficient. Is there a particular way of thinking about problems that helps you come up with as efficient a solution as possible,