complexity-theory

Modification of Intersection of sorted array

Submitted by ◇◆丶佛笑我妖孽 on 2019-12-08 05:32:09
Question: I came across this problem: given two sorted arrays a1 and a2, find the elements of a1 that are not present in a2. I have two approaches:
1) Hash table - O(m + n) [use when the second array is small]
2) Binary search - O(m * log n) [use when the second array is huge]
Are there any other approaches with better time complexities? Thank you.

Answer 1: Just iterate over them in parallel. Here's a JavaScript example:

    var a1 = [1, 2, 3, 4, 5, 6, 7, 9];
    var a2 = [0, 2, 4, 5, 8];
    findNotPresent…
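The findNotPresent function is cut off above; the same parallel-walk idea can be sketched in Python as follows (an illustrative sketch of the approach described in the answer, not the answer's actual code):

    def find_not_present(a1, a2):
        """Elements of sorted a1 that are absent from sorted a2.
        Walks both arrays in parallel: O(m + n) time, O(1) extra space."""
        result = []
        j = 0
        for x in a1:
            # advance in a2 until a2[j] >= x
            while j < len(a2) and a2[j] < x:
                j += 1
            if j == len(a2) or a2[j] != x:
                result.append(x)
        return result

    # With the arrays from the answer: prints [1, 3, 6, 7, 9]
    print(find_not_present([1, 2, 3, 4, 5, 6, 7, 9], [0, 2, 4, 5, 8]))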

Help calculating Big O

Submitted by 谁说我不能喝 on 2019-12-08 02:47:28
Question: I am trying to work out the correct Big-O of the following code snippet:

    s = 0
    for x in seq:
        for y in seq:
            s += x*y
        for z in seq:
            for w in seq:
                s += x-w

According to the book I got this example from (Python Algorithms), the explanation goes like this: the z-loop runs for a linear number of iterations, and it contains a linear loop, so the total complexity there is quadratic, or Θ(n^2). The y-loop is clearly Θ(n). This means that the code block inside the x-loop is Θ(n + n^2). This entire block is…
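Filling in the step the quote is cut off at (a standard fact, not the book's exact wording): the Θ(n + n^2) block runs once for each of the n values of x, so the whole snippet is Θ(n * (n + n^2)) = Θ(n^3). A small counting sketch (illustrative only) confirms the cubic growth:

    def count_updates(n):
        """Count how many times s would be updated for a sequence of length n."""
        seq = range(n)
        count = 0
        for x in seq:
            for y in seq:
                count += 1        # stands in for s += x*y
            for z in seq:
                for w in seq:
                    count += 1    # stands in for s += x-w
        return count

    for n in (10, 20, 40):
        print(n, count_updates(n))   # n^2 + n^3: grows roughly 8x per doubling of n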

Maximum weighted bipartite matching _with_ directed edges

Submitted by 非 Y 不嫁゛ on 2019-12-08 02:46:59
Question: I know various algorithms for computing the maximum weighted matching of weighted, undirected bipartite graphs (i.e. the assignment problem): for instance ... the Hungarian Algorithm, Bellman-Ford, or even the Blossom algorithm (which works for general, i.e. non-bipartite, graphs). However, how can I compute the maximum weighted matching if the edges of the bipartite graph are weighted and directed? I would appreciate pointers to algorithms with polynomial complexity or prior transformations to…
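For the undirected assignment problem the question starts from, a minimal sketch using SciPy's Hungarian-style solver looks like this (the weight matrix is made up, NumPy/SciPy availability is an assumption, and this does not address the directed-edge variant being asked about):

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # Illustrative 3x3 weight matrix: rows = left vertices, columns = right vertices.
    weights = np.array([
        [4, 1, 3],
        [2, 0, 5],
        [3, 2, 2],
    ])

    # maximize=True turns the usual min-cost assignment into maximum weighted matching.
    rows, cols = linear_sum_assignment(weights, maximize=True)
    print(list(zip(rows, cols)), weights[rows, cols].sum())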

What is the time complexity of this multiplication algorithm?

Submitted by 与世无争的帅哥 on 2019-12-08 02:36:25
Question: For the classic interview question "How do you perform integer multiplication without the multiplication operator?", the easiest answer is, of course, the following linear-time algorithm in C:

    int mult(int multiplicand, int multiplier)
    {
        int product = 0;
        for (int i = 0; i < multiplier; i++) {
            product += multiplicand;   /* repeated addition: runs `multiplier` times */
        }
        return product;
    }

Of course, there is a faster algorithm. If we take advantage of the property that bit shifting to the left is equivalent to multiplying by 2 to the power of the…
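The faster algorithm the question is leading toward is the shift-and-add (binary) method; here is a sketch in Python for non-negative integers, which runs in O(log(multiplier)) iterations (this is the standard technique, not code from the post):

    def mult_shift_add(multiplicand, multiplier):
        """Multiply two non-negative integers without '*' using shifts and adds."""
        product = 0
        while multiplier > 0:
            if multiplier & 1:          # low bit set: this power of two contributes
                product += multiplicand
            multiplicand <<= 1          # shifting left doubles the multiplicand
            multiplier >>= 1            # move on to the next bit
        return product

    print(mult_shift_add(6, 7))   # 42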

Are the two complexities O((2n + 1)!) and O(n!) equal?

Submitted by 我与影子孤独终老i on 2019-12-07 16:17:30
Question: This may be a naive question, but I am new to the concept of Big-O notation and complexity and could not find an answer to this. I am dealing with a problem for which the algorithm checks a condition (2n + 1)! times. Can I say that the complexity of the problem is O(n!), or is the complexity O((2n + 1)!)?

Answer 1: Use Stirling's approximation:

    n! ~ (n / e)^n * sqrt(2 * pi * n)

Then

    (2n + 1)! ~ ((2n + 1) / e)^(2n + 1) * sqrt(2 * pi * (2n + 1))
              >= (2n / e)^(2n) * sqrt(2 * pi * 2n)
              = 2^(2n) * (n / e)^…
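A complementary way to reach the same conclusion without Stirling (added here for clarity; it is not part of the truncated answer): the ratio of the two factorials is a product of n + 1 factors, each at least n + 1,

    (2n + 1)! / n! = (n + 1) * (n + 2) * ... * (2n + 1) >= (n + 1)^(n + 1)

which grows without bound, so no constant c satisfies (2n + 1)! <= c * n!. Hence O((2n + 1)!) and O(n!) are genuinely different classes, and a count of (2n + 1)! condition checks should be stated as O((2n + 1)!).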

How to find all brotherhood strings?

Submitted by 北城以北 on 2019-12-07 12:47:57
Question: I have a string, and a text file which contains a list of strings. We call two strings "brotherhood strings" when they're exactly the same after sorting alphabetically. For example, "abc" and "cba" are both sorted into "abc", so the original two are brotherhood strings. But "abc" and "aaa" are not. So, is there an efficient way to pick out all brotherhood strings from the text file, given the one string provided? For example, we have "abc" and a text file that reads like this:…
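One straightforward way to do this, sketched below since the example file and any answers are cut off above (the file is assumed to hold one string per line, and the path name is made up):

    from collections import Counter

    def brotherhood_strings(target, path):
        """Return the lines of the file that are 'brotherhood strings' of target,
        i.e. identical to it after sorting their characters (anagrams)."""
        signature = Counter(target)          # character counts of the given string
        matches = []
        with open(path) as f:
            for line in f:
                candidate = line.strip()
                if len(candidate) == len(target) and Counter(candidate) == signature:
                    matches.append(candidate)
        return matches

    # e.g. brotherhood_strings("abc", "strings.txt") might return ["abc", "bca", "cab"]

Comparing character-count signatures avoids re-sorting the target for every candidate, and each line is processed in time linear in its length.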

Efficiently generating all possible permutations of a linked list?

Submitted by 白昼怎懂夜的黑 on 2019-12-07 11:39:02
Question: There are many algorithms for generating all possible permutations of a given set of values. Typically, those values are represented as an array, which has O(1) random access. Suppose, however, that the elements to permute are represented as a doubly-linked list. In this case, you cannot randomly access elements in the list in O(1) time, so many permutation algorithms will experience an unnecessary slowdown. Is there an algorithm for generating all possible permutations of a linked list with…
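One way to sketch such an algorithm (an illustration, not an answer quoted from the thread): generate permutations by unlinking each remaining node in turn, recursing, and relinking it, so the list itself is only ever touched through O(1) pointer operations:

    class Node:
        def __init__(self, value):
            self.value = value
            self.prev = None
            self.next = None

    class DList:
        """Minimal sentinel-based doubly linked list with O(1) unlink/relink."""
        def __init__(self, values=()):
            self.head = Node(None)                       # sentinel
            self.head.prev = self.head.next = self.head
            for v in values:
                node = Node(v)
                node.prev, node.next = self.head.prev, self.head
                self.head.prev.next = node
                self.head.prev = node

        def nodes(self):
            cur = self.head.next
            while cur is not self.head:
                yield cur
                cur = cur.next

    def permutations(dlist, prefix=None):
        """Yield every permutation of the list's values; the list is modified
        only by O(1) unlink/relink operations and is restored on return."""
        if prefix is None:
            prefix = []
        if dlist.head.next is dlist.head:                # nothing left: emit prefix
            yield list(prefix)
            return
        for node in list(dlist.nodes()):                 # snapshot of current nodes
            node.prev.next = node.next                   # O(1) unlink
            node.next.prev = node.prev
            prefix.append(node.value)
            yield from permutations(dlist, prefix)
            prefix.pop()
            node.prev.next = node                        # O(1) relink in place
            node.next.prev = node

    for p in permutations(DList("abc")):
        print(p)    # ['a', 'b', 'c'], ['a', 'c', 'b'], ['b', 'a', 'c'], ...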

why O(1) != O(log(n)) ? for n=[integer, long, …]

Submitted by 匆匆过客 on 2019-12-07 07:55:11
Question: For example, say n = Integer.MAX_VALUE or 2^123; then log(n) is 32 or 123, i.e. a small integer. Isn't that O(1)? What is the difference? I think the reason is that O(1) is constant but O(log(n)) is not. Any other ideas?

Answer 1: If n is bounded above, then complexity classes involving n make no sense. There is no such thing as "in the limit as 2^123 approaches infinity", except in the old joke that "a pentagon approximates a circle, for sufficiently large values of 5". Generally, when analysing the…
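The arithmetic in the question, spelled out (an illustration only; the answer's point is that asymptotic classes are meaningful only when n is allowed to grow without bound):

    import math

    # Even for the largest values n takes in practice, log2(n) is a tiny constant.
    for n in (2**31 - 1, 2**63 - 1, 2**123):
        print(f"n = {n}:  log2(n) ~ {math.log2(n):.1f}")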

Time complexity of tuple in Python

Submitted by て烟熏妆下的殇ゞ on 2019-12-07 03:33:38
Question: There is a similar question about hashes (dictionaries) and lists, and there is also a good piece of info here: http://wiki.python.org/moin/TimeComplexity. But I didn't find anything about tuples. The access time of data_structure[i] is in general O(n) for a linked list and ~O(1) for a dictionary. What about a tuple? Is it O(n) like a linked list, or O(1) like an array?

Answer 1: It's O(1) for both list and tuple. They are both morally equivalent to an integer-indexed array.

Answer 2: Lists and tuples are…
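A rough way to see this empirically (an illustration, not a careful benchmark): index the last element of lists and tuples of very different sizes; the timings stay roughly flat, consistent with O(1) indexing:

    import timeit

    for size in (10, 10_000, 10_000_000):
        setup = f"t = tuple(range({size})); l = list(range({size}))"
        t_tuple = timeit.timeit(f"t[{size - 1}]", setup=setup, number=1_000_000)
        t_list = timeit.timeit(f"l[{size - 1}]", setup=setup, number=1_000_000)
        print(size, round(t_tuple, 3), round(t_list, 3))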

What is the meaning of O(M+N)?

Submitted by 孤者浪人 on 2019-12-07 03:27:28
Question: This is a basic question... but I'm thinking that O(M+N) is the same as O(max(M, N)), since the larger term should dominate as we go to infinity? Also, that would be different from O(min(M, N)), is that right? I keep seeing this notation, especially when discussing graph algorithms. For example, you routinely see O(|V| + |E|) (e.g., http://algs4.cs.princeton.edu/41undirected/).

Answer 1: Yes, O(M+N) means the same thing as O(max(M, N)). That is different from O(min(M, N)). As @Dr_Asik says, O(M+N) is…
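The equivalence stated in that answer follows from a two-sided bound (an added note, not part of the truncated answer):

    max(M, N) <= M + N <= 2 * max(M, N)

so M + N and max(M, N) differ by at most a constant factor of 2, which Big-O notation absorbs. No such constant-factor relationship ties M + N to min(M, N): with M = 1 and N growing, M + N grows without bound while min(M, N) stays at 1.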