complexity-theory

Lattice paths algorithm does not finish running for 20 X 20 grid

Submitted by 痴心易碎 on 2019-12-23 17:56:37
Question: I wrote the following code in Python to solve Problem 15 from Project Euler:

    grid_size = 2

    def get_paths(node):
        global paths
        if node[0] >= grid_size and node[1] >= grid_size:
            paths += 1
            return
        else:
            if node[0] < grid_size + 1 and node[1] < grid_size + 1:
                get_paths((node[0] + 1, node[1]))
                get_paths((node[0], node[1] + 1))
        return paths

    def euler():
        print get_paths((0, 0))

    paths = 0

    if __name__ == '__main__':
        euler()

Although it runs quite well for a 2 x 2 grid, it has been running for hours for a 20 x 20 grid.
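The recursion above enumerates every one of the C(40, 20) ≈ 1.4 × 10^11 monotone paths individually, which is why it never finishes. For comparison, here is a minimal sketch (my own, not from the thread) that computes the count directly from the closed form C(2n, n):

    #include <cstdint>
    #include <iostream>

    // Number of monotone lattice paths across an n x n grid is C(2n, n).
    // Each partial product below equals the integer C(n + i, i), so the
    // division is always exact, and the 20 x 20 answer fits in 64 bits.
    std::uint64_t lattice_paths(std::uint64_t n) {
        std::uint64_t result = 1;
        for (std::uint64_t i = 1; i <= n; ++i)
            result = result * (n + i) / i;
        return result;
    }

    int main() {
        std::cout << lattice_paths(20) << "\n";  // prints 137846528820
    }

Memoizing the recursion on (row, column) would work too: only (n+1)^2 distinct nodes exist, so the table-driven version runs in O(n^2).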

Complexity of inserting sorted range into associative container

Submitted by 让人想犯罪 __ on 2019-12-23 16:26:54
Question: The standard specifies (23.4.4.2:5, etc.) that constructing any of the four ordered associative containers (map, multimap, set, multiset) from a range [first, last) shall be linear in N = last - first if the range is already sorted. For merging a range (e.g. a container of the same type) into an existing container, however, 23.2.4:8 Table 102 specifies only that inserting a range [i, j) into the container shall have complexity N log(a.size() + N), where N = distance(i, j). This would seem to…
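The one tool the standard does provide here is the hinted insert, which is amortized constant when the element lands immediately before the hint. A sketch (helper name my own) that gets O(N) for a sorted range, but only in the special case where every new key sorts after the map's current contents; in the general merge case the hints miss and each insert degrades to logarithmic, which is exactly the Table 102 bound:

    #include <map>
    #include <string>
    #include <utility>
    #include <vector>

    // Insert an ascending-sorted range using end() as the hint. Amortized
    // O(1) per element only while each key is a new maximum; O(N) overall
    // in that case, N log(a.size() + N) otherwise.
    template <class Map, class InputIt>
    void append_sorted(Map& m, InputIt first, InputIt last) {
        for (; first != last; ++first)
            m.insert(m.end(), *first);  // hinted insert
    }

    int main() {
        std::map<int, std::string> m{{1, "a"}, {2, "b"}};
        std::vector<std::pair<int, std::string>> v{{5, "x"}, {7, "y"}, {9, "z"}};
        append_sorted(m, v.begin(), v.end());
    }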

Finding the function complexity index

Submitted by 吃可爱长大的小学妹 on 2019-12-23 16:04:06
Question: I have to rate the complexity of a C file based on its number of lines. I have found the number of lines, but how do I decide whether it is a complex file or not? Based on some threshold I have to assign an index, e.g. a complexity index of 5 for high complexity. On what basis can I choose the thresholds? A rule like "more than 1000 lines means highly complex" won't apply in all cases. Is there any standard way of choosing such conditions ("more than 1000 lines")? Any kind of suggestion is welcome, except any pre-defined…
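If a purely line-based index is acceptable, the mapping itself is trivial; the hard part is justifying the cutoffs, which are project-specific. A sketch with made-up thresholds (the numbers below are hypothetical, not a standard):

    // Purely illustrative: map a line count onto a 1-5 index.
    // These cutoffs are invented for the example, not industry values.
    int complexity_index(int lines) {
        if (lines < 100)  return 1;
        if (lines < 300)  return 2;
        if (lines < 600)  return 3;
        if (lines < 1000) return 4;
        return 5;  // "high complexity"
    }

Metrics that look inside the code, such as cyclomatic complexity, are generally considered more defensible than raw line counts.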

Negative Coefficients in Polynomial time Complexity

Submitted by 梦想的初衷 on 2019-12-23 13:26:13
Question: Assuming some algorithm has a polynomial time complexity T(n), is it possible for any of the terms to have a negative coefficient? Intuitively, the answer seems like an obvious "No", since no part of an algorithm can reduce the amount of time already taken by previous steps, but I want to be certain. Answer 1: When talking about polynomial complexity, only the term with the highest degree counts. But I think you can have T(n) = n*n - n = n*(n-1). The n-1 would represent…
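A concrete way to see how such a term arises (my own example, not from the answer): a dependent nested loop executes exactly n(n-1)/2 = 0.5n^2 - 0.5n steps, a polynomial with a negative coefficient that is nevertheless non-negative for every n >= 1.

    #include <cassert>

    // Counts the steps of a dependent nested loop: exactly n*(n-1)/2,
    // i.e. 0.5*n^2 - 0.5*n -- a negative coefficient, yet never negative.
    long long count_pair_steps(long long n) {
        long long steps = 0;
        for (long long i = 0; i < n; ++i)
            for (long long j = i + 1; j < n; ++j)
                ++steps;
        return steps;
    }

    int main() {
        for (long long n = 1; n <= 50; ++n)
            assert(count_pair_steps(n) == n * (n - 1) / 2);
    }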

C++ algorithm to find 'maximal difference' in an array

Submitted by 我的未来我决定 on 2019-12-23 13:13:27
Question: I am asking for your ideas regarding this problem: I have one array A with N elements of type double (or alternatively integer). I would like to find an algorithm with complexity less than O(N²) to find max A[i] - A[j] for 1 < j <= i < n. Please notice that there is no abs(). I thought of dynamic programming, a dichotomic method (divide and conquer), some treatment after a sort, and keeping track of indices. Would you have some comments or ideas? Could you point me at some good references to train or make…
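Since j indexes an element at or before i, a single left-to-right scan suffices: for each i, the best j is the position of the running minimum. A minimal O(N) sketch (mine, not from the thread; it assumes at least two elements and takes j strictly before i, since allowing j == i merely floors the answer at 0):

    #include <algorithm>
    #include <cstddef>
    #include <limits>
    #include <vector>

    // Best A[i] - A[j] with j earlier than i, in one pass: keep the minimum
    // seen so far and test each new element against it.
    double max_difference(const std::vector<double>& a) {
        double best = -std::numeric_limits<double>::infinity();
        double min_so_far = a.front();
        for (std::size_t i = 1; i < a.size(); ++i) {
            best = std::max(best, a[i] - min_so_far);
            min_so_far = std::min(min_so_far, a[i]);
        }
        return best;
    }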

inplace_merge: What causes a complexity of N*log(N) vs. N-1?

Submitted by 馋奶兔 on 2019-12-23 12:47:39
Question: From the C++ documentation on inplace_merge, the complexity of the algorithm is "Linear in comparisons (N-1) if an internal buffer was used, N log(N) otherwise (where N is the number of elements in the range [first, last))". What do they mean by an internal buffer, and what causes a complexity of O(N-1) vs. O(N log N)? Answer 1: An internal buffer is simply a buffer allocated by the library, of sufficient size to hold the output of the merge while the merge is happening (it is copied back to the original range…
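For reference, the call itself looks like this (a minimal sketch; the buffering behavior described in the comment is what mainstream implementations typically do, not mandated wording):

    #include <algorithm>
    #include <vector>

    int main() {
        std::vector<int> v{1, 4, 8, 2, 3, 9};  // two sorted halves
        auto mid = v.begin() + 3;
        // Merges [begin, mid) and [mid, end) in place. If the library can
        // allocate a temporary buffer for one side, it does a plain linear
        // merge with N-1 comparisons; if allocation fails, it falls back to
        // a rotation-based in-place merge costing O(N log N).
        std::inplace_merge(v.begin(), mid, v.end());
        // v is now {1, 2, 3, 4, 8, 9}
    }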

Why is the Ackermann function related to the amortized complexity of the union-find algorithm used for disjoint sets?

Submitted by …衆ロ難τιáo~ on 2019-12-23 07:56:04
Question: Can anybody give me an intuitive explanation of why the Ackermann function (http://en.wikipedia.org/wiki/Ackermann_function) is related to the amortized complexity of the union-find algorithm used for disjoint sets (http://en.wikipedia.org/wiki/Disjoint-set_data_structure)? The analysis in Tarjan's data structures book isn't very intuitive. I also looked it up in Introduction to Algorithms, but it also seems too rigorous and non-intuitive. Thanks for your help! Answer 1: Applied to disjoint-set forests…
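For orientation, this is the structure being analyzed: the inverse-Ackermann amortized bound applies when union by rank and path compression are combined, as in this standard sketch (not taken from either book):

    #include <numeric>
    #include <utility>
    #include <vector>

    // Disjoint-set forest. With union by rank plus path compression, any
    // sequence of m operations on n elements costs O(m * alpha(n)), where
    // alpha is the inverse Ackermann function -- at most 4 or 5 in practice.
    struct DisjointSet {
        std::vector<int> parent, rank_;
        explicit DisjointSet(int n) : parent(n), rank_(n, 0) {
            std::iota(parent.begin(), parent.end(), 0);
        }
        int find(int x) {
            if (parent[x] != x)
                parent[x] = find(parent[x]);  // path compression
            return parent[x];
        }
        void unite(int a, int b) {
            a = find(a); b = find(b);
            if (a == b) return;
            if (rank_[a] < rank_[b]) std::swap(a, b);
            parent[b] = a;                    // union by rank
            if (rank_[a] == rank_[b]) ++rank_[a];
        }
    };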

new BigInteger(String) performance / complexity

Submitted by 落爺英雄遲暮 on 2019-12-23 07:15:08
Question: I'm wondering about the performance/complexity of constructing BigInteger objects with the new BigInteger(String) constructor. Consider the following method:

    public static void testBigIntegerConstruction() {
        for (int exp = 1; exp < 10; exp++) {
            StringBuffer bigNumber = new StringBuffer((int) Math.pow(10.0, exp));
            for (int i = 0; i < Math.pow(10.0, exp - 1); i++) {
                bigNumber.append("1234567890");
            }
            String val = bigNumber.toString();
            long time = System.currentTimeMillis();
            BigInteger bigOne = …
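The classic reason such a constructor is quadratic is schoolbook base conversion: each decimal digit requires a multiply-by-10-and-add pass over every limb produced so far. A C++ sketch of that loop (my own illustration of the technique, not Java's actual source):

    #include <cstdint>
    #include <string>
    #include <vector>

    // Schoolbook decimal-to-binary conversion: value = value * 10 + digit,
    // over little-endian 32-bit limbs. The inner pass is linear in the
    // number of limbs, which grows with the digits read, so d digits cost
    // O(d^2) overall.
    std::vector<std::uint32_t> parse_decimal(const std::string& s) {
        std::vector<std::uint32_t> limbs;  // assumes s is all ASCII digits
        for (char c : s) {
            std::uint64_t carry = static_cast<std::uint64_t>(c - '0');
            for (std::uint32_t& limb : limbs) {
                std::uint64_t t = 10ull * limb + carry;
                limb = static_cast<std::uint32_t>(t);
                carry = t >> 32;
            }
            if (carry) limbs.push_back(static_cast<std::uint32_t>(carry));
        }
        return limbs;
    }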

why do std::sort and partial_sort require random-access iterators?

Submitted by 我是研究僧i on 2019-12-23 07:12:12
Question: I was wondering why the C++ standard requires that std::sort only take random-access iterators. I don't see the advantage, since both std::sort and std::list::sort have a complexity of N log(N). Restricting std::sort to random-access iterators (RAI) seems to have made it necessary to write a separate function for lists with the same complexity. The same applies to partial_sort, where the non-RAI counterpart for list is simply missing to this day. Is this design because people…
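The usual framing is that introsort's pivoting and heap phases need O(1) jumps, which bidirectional iterators cannot provide; hence the member sort for lists, or sorting through a random-access copy. A sketch of both routes (mine, not from the thread):

    #include <algorithm>
    #include <list>
    #include <vector>

    int main() {
        std::list<int> l{3, 1, 4, 1, 5};

        l.sort();  // member merge sort: O(N log N) comparisons, no RAI needed

        // Alternative: sort through a random-access copy.
        std::vector<int> v(l.begin(), l.end());
        std::sort(v.begin(), v.end());  // requires random-access iterators
        std::copy(v.begin(), v.end(), l.begin());
    }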