complexity-theory

Computational complexity of TreeSet operations in Java?

≯℡__Kan透↙ · Submitted on 2019-12-30 06:15:20
Question: I am trying to clear up some things regarding the complexity of some of the operations of TreeSet. The javadoc says: "This implementation provides guaranteed log(n) time cost for the basic operations (add, remove and contains)." So far so good. My question is what happens with addAll(), removeAll(), etc. Here the javadoc for Set says: "If the specified collection is also a set, the addAll operation effectively modifies this set so that its value is the union of the two sets." Is it just
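A sketch of what that wording implies for cost, assuming addAll on a non-empty TreeSet falls back to one add per element of the argument, each O(log(n+m)), for O(m log(n+m)) overall (the class name TreeSetUnion and method union are illustrative, not from the JDK):

```java
import java.util.TreeSet;

public class TreeSetUnion {
    // Builds two disjoint TreeSets of size n each and unions them.
    // On a non-empty target, addAll is effectively: for (e : b) a.add(e),
    // i.e. n repeated O(log) insertions rather than an O(n + m) merge.
    public static TreeSet<Integer> union(int n) {
        TreeSet<Integer> a = new TreeSet<>();
        TreeSet<Integer> b = new TreeSet<>();
        for (int i = 0; i < n; i++) {
            a.add(2 * i);     // evens
            b.add(2 * i + 1); // odds
        }
        a.addAll(b);
        return a;
    }

    public static void main(String[] args) {
        System.out.println(union(1000).size()); // 2000
    }
}
```

Note that TreeSet does have a special linear-time path when the target set is empty and the source is a sorted set with a compatible comparator, so the repeated-add cost only applies to the general case.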

Code complexity analysis tools beyond cyclomatic complexity

試著忘記壹切 · Submitted on 2019-12-30 04:05:33
Question: While cyclomatic complexity is a worthwhile metric, I tend to find it a poor tool for identifying hard-to-maintain code. In particular, it tends to highlight only certain types of code (e.g. parsers) and misses difficult recursion, threading, and coupling problems, as well as many of the anti-patterns that have been defined. What other tools are available to identify problematic Java code? Note, we already use PMD and FindBugs, which I believe are great for method-level

Prove that building a binary max-heap takes at most (2N-2) comparisons

微笑、不失礼 · Submitted on 2019-12-29 01:44:08
Question: I am trying to prove that for binary heaps, buildHeap does at most (2N-2) comparisons between elements. I find it very difficult to prove this claim. Answer 1: The build-heap algorithm starts at the midpoint and moves items down as required. Consider a heap of 127 items (7 levels). In the worst case:
- 64 nodes (the leaf level) don't move at all
- 32 nodes move down one level
- 16 nodes move down two levels
- 8 nodes move down three levels
- 4 nodes move down four levels
- 2 nodes move down five levels
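The counting argument above can be checked empirically. This sketch (the class name BuildHeapCount is made up) counts element comparisons during a bottom-up max-heap build, charging one comparison for picking the larger child and one for the parent-vs-child test, and verifies the total stays within 2N-2:

```java
public class BuildHeapCount {
    static long comparisons;

    // Bottom-up build-max-heap over a[0..n-1], counting element comparisons.
    static void buildHeap(int[] a) {
        comparisons = 0;
        for (int i = a.length / 2 - 1; i >= 0; i--) siftDown(a, i, a.length);
    }

    static void siftDown(int[] a, int i, int n) {
        while (2 * i + 1 < n) {
            int child = 2 * i + 1;
            if (child + 1 < n) {        // one comparison to pick the larger child
                comparisons++;
                if (a[child + 1] > a[child]) child++;
            }
            comparisons++;              // one comparison of parent vs. that child
            if (a[i] >= a[child]) return;
            int t = a[i]; a[i] = a[child]; a[child] = t;
            i = child;
        }
    }

    // Ascending input is a bad case for a max-heap: small roots sift all the way down.
    public static long countFor(int n) {
        int[] a = new int[n];
        for (int i = 0; i < n; i++) a[i] = i;
        buildHeap(a);
        return comparisons;
    }

    public static void main(String[] args) {
        for (int n : new int[]{127, 1023, 100000}) {
            long c = countFor(n);
            System.out.println(n + ": " + c + " comparisons (bound " + (2L * n - 2) + ")");
        }
    }
}
```

The bound follows because a node at height h pays at most 2h comparisons, and the node heights across a heap sum to at most N-1, giving at most 2(N-1) = 2N-2 in total.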

Why does heap sort have a space complexity of O(1)?

試著忘記壹切 · Submitted on 2019-12-28 09:13:10
Question: I understand that both quick sort and merge sort need O(n) auxiliary space for the temporary sub-arrays that are constructed, and that in-place quick sort requires O(log n) auxiliary space for the recursive stack frames. But for heap sort, it seems like it also has a worst case of O(n) auxiliary space to build the temporary heap, even if the nodes are just pointers to the actual elements. I came across this explanation: Only O(1) additional space is required because the heap is built inside the
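A minimal sketch of the idea that explanation is getting at: the heap never exists as a separate structure, it is the input array itself, reinterpreted as an implicit binary tree, so the only extra storage is a handful of loop variables (the class name InPlaceHeapSort is illustrative):

```java
public class InPlaceHeapSort {
    // Sorts a[] in place using O(1) auxiliary space:
    // the max-heap lives inside a[] itself (child of i is at 2i+1, 2i+2).
    public static void sort(int[] a) {
        int n = a.length;
        // Phase 1: heapify the array bottom-up, in place.
        for (int i = n / 2 - 1; i >= 0; i--) siftDown(a, i, n);
        // Phase 2: repeatedly swap the max (root) to the end and shrink the heap.
        for (int end = n - 1; end > 0; end--) {
            int t = a[0]; a[0] = a[end]; a[end] = t;
            siftDown(a, 0, end);
        }
    }

    private static void siftDown(int[] a, int i, int n) {
        while (2 * i + 1 < n) {
            int c = 2 * i + 1;
            if (c + 1 < n && a[c + 1] > a[c]) c++;
            if (a[i] >= a[c]) return;
            int t = a[i]; a[i] = a[c]; a[c] = t;
            i = c;
        }
    }

    public static void main(String[] args) {
        int[] a = {5, 3, 8, 1, 9, 2};
        sort(a);
        System.out.println(java.util.Arrays.toString(a));
    }
}
```

No temporary heap or pointer array is ever allocated; both phases operate on regions of the one input array.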

Haskell GHC: what is the time complexity of a pattern match with N constructors?

我的梦境 · Submitted on 2019-12-28 03:46:28
Question: Let's say we have the following Haskell:

data T = T0 | T1 | T2 | ... | TN

toInt :: T -> Int
toInt t = case t of
  T0 -> 0
  T1 -> 1
  T2 -> 2
  ...
  TN -> N

What algorithm is used to perform the pattern match here? I see two options: (1) Linear search, something like if (t.tag == T0) { ... } else if (t.tag == T1) { ... } else ... (2) Binary search, which would be sensible in this specific task: searching for t.tag in the set { T0 ... T1023 }. However, where pattern matching in general has many other

How can I interleave or create unique permutations of two strings (without recursion)

半腔热情 · Submitted on 2019-12-28 02:06:39
Question: The task is to print all possible interleavings of two given strings. So I wrote working code in Python which runs like this:

def inter(arr1, arr2, p1, p2, arr):
    thisarr = copy(arr)
    if p1 == len(arr1) and p2 == len(arr2):
        printarr(thisarr)
    elif p1 == len(arr1):
        thisarr.extend(arr2[p2:])
        printarr(thisarr)
    elif p2 == len(arr2):
        thisarr.extend(arr1[p1:])
        printarr(thisarr)
    else:
        thisarr.append(arr1[p1])
        inter(arr1, arr2, p1 + 1, p2, thisarr)
        del thisarr[-1]
        thisarr.append(arr2[p2])
        inter(arr1, arr2, p1, p2 + 1, thisarr)

Programmatically obtaining Big-O efficiency of code

邮差的信 · Submitted on 2019-12-28 02:00:08
Question: I wonder whether there is any automatic way of determining (at least roughly) the Big-O time complexity of a given function. If I graphed an O(n) function vs. an O(n lg n) function, I think I would be able to visually ascertain which is which; I'm thinking there must be some heuristic solution which enables this to be done automatically. Any ideas? Edit: I am happy with a semi-automated solution; I'm just wondering whether there is some way of avoiding a fully manual analysis. Answer 1: It
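One hedged way to semi-automate this: instrument the function with an operation counter, run it at n and 2n, and estimate the exponent k in O(n^k) from the growth ratio, since f(2n)/f(n) ≈ 2^k for polynomial growth. This is a heuristic, not a proof (no general algorithm can exist, by reduction from the halting problem), and the names GrowthProbe and estimateExponent are made up for the sketch:

```java
import java.util.function.IntToLongFunction;

public class GrowthProbe {
    // A known-quadratic routine instrumented to count inner-loop steps.
    static long quadraticWork(int n) {
        long ops = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                ops++;
        return ops;
    }

    // Estimate k in O(n^k): ratio = f(2n)/f(n) ≈ 2^k, so k ≈ log2(ratio).
    public static double estimateExponent(IntToLongFunction work, int n) {
        double ratio = (double) work.applyAsLong(2 * n) / work.applyAsLong(n);
        return Math.log(ratio) / Math.log(2);
    }

    public static void main(String[] args) {
        System.out.printf("estimated exponent: %.2f%n",
                estimateExponent(GrowthProbe::quadraticWork, 1000));
    }
}
```

Counting operations rather than wall-clock time keeps the estimate deterministic; for O(n log n) growth the estimated "exponent" drifts slowly toward 1 as n grows, which is itself a usable visual signal.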

What is the complexity of this simple piece of code?

ⅰ亾dé卋堺 · Submitted on 2019-12-27 22:13:05
Question: I'm pasting this text from an ebook I have. It says the complexity is O(n^2) and also gives an explanation for it, but I fail to see how. Question: What is the running time of this code?

public String makeSentence(String[] words) {
    StringBuffer sentence = new StringBuffer();
    for (String w : words) sentence.append(w);
    return sentence.toString();
}

The answer the book gave: O(n^2), where n is the number of letters in sentence. Here's why: each time you append a string to sentence, you create
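The book's O(n^2) figure assumes each append recopies the entire sentence so far, which is what naive String concatenation with += does; StringBuffer/StringBuilder avoids that by appending into a growing internal buffer, making the loop O(n) amortized in total characters. A sketch contrasting the two copy costs (the class name AppendCost is hypothetical):

```java
public class AppendCost {
    // Characters copied if every append rebuilds the whole string,
    // as naive String += concatenation does: sum of (length so far + |w|).
    public static long naiveCopyCost(String[] words) {
        long copied = 0, len = 0;
        for (String w : words) {
            copied += len + w.length(); // old contents recopied, then w appended
            len += w.length();
        }
        return copied;
    }

    public static void main(String[] args) {
        String[] words = new String[1000];
        java.util.Arrays.fill(words, "ab");
        // ~n^2 character copies under the naive scheme...
        System.out.println(naiveCopyCost(words));
        // ...vs. a StringBuilder, which copies each character O(1) amortized times.
        StringBuilder sb = new StringBuilder();
        for (String w : words) sb.append(w);
        System.out.println(sb.length());
    }
}
```

For 1000 two-character words the naive scheme copies over a million characters to build a 2000-character result, which is the quadratic blow-up the book is describing.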

Time complexity of the program?

女生的网名这么多〃 · Submitted on 2019-12-25 18:24:30
Question:

a)
i ← 1
while i < n/4 do
    j ← 2i
    while j < n do
        j ← j + 1
    i ← i + 1

b)
i ← n
while i > 1 do
    j ← i
    while j < n do
        j ← 2j
    i ← i − 1

c)
i ← 1
while i < n do
    j ← 0
    while j ≤ i do
        j ← j + 1
    i ← 2i

Given these three programs, what would be the easiest approach to finding the time complexity of each one? I can tell that the first one is probably going to be O(n^2). But is there an easy approach to solving these consistently? I have an exam
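For an exam, summing the inner-loop iteration counts symbolically is the consistent approach, but the result can be sanity-checked by instrumenting the loops. This sketch does so for program a), whose inner loop runs n − 2i times for each i from 1 to n/4 − 1, roughly 3n²/16 steps in total, hence O(n²) (the class name LoopCount is made up):

```java
public class LoopCount {
    // Empirical step count for program a): transcribed literally, with a
    // counter incremented once per inner-loop iteration.
    public static long stepsA(int n) {
        long steps = 0;
        int i = 1;
        while (i < n / 4) {
            int j = 2 * i;
            while (j < n) {
                j++;
                steps++;
            }
            i++;
        }
        return steps;
    }

    public static void main(String[] args) {
        // For a quadratic program, doubling n should roughly quadruple the count.
        long a = stepsA(1000), b = stepsA(2000);
        System.out.println("ratio when n doubles: " + (double) b / a);
    }
}
```

The same doubling check applied to programs b) and c) would show a ratio near 2 times a log factor, consistent with their O(n log n) inner structure, which is how the empirical count complements the symbolic sum.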