complexity-theory

Observing quadratic behavior with quicksort - O(n^2)

旧城冷巷雨未停 submitted on 2019-11-27 04:46:05
Question: The quicksort algorithm has an average time complexity of O(n*log(n)) and a worst-case complexity of O(n^2). Assuming some variant of Hoare's quicksort algorithm, what kinds of input will cause the quicksort algorithm to exhibit worst-case complexity? Please state any assumptions about implementation details of the specific quicksort algorithm, such as pivot selection, etc., or whether it's sourced from a commonly available library such as libc. Some reading: "A Killer Adversary for Quicksort".
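Not part of the original question, but a minimal sketch of the classic degenerate case: a quicksort that always takes the first element as its pivot (using a simple out-of-place partition rather than Hoare's in-place scheme for brevity) performs roughly n^2/2 comparisons on already-sorted input, because every partition is maximally unbalanced.

import sys

def quicksort(a, comparisons):
    # Naive pivot choice: always the first element.
    if len(a) <= 1:
        return a
    pivot, rest = a[0], a[1:]
    comparisons[0] += len(rest)          # one comparison per element to pick its side
    smaller = [x for x in rest if x < pivot]
    larger = [x for x in rest if x >= pivot]
    return quicksort(smaller, comparisons) + [pivot] + quicksort(larger, comparisons)

sys.setrecursionlimit(10000)             # sorted input drives the recursion n levels deep
for n in (250, 500, 1000):
    comparisons = [0]
    quicksort(list(range(n)), comparisons)   # already-sorted input
    print(n, comparisons[0])                 # roughly quadruples each time n doubles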

Number of Comparisons in Merge-Sort

只愿长相守 submitted on 2019-11-27 04:43:46
Question: I was studying merge sort when I ran into the claim that the number of comparisons in merge sort (in the worst case, and according to Wikipedia) equals n⌈lg n⌉ - 2^⌈lg n⌉ + 1; in fact it's between (n lg n - n + 1) and (n lg n + n + O(lg n)). The problem is that I cannot figure out what these expressions are trying to say. I know O(n log n) is the complexity of merge sort, but the number of comparisons? Answer 1: Why count comparisons? There are basically two operations to any sorting…
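As a sanity check (my own sketch, not from the original thread), the snippet below counts comparisons in a top-down merge sort and compares the result against the formula; the worst_case helper is a hypothetical generator that distributes elements so every merge has to compare all the way to the end. With n a power of two the halves stay balanced and the formula reduces to n lg n - n + 1.

import math

def merge_sort(a, counter):
    # Top-down merge sort; counter[0] accumulates element comparisons.
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid], counter)
    right = merge_sort(a[mid:], counter)
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        counter[0] += 1                       # one comparison per merge step
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

def worst_case(sorted_vals):
    # "Unmerge": alternate elements between the halves so every merge
    # keeps comparing until both halves are nearly exhausted.
    if len(sorted_vals) <= 1:
        return list(sorted_vals)
    return worst_case(sorted_vals[::2]) + worst_case(sorted_vals[1::2])

for n in (8, 64, 1024):                       # powers of two keep the halves balanced
    counter = [0]
    merge_sort(worst_case(list(range(n))), counter)
    k = math.ceil(math.log2(n))
    print(n, counter[0], n * k - 2 ** k + 1)  # measured vs. n*ceil(lg n) - 2^ceil(lg n) + 1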

How can I interleave or create unique permutations of two strings (without recursion)

て烟熏妆下的殇ゞ submitted on 2019-11-27 04:41:18
The question is to print all possible interleavings of two given strings. So I wrote working code in Python, which runs like this:

from copy import copy

def printarr(arr):
    # Minimal stand-in for the poster's print helper: show the interleaving as one string.
    print(''.join(arr))

def inter(arr1, arr2, p1, p2, arr):
    thisarr = copy(arr)
    if p1 == len(arr1) and p2 == len(arr2):
        printarr(thisarr)
    elif p1 == len(arr1):
        thisarr.extend(arr2[p2:])
        printarr(thisarr)
    elif p2 == len(arr2):
        thisarr.extend(arr1[p1:])
        printarr(thisarr)
    else:
        thisarr.append(arr1[p1])
        inter(arr1, arr2, p1 + 1, p2, thisarr)
        del thisarr[-1]
        thisarr.append(arr2[p2])
        inter(arr1, arr2, p1, p2 + 1, thisarr)
    return

It comes at each point in a string, then for one recursive call, it considers the…
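For reference, a hypothetical invocation (not in the original post), assuming the printarr helper above prints each interleaving on its own line:

inter("ab", "cd", 0, 0, [])
# prints, in order: abcd, acbd, acdb, cabd, cadb, cdab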

Time complexity of unshift() vs. push() in JavaScript

…衆ロ難τιáo~ submitted on 2019-11-27 04:16:09
Question: I know the difference between the unshift() and push() methods in JavaScript, but I'm wondering about the difference in time complexity. I suppose push() is O(1) because you're just adding an item to the end of the array, but I'm not sure about unshift(), because you must "move" all the other existing elements forward; is that O(log n) or O(n)? Answer 1: The JavaScript language spec does not mandate the time complexity of these functions, as far as I know.
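A rough analogue rather than a JavaScript measurement (my own sketch, not from the answer): CPython's list is also a contiguous array, so list.append behaves like push (amortized constant time) while list.insert(0, x) behaves like unshift (every element shifts, so it is linear per call). Timing both makes the difference visible.

from timeit import timeit

for n in (5_000, 10_000, 20_000):
    push_like = timeit("a.append(0)", setup=f"a = [0] * {n}", number=n)
    unshift_like = timeit("a.insert(0, 0)", setup=f"a = [0] * {n}", number=n)
    # append time grows roughly linearly with n; insert(0, ...) grows roughly quadratically
    print(f"n={n}: append {push_like:.4f}s, insert(0, ...) {unshift_like:.4f}s")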

How to determine if the kth largest element of the heap is greater than x

守給你的承諾、 submitted on 2019-11-27 04:03:59
Question: Consider a binary heap containing n numbers (the root stores the greatest number). You are given a positive integer k < n and a number x. You have to determine whether the kth largest element of the heap is greater than x or not. Your algorithm must take O(k) time. You may use O(k) extra storage. Answer 1: A simple DFS can do the job. We keep a counter set to zero. Start from the root, and at each step check the value of the current node; if it is greater than x, increase the counter and…
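A sketch of that idea (my own code, with an invented example heap), using the array representation where the children of index i sit at 2i+1 and 2i+2. The kth largest element is greater than x exactly when at least k elements are greater than x, so we descend only into nodes greater than x and stop once k of them have been seen: at most k nodes get counted and each exposes at most two children, so the traversal touches O(k) nodes.

def kth_largest_greater_than(heap, k, x):
    count = 0

    def dfs(i):
        nonlocal count
        # Stop if out of the array, k elements already found, or this subtree
        # cannot contain anything greater than x (max-heap property).
        if i >= len(heap) or count >= k or heap[i] <= x:
            return
        count += 1                          # heap[i] > x
        dfs(2 * i + 1)
        dfs(2 * i + 2)

    dfs(0)
    return count >= k

# Invented example: a valid max-heap on {3, 4, 5, 6, 7, 8, 9}
heap = [9, 7, 8, 3, 5, 6, 4]
print(kth_largest_greater_than(heap, k=3, x=6))   # True:  3rd largest is 7 > 6
print(kth_largest_greater_than(heap, k=3, x=7))   # False: 3rd largest is 7, not > 7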

Programmatically obtaining Big-O efficiency of code

大城市里の小女人 submitted on 2019-11-27 03:57:30
I wonder whether there is any automatic way of determining (at least roughly) the Big-O time complexity of a given function? If I graphed an O(n) function vs. an O(n lg n) function, I think I would be able to visually ascertain which is which; I'm thinking there must be some heuristic solution which enables this to be done automatically. Any ideas? Edit: I am happy to find a semi-automated solution, just wondering whether there is some way of avoiding doing a fully manual analysis. It sounds like what you are asking for is an extension of the Halting Problem. I do not believe that such a thing…
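In the semi-automated spirit the asker mentions, here is a rough, assumption-heavy sketch (my own, not from the thread): time the function at a few input sizes and fit the slope of log(time) against log(n). A slope near 1 suggests linear growth and near 2 suggests quadratic; it cannot reliably separate O(n) from O(n log n), and it proves nothing, which is the Halting-Problem-flavoured limitation the answer alludes to.

import math, time

def estimate_power(fn, sizes):
    # Least-squares slope of log(time) vs. log(n).
    xs, ys = [], []
    for n in sizes:
        start = time.perf_counter()
        fn(n)
        xs.append(math.log(n))
        ys.append(math.log(time.perf_counter() - start))
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

linear = lambda n: sum(range(n))
quadratic = lambda n: sum(i * j for i in range(n) for j in range(n))

print(estimate_power(linear, [200_000, 400_000, 800_000]))   # slope near 1
print(estimate_power(quadratic, [200, 400, 800]))            # slope near 2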

Time complexity of Python set operations?

喜你入骨 submitted on 2019-11-27 03:42:49
What is the time complexity of each of Python's set operations in Big-O notation? I am using Python's set type for an operation on a large number of items. I want to know how each operation's performance will be affected by the size of the set. For example, add, and the test for membership:

myset = set()
myset.add('foo')
'foo' in myset

Googling around hasn't turned up any resources, but it seems reasonable that the time complexity for Python's set implementation would have been carefully considered. If it exists, a link to something like this would be great. If nothing like this is out…
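A quick empirical check (my own sketch, not an authoritative reference): membership testing in a set stays roughly constant as the container grows, while the same test against a list grows linearly, consistent with the usual hash-table expectation of average-case O(1) for add and in.

from timeit import timeit

for n in (10_000, 100_000, 1_000_000):
    as_set = set(range(n))
    as_list = list(range(n))
    t_set = timeit(lambda: (n - 1) in as_set, number=100)
    t_list = timeit(lambda: (n - 1) in as_list, number=100)   # worst case: last element
    print(f"n={n}: set {t_set:.5f}s, list {t_list:.5f}s")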

Is partitioning an array into halves with equal sums P or NP?

南笙酒味 submitted on 2019-11-27 03:39:54
Question: This was an algorithm interview question about the Partition Problem. You are given an array which consists of numbers with between 0 and 5 digits. Write a function which will return whether the array can be divided into 2 halves such that the sums of the two halves are equal. Is this an NP problem, or can it be solved by dynamic programming? "Between 0 and 5 digits" means 0 ~ 99999, I think. I found a good answer to this outside SO here. Answer 1: Both. The problem is in NP, and I'm pretty sure…
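A sketch of the dynamic-programming half of that "Both" answer (my own code, treating "two halves" as any split into two groups with equal sums, i.e. the standard partition problem): tracking the set of reachable subset sums gives an O(n * total) algorithm. That is pseudo-polynomial, i.e. polynomial in the numeric values (at most 99999 here) but exponential in their bit length, which is how the problem can be NP-complete and still easy for inputs like this.

def can_partition(nums):
    # Can nums be split into two groups with equal sums?
    total = sum(nums)
    if total % 2:
        return False
    target = total // 2
    reachable = {0}                                   # subset sums seen so far
    for v in nums:
        reachable |= {s + v for s in reachable if s + v <= target}
    return target in reachable

print(can_partition([1, 5, 11, 5]))   # True: {1, 5, 5} vs {11}
print(can_partition([1, 2, 3, 5]))    # False: total 11 is odd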

Upper bound vs lower bound for worst case running time of an algorithm

假如想象 submitted on 2019-11-27 02:44:46
Question: I am learning about analysis of algorithms. I understand the concept of the worst-case running time of an algorithm. However, what are upper and lower bounds on the worst-case running time of an algorithm? What would be an example where an upper bound for the worst-case running time of an algorithm is different from the lower bound for the worst-case running time of the same algorithm? Answer 1: For a function f(n), g(n) is an upper bound (big O) if, for "big enough n", f(n) <= c*g(n) for a constant c.
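A tiny numeric illustration of that definition (my own example, not from the answer): for f(n) = 3n^2 + 5n, the function g(n) = n^2 is an upper bound with c = 4 once n >= 5, and a lower bound with c = 3 for every n >= 1, so f(n) is both O(n^2) and Omega(n^2), i.e. Theta(n^2).

f = lambda n: 3 * n * n + 5 * n
assert all(f(n) <= 4 * n * n for n in range(5, 10_000))   # upper bound: c = 4, n0 = 5
assert all(f(n) >= 3 * n * n for n in range(1, 10_000))   # lower bound: c = 3
print("3n^2 + 5n is O(n^2) and Omega(n^2), hence Theta(n^2)")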

Why is bubble sort O(n^2)?

放肆的年华 submitted on 2019-11-27 02:39:21
Question:

int currentMinIndex = 0;
for (int front = 0; front < intArray.length; front++) {
    currentMinIndex = front;
    for (int i = front; i < intArray.length; i++) {
        if (intArray[i] < intArray[currentMinIndex]) {
            currentMinIndex = i;
        }
    }
    int tmp = intArray[front];
    intArray[front] = intArray[currentMinIndex];
    intArray[currentMinIndex] = tmp;
}

The inner loop is iterating n + (n-1) + (n-2) + (n-3) + ... + 1 times. The outer loop is iterating n times. So you get n * (the sum of the numbers 1 to n). Isn't…
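A quick counting check (my own sketch, not part of the original post): the sum n + (n-1) + ... + 1 already covers every pass of the outer loop, so the total number of inner-loop iterations is n(n+1)/2, which is O(n^2); it should not be multiplied by n again.

def inner_iterations(n):
    count = 0
    for front in range(n):            # outer loop
        for i in range(front, n):     # inner loop runs n - front times
            count += 1
    return count

for n in (10, 100, 1000):
    print(n, inner_iterations(n), n * (n + 1) // 2)   # the two counts match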