complexity-theory

Why is bubble sort O(n^2)?

Submitted by 旧时模样 on 2019-11-28 09:06:26
```java
int currentMinIndex = 0;

for (int front = 0; front < intArray.length; front++) {
    currentMinIndex = front;
    for (int i = front; i < intArray.length; i++) {
        if (intArray[i] < intArray[currentMinIndex]) {
            currentMinIndex = i;
        }
    }
    int tmp = intArray[front];
    intArray[front] = intArray[currentMinIndex];
    intArray[currentMinIndex] = tmp;
}
```

The inner loop iterates n + (n-1) + (n-2) + ... + 1 times. The outer loop iterates n times. So you get n * (the sum of the numbers 1 to n). Isn't that n * (n*(n+1)/2) = n * ((n^2 + n)/2), which would be (n^3 + n^2)/2 = O(n^3)? I am positive I
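For what it's worth, the flaw in that derivation is double-counting: the sum n + (n-1) + ... + 1 already counts every inner-loop iteration across all n passes of the outer loop, so multiplying by n again counts each pass twice. The total is n(n+1)/2 = O(n^2). A small instrumented sketch (Python; the function name and counter are mine) confirms the count:

```python
def selection_sort_comparisons(arr):
    """The sort from the question (it is selection sort, despite the
    title), instrumented to count inner-loop comparisons."""
    a = list(arr)
    comparisons = 0
    for front in range(len(a)):
        current_min = front
        for i in range(front, len(a)):  # starts at front, as in the question
            comparisons += 1
            if a[i] < a[current_min]:
                current_min = i
        a[front], a[current_min] = a[current_min], a[front]
    return a, comparisons

n = 10
result, count = selection_sort_comparisons(range(n, 0, -1))
# total inner-loop work: n + (n-1) + ... + 1 = n*(n+1)/2, i.e. O(n^2)
assert count == n * (n + 1) // 2
```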

Do iterative and recursive versions of an algorithm have the same time complexity?

Submitted by 左心房为你撑大大i on 2019-11-28 08:31:46
Say, for example, the iterative and recursive versions of the Fibonacci series. Do they have the same time complexity? The answer depends strongly on your implementation. For the example you gave there are several possible solutions, and I would say that the naive way to implement a solution has better complexity when implemented iteratively. Here are the two implementations:

```java
int iterative_fib(int n) {
    if (n <= 2) {
        return 1;
    }
    int a = 1, b = 1, c;
    for (int i = 0; i < n - 2; ++i) {
        c = a + b;
        b = a;
        a = c;
    }
    return a;
}

int recursive_fib(int n) {
    if (n <= 2) {
        return 1;
    }
    return recursive_fib(n - 1) + recursive_fib(n - 2);
}
```

Intersection complexity

Submitted by 我们两清 on 2019-11-28 08:27:24
In Python you can get the intersection of two sets doing:

```python
>>> s1 = {1, 2, 3, 4, 5, 6, 7, 8, 9}
>>> s2 = {0, 3, 5, 6, 10}
>>> s1 & s2
set([3, 5, 6])
>>> s1.intersection(s2)
set([3, 5, 6])
```

Anybody know the complexity of this intersection (&) algorithm? EDIT: In addition, does anyone know what data structure is behind a Python set? The answer appears to be a search engine query away. You can also use this direct link to the Time Complexity page at python.org. Quick summary: Average: O(min(len(s), len(t))); Worst case: O(len(s) * len(t)). EDIT: As Raymond points out below, the "worst case"
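The average case comes from the strategy CPython uses: iterate over the smaller operand and probe the larger one with hash lookups, so the work is proportional to the smaller set's size. A rough Python re-creation of that strategy (function name mine, not the real C implementation):

```python
def intersection(s, t):
    """Sketch of CPython's set-intersection strategy: iterate the smaller
    set and probe the larger, giving O(min(len(s), len(t))) average-case
    hash lookups. (The real implementation is C inside setobject.c.)"""
    if len(s) > len(t):
        s, t = t, s          # make s the smaller operand
    return {x for x in s if x in t}

s1 = {1, 2, 3, 4, 5, 6, 7, 8, 9}
s2 = {0, 3, 5, 6, 10}
```

The worst case O(len(s) * len(t)) arises only when hash collisions degrade each membership probe to a scan.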

Finding the first n largest elements in an array

Submitted by 邮差的信 on 2019-11-28 08:26:41
I have got an array containing unique elements. I need to find the first n largest elements in the array with the lowest complexity possible. The solution that I could think of so far has a complexity of O(n^2).

```c
int A[] = {1, 2, 3, 8, 7, 5, 3, 4, 6};
int len = sizeof(A) / sizeof(A[0]);
int n = 4;
int B[4] = {0, 0, 0, 0};
int max = 0;
int i, j;

for (i = 0; i < len; i++) {
    if (A[i] > max)
        max = A[i];
}
B[0] = max;

for (i = 1; i < n; i++) {
    max = 0;
    for (j = 0; j < len; j++) {
        if (A[j] > max && A[j] < B[i - 1])
            max = A[j];
    }
    B[i] = max;
}
```

Please, if anyone can come up with a better solution which involves less complexity, I will be highly grateful. And I don't intend to change
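A min-heap of size n brings this down to O(N log n) over an array of N elements: keep the n largest seen so far, and evict the smallest of them whenever a bigger element arrives. Python's `heapq.nlargest` uses essentially this approach; here is a hand-rolled sketch (naming mine):

```python
import heapq

def first_n_largest(arr, n):
    """O(N log n): maintain a min-heap holding the n largest elements
    seen so far; heap[0] is the smallest of the current candidates."""
    heap = []
    for x in arr:
        if len(heap) < n:
            heapq.heappush(heap, x)
        elif x > heap[0]:
            heapq.heapreplace(heap, x)  # pop smallest candidate, push x
    return sorted(heap, reverse=True)

A = [1, 2, 3, 8, 7, 5, 3, 4, 6]
```

(If n is close to N, sorting the whole array in O(N log N) is just as good; the heap only wins when n is small relative to N.)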

Algorithmic complexity of PHP function strlen()

Submitted by 爷，独闯天下 on 2019-11-28 07:00:08
Question: Recently I was asked this question in an interview and I didn't know how to answer it. Can anyone answer this question and describe it?

Answer 1: O(1), since the length is stored as an attribute (source). However, this trivia is worth countering with a discussion about micro-optimisation theatre, as kindly provided by our hosts here and here; read those two links and you'll find a good talking point to change the momentum of the conversation the next time similar questions come up, regardless of whether you

Lower bound on heapsort?

Submitted by 泄露秘密 on 2019-11-28 06:58:29
It's well-known that the worst-case runtime for heapsort is Ω(n lg n), but I'm having trouble seeing why. In particular, the first step of heapsort (building a max-heap) takes time Θ(n). This is then followed by n heap deletions. I understand why each heap deletion takes time O(lg n): rebalancing the heap involves a bubble-down operation whose cost is O(h) in the height h of the heap, and h = O(lg n). However, what I don't see is why this second step should take Ω(n lg n). It seems like any individual heap dequeue wouldn't necessarily cause the node moved to the top to bubble all the way
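One way to answer without exhibiting a bad input: heapsort is a comparison sort, and any comparison sort needs Ω(n lg n) comparisons in the worst case to distinguish the n! possible orderings. Since building the heap costs only Θ(n), the deletion phase must account for Ω(n lg n) on some input. The direct counting argument reaches the same bound: even counting only the first n/2 dequeues, each operates on a heap still holding at least n/2 elements, whose height is at least lg(n/2), and for suitable worst-case inputs the bubble-down really does traverse the full height each time, so

```latex
\sum_{i = n/2}^{n} \Theta(\lg i) \;\ge\; \frac{n}{2}\,\lg\frac{n}{2} \;=\; \Omega(n \lg n).
```

So no single dequeue needs to be expensive; it is enough that a constant fraction of them are Θ(lg n) on some input.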

algorithm to find longest non-overlapping sequences

Submitted by 亡梦爱人 on 2019-11-28 06:55:40
I am trying to find the best way to solve the following problem; by best I mean least complex. The input is a list of tuples (start, length), such as:

[(0,5), (0,1), (1,9), (5,5), (5,7), (10,1)]

Each element represents a sequence by its start and length; for example, (5,7) is equivalent to the sequence (5, 6, 7, 8, 9, 10, 11) — a list of 7 elements starting with 5. One can assume that the tuples are sorted by the start element. The output should return a non-overlapping combination of tuples that represents the longest continuous sequence(s). This means that a solution is a subset of ranges with no overlaps
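One low-complexity approach (a sketch of my own, not taken from the thread) is dynamic programming keyed on end positions: a range (start, length) can either begin a new chain or extend any chain that ends exactly at `start`, since that is what "non-overlapping and continuous" means here. With the tuples already sorted by start, a single pass suffices:

```python
def longest_runs(tuples):
    """best[p] holds the maximum total length of a chain of
    non-overlapping, contiguous ranges ending exactly at position p.
    Assumes the input is sorted by start, as the question states."""
    best = {}
    for start, length in tuples:
        end = start + length
        # extend a chain ending at `start`, or start a fresh chain
        cand = best.get(start, 0) + length
        if cand > best.get(end, 0):
            best[end] = cand
    return max(best.values())
```

On the example input this finds 12: the chain (0,5) + (5,7) covers 0 through 11. Recovering the chains themselves just needs a back-pointer alongside each `best` entry.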

Constructing efficient monad instances on `Set` (and other containers with constraints) using the continuation monad

Submitted by 主宰稳场 on 2019-11-28 05:48:14
Set, similarly to [], has perfectly well-defined monadic operations. The problem is that they require the values to satisfy an Ord constraint, so it's impossible to define return and >>= without any constraints. The same problem applies to many other data structures that require some kind of constraint on their possible values. The standard trick (suggested to me in a haskell-cafe post) is to wrap Set in the continuation monad. ContT doesn't care whether the underlying type functor has any constraints. The constraints become needed only when wrapping/unwrapping Sets into/from continuations: import

Time Complexity of two for loops [duplicate]

Submitted by 与世无争的帅哥 on 2019-11-28 05:25:24
This question already has answers here: How to find time complexity of an algorithm (9 answers); What is a plain English explanation of "Big O" notation? (39 answers)

So I know that the time complexity of:

```java
for (i; i < x; i++) {
    for (y; y < x; y++) {
        // code
    }
}
```

is n^2, but would:

```java
for (i; i < x; i++) {
    // code
}
for (y; y < x; y++) {
    // code
}
```

be n + n? Since big-O notation is not about comparing absolute complexity, but only relative complexity, O(n+n) is in fact the same as O(n). Each time you double x, your code takes twice as long as it did before, and that means O(n). Whether your code runs through 2, 4 or 20 loops doesn
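A quick empirical check (Python; the counters are mine) makes the difference tangible: the nested loops do x·x units of work, the sequential ones x + x = 2x, and the constant factor 2 is exactly what big-O discards:

```python
def nested(x):
    """Two nested loops: x * x iterations -> O(n^2)."""
    steps = 0
    for _ in range(x):
        for _ in range(x):
            steps += 1
    return steps

def sequential(x):
    """Two loops one after the other: x + x iterations -> O(n)."""
    steps = 0
    for _ in range(x):
        steps += 1
    for _ in range(x):
        steps += 1
    return steps

assert nested(100) == 10_000      # doubling x quadruples this
assert sequential(100) == 200     # doubling x only doubles this
```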

Explanation of Algorithm for finding articulation points or cut vertices of a graph

Submitted by 谁都会走 on 2019-11-28 04:32:18
I have searched the net and could not find any explanation of a DFS algorithm for finding all articulation vertices of a graph; there is not even a wiki page. From reading around, I got to know the basic facts from here (PDF). There is a variable at each node which is effectively looking at back edges and finding the closest and uppermost node towards the root node; after processing all edges it would be found. But I do not understand how to find this "down & up" variable at each node during the execution of DFS. What is this variable doing exactly? Please explain the algorithm. Thanks. Ashish Negi
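For completeness, here is a sketch of the standard Hopcroft–Tarjan DFS in Python (my transcription; the parent-skip test assumes no parallel edges). The "up" variable the question asks about is conventionally called low[v]: the smallest discovery time reachable from v's subtree using tree edges plus at most one back edge. A non-root vertex u is an articulation point iff some child v has low[v] >= disc[u], i.e. v's subtree cannot climb above u:

```python
def articulation_points(graph):
    """graph: dict mapping vertex -> list of neighbours (undirected).
    Returns the set of articulation (cut) vertices."""
    disc, low, points = {}, {}, set()
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in graph[u]:
            if v == parent:
                continue
            if v in disc:
                # back edge: u can climb to an ancestor discovered at disc[v]
                low[u] = min(low[u], disc[v])
            else:
                # tree edge: recurse, then inherit the child's best climb
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                # child subtree cannot bypass u -> removing u disconnects it
                if parent is not None and low[v] >= disc[u]:
                    points.add(u)
        # the root is a cut vertex iff it has two or more DFS children
        if parent is None and children >= 2:
            points.add(u)

    for u in graph:
        if u not in disc:
            dfs(u, None)
    return points
```

So low[v] is not known while descending; it is finalized on the way back up, as the minimum over v's own back edges and its children's low values, which is why the check happens after the recursive call returns.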