complexity-theory

Why does this O(N^2) algorithm run so quickly?

落爺英雄遲暮 submitted on 2019-12-13 02:41:08
Question: This algorithm is O(n^2), yet it runs in under a second. Why is it so quick?

```java
public class ScalabilityTest {
    public static void main(String[] args) {
        long oldTime = System.currentTimeMillis();
        double[] array = new double[5000000];
        for (int i = 0; i < array.length; i++) {
            for (int j = 0; j < i; j++) {
                double x = array[j] + array[i];
            }
        }
        System.out.println((System.currentTimeMillis() - oldTime) / 1000);
    }
}
```

EDIT: I modified the code to the following and now it runs very slowly.
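The likeliest explanation (the thread's answer is not included above) is JIT dead-code elimination: `x` is never read, so HotSpot is free to discard the whole nested loop. A minimal sketch of the contrast — the class name `ScalabilityTestUsed` and the `compute` helper are mine, not from the post — in which the result escapes the loop, so the quadratic work can no longer be optimized away:

```java
// Sketch: once the loop's result is actually used, the JIT cannot treat the
// O(n^2) work as dead code, and the run time behaves quadratically as expected.
public class ScalabilityTestUsed {
    static double compute(int n) {
        double[] array = new double[n];
        double sum = 0;
        for (int i = 0; i < array.length; i++) {
            for (int j = 0; j < i; j++) {
                sum += array[j] + array[i];   // result escapes the loop
            }
        }
        return sum;   // returning the sum keeps the loops live
    }

    public static void main(String[] args) {
        long oldTime = System.currentTimeMillis();
        System.out.println(compute(20000));   // all zeros, but the work must happen
        System.out.println((System.currentTimeMillis() - oldTime) + " ms");
    }
}
```

With the result consumed, even n = 20,000 takes measurable time, whereas the original "ran" n = 5,000,000 in under a second — consistent with the loop having been eliminated rather than executed.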

What does Wikipedia mean when it says the complexity of inserting an item at the end of a dynamic array is O(1) amortized?

拥有回忆 submitted on 2019-12-12 19:54:01
Question: http://en.wikipedia.org/wiki/Dynamic_array#Performance What exactly does this mean? I thought inserting at the end would be O(n), since you may have to allocate, say, twice the space of the original array, move all the items over, and only then insert the item. How is this O(1)?

Answer 1: Amortized O(1) efficiency means that the sum of the runtimes of n insertions will be O(n), even if any individual operation may take much longer. You are absolutely correct that appending an element
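The amortized claim can be checked directly by counting element copies under the doubling strategy: growing from capacity 1 up through n appends copies 1 + 2 + 4 + … < 2n elements in total, so n appends cost O(n) overall, i.e. O(1) each on average. A small sketch (class name `AmortizedAppend` is mine):

```java
// Sketch: simulate n appends to a doubling array and count every element copy.
public class AmortizedAppend {
    static long copiesFor(int n) {
        int capacity = 1;
        long copies = 0;
        for (int size = 0; size < n; size++) {
            if (size == capacity) {   // array full: double capacity, copy all elements
                copies += size;
                capacity *= 2;
            }
        }
        return copies;
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        // Total copies stay below 2n, so each append is O(1) amortized
        // even though individual appends occasionally cost O(size).
        System.out.println(copiesFor(n) + " copies for " + n + " appends");
    }
}
```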

Complexity for a nested loop with a growing inner loop

随声附和 submitted on 2019-12-12 18:11:43
Question: I am confused about the time complexity of this piece of code and the logic used to find it.

```java
void doit(int N) {
    for (int k = 1; k < N; k *= 2) {     // <---- I am guessing this runs O(log N) times
        for (int j = 1; j < k; j += 1) { // <---- I am not sure how this one works.
        }
    }
}
```

I have already tried solving it out by hand, but I still don't understand it. Thank you for your time. EDIT: Adding another question to it. Same concept, different format. void doit(int N) { int j, k; //I ended up getting
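The outer loop does run O(log N) times, but the inner loop's cost grows geometrically: it runs k − 1 times for k = 1, 2, 4, …, so the total is the series (1 − 1) + (2 − 1) + (4 − 1) + …, which sums to less than 2N. The whole thing is therefore O(N), not O(N log N). A small counter (class name `GeometricLoop` is illustrative) confirms the bound:

```java
// Sketch: count every inner-loop iteration of the nested loop in question.
public class GeometricLoop {
    static long iterations(int N) {
        long count = 0;
        for (int k = 1; k < N; k *= 2) {
            for (int j = 1; j < k; j++) {
                count++;   // one unit of inner-loop work
            }
        }
        return count;
    }

    public static void main(String[] args) {
        for (int N : new int[]{16, 1024, 1 << 20}) {
            // Each count stays below 2N: the last term of the geometric
            // series dominates, so the loop is linear in N overall.
            System.out.println("N=" + N + " -> " + iterations(N) + " iterations");
        }
    }
}
```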

Sort an array which is partially sorted

最后都变了- submitted on 2019-12-12 17:39:41
Question: I am trying to sort an array with the property that it increases up to some point, then decreases, then increases again, and so on. Is there any algorithm that can sort this in less than n log(n) by exploiting the partial order?

array example = 14, 19, 34, 56, 36, 22, 20, 7, 45, 56, 50, 32, 31, 45, ... up to n

Thanks in advance

Answer 1: Any sequence of numbers will go up and down and up and down again etc unless it is already fully sorted (May start with a
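One standard way to exploit this structure (a sketch of my own, not from the truncated answer): split the array into its monotonic runs, reverse the decreasing ones, then k-way merge the runs with a heap. For k runs this is O(n log k), which beats O(n log n) whenever the array flips direction only a few times. The class name `RunMergeSort` is illustrative:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

// Sketch: natural-run decomposition + k-way heap merge, O(n log k) for k runs.
public class RunMergeSort {
    static int[] sort(int[] a) {
        List<int[]> runs = new ArrayList<>();
        int start = 0;
        while (start < a.length) {                 // carve out maximal monotonic runs
            int end = start + 1;
            boolean increasing = end < a.length && a[end] >= a[start];
            while (end < a.length && (a[end] >= a[end - 1]) == increasing) end++;
            int[] run = Arrays.copyOfRange(a, start, end);
            if (!increasing) {                     // flip decreasing runs in place
                for (int i = 0, j = run.length - 1; i < j; i++, j--) {
                    int t = run[i]; run[i] = run[j]; run[j] = t;
                }
            }
            runs.add(run);
            start = end;
        }
        // k-way merge; heap entries are {value, runIndex, offsetWithinRun}
        PriorityQueue<int[]> heap = new PriorityQueue<>(Comparator.comparingInt(e -> e[0]));
        for (int r = 0; r < runs.size(); r++) heap.add(new int[]{runs.get(r)[0], r, 0});
        int[] out = new int[a.length];
        for (int i = 0; i < out.length; i++) {
            int[] top = heap.poll();
            out[i] = top[0];
            int[] run = runs.get(top[1]);
            if (top[2] + 1 < run.length) heap.add(new int[]{run[top[2] + 1], top[1], top[2] + 1});
        }
        return out;
    }

    public static void main(String[] args) {
        int[] a = {14, 19, 34, 56, 36, 22, 20, 7, 45, 56, 50, 32, 31, 45};
        System.out.println(Arrays.toString(sort(a)));
    }
}
```

Note the caveat from the answer still applies: in the worst case every run has length 2, k is about n/2, and O(n log k) degrades back to O(n log n).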

Is it possible to reduce the complexity and spaghetti quality of this JavaScript algorithm solution?

自闭症网瘾萝莉.ら submitted on 2019-12-12 15:27:30
Question: Problem: Create a function that sums two arguments together. If only one argument is provided, return a function that expects one argument and returns the sum. For example, addTogether(2, 3) should return 5, and addTogether(2) should return a function. Calling this returned function with a single argument then returns the sum: var sumTwoAnd = addTogether(2); sumTwoAnd(3) returns 5. If either argument isn't a valid number, return undefined. Solution should return: addTogether(2, 3)
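The problem statement asks for JavaScript, but the underlying idea is partial application. For comparison, here is the same contract sketched in Java (class name `AddTogether` is mine), where overloading plays the role of the arity check and the type system removes the "invalid number → undefined" branch entirely:

```java
import java.util.function.IntUnaryOperator;

// Sketch: the two-argument call sums immediately; the one-argument call
// returns a function awaiting the second operand (partial application).
public class AddTogether {
    static int addTogether(int a, int b) {
        return a + b;
    }

    static IntUnaryOperator addTogether(int a) {
        return b -> a + b;   // capture a, wait for b
    }

    public static void main(String[] args) {
        System.out.println(addTogether(2, 3));          // 5
        IntUnaryOperator sumTwoAnd = addTogether(2);
        System.out.println(sumTwoAnd.applyAsInt(3));    // 5
    }
}
```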

Divide and conquer - why does it work?

拟墨画扇 submitted on 2019-12-12 13:35:06
Question: I know that algorithms like mergesort and quicksort use the divide-and-conquer paradigm, but I'm wondering why it works in lowering the time complexity. Why does a "divide and conquer" algorithm usually work better than a non-divide-and-conquer one?

Answer 1: Divide and conquer works because the mathematics supports it! Consider a few divide-and-conquer algorithms: 1) Binary search: this algorithm reduces your input space by half each time. It is intuitively clear that this is better than a
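The binary-search point can be made concrete by counting probes: halving the remaining range bounds the number of comparisons by about log2(n) + 1, versus up to n for a linear scan. A sketch (class name `HalvingDemo` is mine):

```java
// Sketch: instrument binary search to count how many probes it makes.
public class HalvingDemo {
    static int steps(int[] sorted, int target) {
        int lo = 0, hi = sorted.length - 1, steps = 0;
        while (lo <= hi) {
            steps++;
            int mid = (lo + hi) >>> 1;        // overflow-safe midpoint
            if (sorted[mid] == target) return steps;
            if (sorted[mid] < target) lo = mid + 1;
            else hi = mid - 1;
        }
        return steps;   // not found; still only ~log2(n) probes
    }

    public static void main(String[] args) {
        int n = 1 << 20;                      // about a million elements
        int[] a = new int[n];
        for (int i = 0; i < n; i++) a[i] = i;
        // Roughly 20 probes instead of up to 1,000,000 for a linear scan.
        System.out.println(steps(a, n - 1) + " probes for n = " + n);
    }
}
```

This is the general shape of the argument: if each level of recursion shrinks the problem by a constant factor and costs little to combine, the recursion depth is logarithmic, and the total work collapses accordingly.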

Division operation on asymptotic notation

泪湿孤枕 submitted on 2019-12-12 11:41:57
Question: Suppose S(n) = O(f(n)) and T(n) = O(f(n)), where both bounds use the same f(n). My question is: why is S(n)/T(n) = O(1) incorrect?

Answer 1: Consider S(n) = n^2 and T(n) = n. Then both S and T are O(n^2), but S(n)/T(n) = n, which is not O(1). Here's another example: consider S(n) = sin(n) and T(n) = cos(n). Then S and T are O(1), but S(n)/T(n) = tan(n) is not O(1). This second example is important because it shows that even if you have a tight bound, the conclusion can
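The first counterexample can be made tangible (class name `RatioDemo` is mine): Big-O is only an upper bound, so S can sit at the ceiling while T sits far below it, and their ratio then grows without limit.

```java
// Sketch of the counterexample S(n) = n^2, T(n) = n (both are O(n^2)):
// the ratio S(n)/T(n) = n exceeds any fixed constant, so it is not O(1).
public class RatioDemo {
    static double ratio(long n) {
        double s = (double) n * n;   // S(n) = n^2
        double t = n;                // T(n) = n
        return s / t;
    }

    public static void main(String[] args) {
        for (long n = 10; n <= 1_000_000; n *= 100) {
            System.out.println("n=" + n + "  S(n)/T(n)=" + ratio(n));
        }
        // The printed ratios grow linearly with n, refuting S(n)/T(n) = O(1).
    }
}
```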

How to get Omega(n)

无人久伴 submitted on 2019-12-12 10:53:21
Question: I have the recurrence a(n) = n * a(n-1) + 1, with a(0) = 0. How can I get the Omega, Theta, or O notation for this without the Master Theorem? Does anyone have a good site where the explanation is covered?

Answer 1: The Master theorem doesn't even apply, so not being able to use it isn't much of a restriction. An approach that works here is to guess upper and lower bounds, and then prove these guesses by induction if the guesses are good.

a(0) = 0, a(1) = 1, a(2) = 3, a(3) = 10, a(4) = 41

A reasonable guess for
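One way to find the bound (a sketch of mine, not the truncated answer's continuation): divide the recurrence by n!, giving a(n)/n! = a(n-1)/(n-1)! + 1/n!, so a(n) = n! * (1/1! + 1/2! + ... + 1/n!). That bracketed sum lies in [1, e−1), hence n! <= a(n) < 2*n! for n >= 1, i.e. a(n) = Theta(n!). A quick check (class name `FactorialRecurrence` is mine):

```java
// Sketch: compute a(n) iteratively and verify n! <= a(n) < 2*n! for small n.
public class FactorialRecurrence {
    static long a(int n) {
        long v = 0;                          // a(0) = 0
        for (int i = 1; i <= n; i++) v = i * v + 1;
        return v;
    }

    static long factorial(int n) {
        long f = 1;
        for (int i = 2; i <= n; i++) f *= i;
        return f;
    }

    public static void main(String[] args) {
        for (int n = 1; n <= 12; n++) {      // long overflows for large n, so stay small
            long an = a(n), fn = factorial(n);
            System.out.println("a(" + n + ") = " + an + ", n! = " + fn);
            if (an < fn || an >= 2 * fn) throw new AssertionError("bound violated at n=" + n);
        }
    }
}
```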

Would this algorithm run in O(n)?

南笙酒味 submitted on 2019-12-12 10:38:26
Question: Note: this is problem 4.3 from Cracking the Coding Interview, 5th edition. Problem: given a sorted (increasing order) array, write an algorithm to create a binary search tree with minimal height. Here is my algorithm, written in Java, for this problem:

```java
public static IntTreeNode createBST(int[] array) {
    return createBST(array, 0, array.length - 1);
}

private static IntTreeNode createBST(int[] array, int left, int right) {
    if (right >= left) {
        int middle = (left + right) / 2;
        IntTreeNode root =
```
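Yes, this runs in O(n). Counting recursive calls makes it visible: each of the n elements triggers exactly one node-creating call, and each node issues two child calls, n + 1 of which hit the empty base case, for 2n + 1 calls in total with O(1) work per call. A self-contained instrumented version (the `IntTreeNode` class and the call counter are my additions for illustration):

```java
// Sketch: count the recursive calls of the minimal-height BST builder.
public class MinimalBST {
    static class IntTreeNode {
        int data;
        IntTreeNode left, right;
        IntTreeNode(int data) { this.data = data; }
    }

    static long calls = 0;

    static IntTreeNode createBST(int[] array, int left, int right) {
        calls++;
        if (right < left) return null;                 // empty range: base case
        int middle = (left + right) / 2;
        IntTreeNode root = new IntTreeNode(array[middle]);
        root.left = createBST(array, left, middle - 1);
        root.right = createBST(array, middle + 1, right);
        return root;
    }

    public static void main(String[] args) {
        int n = 1023;
        int[] sorted = new int[n];
        for (int i = 0; i < n; i++) sorted[i] = i;
        createBST(sorted, 0, n - 1);
        System.out.println(calls + " calls for n = " + n);   // 2n + 1 = 2047
    }
}
```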

What is the time complexity of java.util.HashMap class' keySet() method?

和自甴很熟 submitted on 2019-12-12 09:44:21
Question: I am trying to implement a plane sweep algorithm, and for this I need to know the time complexity of the java.util.HashMap class's keySet() method. I suspect that it is O(n log n). Am I correct? Point of clarification: I am talking about the time complexity of the keySet() method itself; iterating through the returned Set will obviously take O(n) time.

Answer 1: Actually, getting the key set is O(1) and cheap. This is because HashMap.keySet() returns the actual KeySet object associated with the HashMap. The
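The O(1) claim is easy to observe: keySet() hands back a live view of the map's keys rather than copying them, repeated calls return the same cached object, and later changes to the map show through immediately. A short sketch (class name `KeySetView` is mine):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Sketch: keySet() returns a cached live view, not an O(n) copy.
public class KeySetView {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("b", 2);

        Set<String> first = map.keySet();
        Set<String> second = map.keySet();
        System.out.println(first == second);   // true: same cached view, nothing copied

        map.put("c", 3);                       // mutate the map afterwards...
        System.out.println(first.size());      // 3: ...and the view reflects it
    }
}
```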