complexity-theory

Difference between O(n) and O(log(n)) - which is better and what exactly is O(log(n))?

萝らか妹 submitted on 2019-11-29 20:25:39
This is my first course in data structures, and in every lecture (and TA session) we talk about O(log(n)). This is probably a dumb question, but I'd appreciate it if someone could explain to me exactly what it means!

It means that the thing in question (usually running time) scales in a manner consistent with the logarithm of its input size. Big-O notation doesn't mean an exact equation, but rather a bound. For instance, the output of the following functions is all O(n):

f(x) = 3x
g(x) = 0.5x
m(x) = x + 5

Because as you increase x, their outputs all increase linearly - if there's a 6:1 …
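
To make the bound concrete, here is a small sketch (my own illustration, not from the original answer; names like `linearSteps` are made up) that counts the steps a linear scan takes versus a binary search over the same sorted array:

```java
public class BigODemo {
    // Linear scan: the step count grows in proportion to n -> O(n).
    static int linearSteps(int[] a, int target) {
        int steps = 0;
        for (int x : a) {
            steps++;
            if (x == target) break;
        }
        return steps;
    }

    // Binary search: each step halves the remaining range -> O(log n).
    static int binarySteps(int[] a, int target) {
        int lo = 0, hi = a.length - 1, steps = 0;
        while (lo <= hi) {
            steps++;
            int mid = (lo + hi) >>> 1;
            if (a[mid] == target) break;
            else if (a[mid] < target) lo = mid + 1;
            else hi = mid - 1;
        }
        return steps;
    }

    public static void main(String[] args) {
        int n = 1 << 20; // about a million sorted elements
        int[] a = new int[n];
        for (int i = 0; i < n; i++) a[i] = i;
        System.out.println(linearSteps(a, n - 1)); // roughly n steps
        System.out.println(binarySteps(a, n - 1)); // roughly log2(n), about 20 steps
    }
}
```

On a million elements the scan needs about a million steps while the binary search needs about twenty; that repeated halving is exactly the behaviour O(log n) describes.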

When will the worst case of Merge Sort occur?

非 Y 不嫁゛ submitted on 2019-11-29 19:06:21
I know that the worst case of mergesort is O(n log n), the same as the average case. However, if the data are ascending or descending, this results in the minimum number of comparisons, and therefore mergesort becomes faster than on random data. So my question is: what kind of input data produces the maximum number of comparisons, causing mergesort to be slower?

The answer to this question says: For some sorting algorithms (e.g. quicksort), the initial order of the elements can affect the number of operations to be done. However, it doesn't make any change for mergesort, as it will have to do …
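
One standard construction for such an input (my own sketch, not necessarily what the truncated answer goes on to say): permute a sorted array so that every merge must interleave its two halves completely, forcing the maximum m - 1 comparisons per m-element merge. The sketch assumes a power-of-two length so the generator and the sorter split subarrays identically.

```java
import java.util.Arrays;

public class MergeWorst {
    static long comparisons = 0;

    // Top-down mergesort that counts element comparisons.
    static int[] mergeSort(int[] a) {
        if (a.length <= 1) return a;
        int mid = a.length / 2;
        int[] l = mergeSort(Arrays.copyOfRange(a, 0, mid));
        int[] r = mergeSort(Arrays.copyOfRange(a, mid, a.length));
        int[] out = new int[a.length];
        int i = 0, j = 0, k = 0;
        while (i < l.length && j < r.length) {
            comparisons++;
            out[k++] = (l[i] <= r[j]) ? l[i++] : r[j++];
        }
        while (i < l.length) out[k++] = l[i++];
        while (j < r.length) out[k++] = r[j++];
        return out;
    }

    // Worst-case permutation of a sorted array: send elements at even
    // positions to the left half and odd positions to the right half,
    // recursively, so every merge interleaves fully.
    static int[] worstCase(int[] sorted) {
        if (sorted.length <= 1) return sorted;
        int[] left = new int[(sorted.length + 1) / 2];
        int[] right = new int[sorted.length / 2];
        for (int i = 0; i < sorted.length; i++) {
            if (i % 2 == 0) left[i / 2] = sorted[i];
            else right[i / 2] = sorted[i];
        }
        left = worstCase(left);
        right = worstCase(right);
        int[] out = new int[sorted.length];
        System.arraycopy(left, 0, out, 0, left.length);
        System.arraycopy(right, 0, out, left.length, right.length);
        return out;
    }

    public static void main(String[] args) {
        int n = 16;
        int[] sorted = new int[n];
        for (int i = 0; i < n; i++) sorted[i] = i;

        comparisons = 0;
        mergeSort(sorted.clone());
        System.out.println("sorted input:     " + comparisons + " comparisons");

        comparisons = 0;
        mergeSort(worstCase(sorted));
        System.out.println("worst-case input: " + comparisons + " comparisons");
    }
}
```

For n = 16 the sorted input costs 32 comparisons while the constructed input hits the maximum T(n) = 2T(n/2) + n - 1, i.e. 49 — same O(n log n) bound, larger constant.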

Differences between time complexity and space complexity?

北城以北 submitted on 2019-11-29 18:47:33
I have seen that in most cases the time complexity is related to the space complexity, and vice versa. For example, in an array traversal:

for i = 1 to length(v)
    print(v[i])
endfor

Here it is easy to see that the algorithm's time complexity is O(n), but it looks to me like the space complexity is also n (also represented as O(n)?). My question: is it possible for an algorithm to have a different time complexity than space complexity?

stan0: The time and space complexities are not related to each other. They are used to describe how much space/time your algorithm takes based on the input. For …
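
A minimal counterexample (my own illustration, not from the thread): summing an array touches all n elements, so the time is O(n), but it only ever stores one running total, so the auxiliary space is O(1). Note that the input array itself is usually not counted as the algorithm's space — the asker's traversal above also uses O(1) extra space.

```java
public class SpaceVsTime {
    // Reads n elements (O(n) time) but keeps only a single accumulator
    // regardless of n (O(1) auxiliary space).
    static long sum(int[] v) {
        long total = 0;
        for (int x : v) total += x;
        return total;
    }

    public static void main(String[] args) {
        int[] v = {1, 2, 3, 4, 5};
        System.out.println(sum(v)); // prints 15
    }
}
```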

General rules for simplifying SQL statements

假如想象 submitted on 2019-11-29 18:42:10
I'm looking for some "inference rules" (similar to set-operation rules or logic rules) which I can use to reduce a SQL query in complexity or size. Does something like that exist? Any papers, any tools? Any equivalences that you found on your own? It's somewhat similar to query optimization, but not in terms of performance. To state it differently: given a (complex) query with JOINs, SUBSELECTs, and UNIONs, is it possible (or not) to reduce it to a simpler, equivalent SQL statement that produces the same result, by using some transformation rules? So, I'm looking for equivalent …
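
One concrete flavour of such a rule, sketched over a hypothetical schema purely to illustrate the kind of equivalence being asked about (this is my example, not from the thread): an IN-subquery on a unique key can be rewritten as an inner join, and vice versa.

```sql
-- Hypothetical tables 'orders' and 'customers'; the rewrite is the point.
SELECT o.id
FROM   orders o
WHERE  o.customer_id IN (SELECT c.id FROM customers c WHERE c.active = 1);

-- Equivalent, provided customers.id is unique (so the join cannot
-- duplicate order rows):
SELECT o.id
FROM   orders o
JOIN   customers c ON c.id = o.customer_id
WHERE  c.active = 1;
```

The caveat in the comment is the general pattern with such rules: each equivalence holds only under stated conditions (uniqueness, non-NULLness, set vs. bag semantics), which is why a catalogue of them reads more like relational algebra than like a style guide.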

Do you use Big-O complexity evaluation in the 'real world'?

对着背影说爱祢 submitted on 2019-11-29 16:57:30
Recently in an interview I was asked several questions related to the Big-O of various algorithms that came up in the course of the technical questions. I don't think I did very well on this... In the ten years since I took programming courses where we were asked to calculate the Big-O of algorithms, I have not had one discussion about the 'Big-O' of anything I have worked on or designed. I have been involved in many discussions with other team members and with the architects I have worked …

Time complexity of this primality testing algorithm?

点点圈 submitted on 2019-11-29 16:45:13
I have the following code which determines whether a number is prime:

public static boolean isPrime(int n) {
    boolean answer = (n > 1) ? true : false;
    for (int i = 2; i * i <= n; ++i) {
        System.out.printf("%d\n", i);
        if (n % i == 0) {
            answer = false;
            break;
        }
    }
    return answer;
}

How can I determine the big-O time complexity of this function? What is the size of the input in this case?

Think about the worst-case runtime of this function, which happens if the number is indeed prime. In that case, the loop will execute as many times as possible. Since each iteration of the loop does a constant amount of …

what is order of complexity in Big O notation?

孤者浪人 submitted on 2019-11-29 15:38:01
Hi, I am trying to understand what "order of complexity" means in terms of Big-O notation. I have read many articles and have yet to find anything explaining exactly what 'order of complexity' is, even in the useful descriptions of Big O on here.

What I already understand about Big O notation is that we are measuring the time and space complexity of an algorithm in terms of the growth of input size n. I also understand that certain sorting methods …

Generate all subset sums within a range faster than O((k+N) * 2^(N/2))?

|▌冷眼眸甩不掉的悲伤 submitted on 2019-11-29 14:37:33
Is there a way to generate all of the subset sums s1, s2, ..., sk that fall in a range [A,B] faster than O((k+N)·2^(N/2)), where k is the number of sums in [A,B]? Note that k is only known after we have enumerated all subset sums within [A,B]. I'm currently using a modified Horowitz-Sahni algorithm. For example, I first call it for the smallest sum greater than or equal to A, giving me s1. Then I call it again for the next smallest sum greater than s1, giving me s2. I repeat this until we find a sum s(k+1) greater than B. There is a lot of computation repeated between each …
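
For context, here is a minimal meet-in-the-middle baseline (my own sketch with invented names, not the asker's modified Horowitz-Sahni code, and it does not beat the stated bound): enumerate the 2^(N/2) sums of each half once, sort one side, then for every left-half sum binary-search the matching window in the right-half sums.

```java
import java.util.*;

public class SubsetSumsInRange {
    // All subset sums of nums that fall in [A, B], via meet-in-the-middle.
    static List<Long> sumsInRange(long[] nums, long A, long B) {
        int n = nums.length, h = n / 2;
        long[] left = allSums(Arrays.copyOfRange(nums, 0, h));
        long[] right = allSums(Arrays.copyOfRange(nums, h, n));
        Arrays.sort(right);
        List<Long> out = new ArrayList<>();
        for (long l : left) {
            // Right-half sums r with A <= l + r <= B form a contiguous window.
            for (int i = lowerBound(right, A - l);
                 i < right.length && l + right[i] <= B; i++) {
                out.add(l + right[i]);
            }
        }
        Collections.sort(out);
        return out;
    }

    // Sums of all 2^a.length subsets, built incrementally per bitmask.
    static long[] allSums(long[] a) {
        long[] sums = new long[1 << a.length];
        for (int mask = 1; mask < sums.length; mask++) {
            int bit = Integer.numberOfTrailingZeros(mask);
            sums[mask] = sums[mask & (mask - 1)] + a[bit];
        }
        return sums;
    }

    // Index of the first element >= target in a sorted array.
    static int lowerBound(long[] a, long target) {
        int lo = 0, hi = a.length;
        while (lo < hi) {
            int mid = (lo + hi) >>> 1;
            if (a[mid] < target) lo = mid + 1;
            else hi = mid;
        }
        return lo;
    }

    public static void main(String[] args) {
        // Subset sums of {1, 2, 4, 8} lying in [3, 6].
        System.out.println(sumsInRange(new long[]{1, 2, 4, 8}, 3, 6));
    }
}
```

This enumerates each half exactly once instead of restarting per output sum, which is where the repeated computation the asker mentions comes from; the question of going asymptotically below O((k+N)·2^(N/2)) remains open in the excerpt.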

How to count distinct values in a list in linear time?

瘦欲@ submitted on 2019-11-29 14:01:50
I can think of sorting them and then going over each element one by one, but that is O(n log n). Is there a linear method to count distinct elements in a list?

Update - distinct vs. unique: if you are looking for "unique" values (as in: if you see an element "JASON" more than once, then it is no longer unique and should not be counted), you can do that in linear time by using a HashMap ;) (the generalized / language-agnostic idea is a hash table). Each entry of a HashMap / hash table is a <KEY, VALUE> pair where the keys are unique (but there are no restrictions on their corresponding values). Step 1: …
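
The hash-table idea the answer starts to describe can be sketched as follows (a hedged illustration; the method names are mine). It covers both readings: distinct (count each value once) and the answer's stricter unique (count only values that appear exactly once). Both run in expected O(n) time, assuming well-behaved hashing.

```java
import java.util.*;

public class DistinctCount {
    // Distinct count: insert everything into a hash set, take its size.
    static int countDistinct(List<String> items) {
        return new HashSet<>(items).size();
    }

    // Unique count in the answer's sense: tally frequencies with a HashMap,
    // then count the keys whose frequency is exactly 1.
    static int countUnique(List<String> items) {
        Map<String, Integer> freq = new HashMap<>();
        for (String s : items) freq.merge(s, 1, Integer::sum);
        int unique = 0;
        for (int c : freq.values()) if (c == 1) unique++;
        return unique;
    }

    public static void main(String[] args) {
        List<String> data = Arrays.asList("JASON", "A", "B", "JASON");
        System.out.println(countDistinct(data)); // prints 3 (JASON, A, B)
        System.out.println(countUnique(data));   // prints 2 (A and B only)
    }
}
```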

Efficient Algorithms for Computing a matrix times its transpose [closed]

拈花ヽ惹草 submitted on 2019-11-29 13:52:07
Closed 4 months ago: this question needs to be more focused and is not accepting answers. For a class, a question posed by my teacher was the algorithmic cost of multiplying a matrix by its transpose. With the standard 3-loop matrix multiplication algorithm, the efficiency is O(N^3), and I wonder if there is a way to manipulate or take advantage of matrix …
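
One standard observation, sketched here as an illustration rather than a definitive answer: A·Aᵀ is symmetric, so only the entries on or above the diagonal need computing and the rest can be mirrored. The asymptotic cost stays O(N³) with the classical algorithm, but roughly half the multiply-adds disappear.

```java
public class GramMatrix {
    // Computes C = A * A^T for an n x m matrix A, filling only the upper
    // triangle and mirroring it into the lower triangle.
    static double[][] timesTranspose(double[][] a) {
        int n = a.length, m = a[0].length;
        double[][] c = new double[n][n];
        for (int i = 0; i < n; i++) {
            for (int j = i; j < n; j++) {         // upper triangle only
                double s = 0;
                for (int k = 0; k < m; k++) s += a[i][k] * a[j][k];
                c[i][j] = s;
                c[j][i] = s;                      // symmetry: C[j][i] = C[i][j]
            }
        }
        return c;
    }

    public static void main(String[] args) {
        double[][] a = {{1, 2}, {3, 4}};
        double[][] c = timesTranspose(a);
        // Row-by-row dot products: 1*1+2*2 = 5, 1*3+2*4 = 11, 3*3+4*4 = 25.
        System.out.println(c[0][0] + " " + c[0][1] + " " + c[1][1]);
    }
}
```

Note also that each inner product reads two rows of A (good cache locality), whereas a generic A·B multiplication reads a row and a column; that, rather than asymptotics, is usually where the practical win comes from.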