complexity-theory

Linear complexity and quadratic complexity

≯℡__Kan透↙ submitted on 2020-01-11 06:28:06
Question: I'm just not sure... Suppose you have code that can be executed with either of the following complexities: a sequence of O(n) passes (for example, two O(n) loops in sequence), or a single O(n²) pass. The preferred version would be the one that runs in linear time. But could there be a point where the sequence of O(n) passes becomes too long and O(n²) would be preferred? In other words, is the statement C × O(n) < O(n²) always true for any constant C? Why or why not? What are the factors that would affect the
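The tradeoff in the question can be made concrete with a small sketch (the constant C = 100 and the search cutoff are arbitrary choices for illustration): C·n beats n² only once n exceeds C, so for small inputs the "quadratic" version really can be cheaper.

```python
def linear_cost(n, c=100):
    # c sequential O(n) passes over the input: total work is c * n
    return c * n

def quadratic_cost(n):
    # one O(n^2) pass
    return n * n

# c*n < n*n exactly when n > c, so the quadratic version wins for small n
# and the linear one for large n; the crossover sits just above n = c.
crossover = next(n for n in range(1, 10_000) if quadratic_cost(n) > linear_cost(n))
print(crossover)  # 101
```

So the asymptotic statement is true for every fixed C once n is large enough, but the constant matters for any concrete input size.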

Is list::size() really O(n)?

给你一囗甜甜゛ submitted on 2020-01-08 11:44:05
Question: Recently, I noticed some people mentioning that std::list::size() has linear complexity. According to some sources, this is in fact implementation-dependent, as the standard doesn't say what the complexity has to be. The comment in this blog entry says: "Actually, it depends on which STL you are using. Microsoft Visual Studio V6 implements size() as { return (_Size); }, whereas gcc (at least in versions 3.3.2 and 4.1.0) does it as { return std::distance(begin(), end()); }." The first has constant
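The two strategies the blog comment describes can be sketched with a toy linked list in Python (Node and LinkedList are illustrative stand-ins, not the actual STL code). Note that since C++11 the standard does require std::list::size() to be constant time, so the linear variant survives only in old library versions.

```python
class Node:
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt

class LinkedList:
    """Toy singly linked list illustrating the two size() strategies."""

    def __init__(self, values=()):
        self.head = None
        self._size = 0
        for v in reversed(list(values)):
            self.head = Node(v, self.head)
            self._size += 1

    def size_constant(self):
        # MSVC-style: return a stored counter -- O(1)
        return self._size

    def size_linear(self):
        # old-libstdc++-style: walk from begin() to end() -- O(n),
        # the same work std::distance does on bidirectional iterators
        n, node = 0, self.head
        while node is not None:
            n += 1
            node = node.next
        return n
```

The stored counter costs one extra word per list and a little bookkeeping on splice(), which is exactly the tradeoff the old gcc implementation avoided.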

Generalizing O(log log n) time complexity

为君一笑 submitted on 2020-01-06 08:05:24
Question: I want to generalize how we get O(log log n) time complexity. Yesterday I asked this question, in which I learned that this loop:

for (i = 1; i < n; i *= 2) { ... }

leads to O(log n) time complexity. By multiplying by 2 in each iteration, we are essentially taking the next power of 2:

For O(log n): i = i*2
1*2 = 2^1 = 2
2*2 = 2^2 = 4
4*2 = 2^3 = 8
and so on

So to get O(log log n) time complexity, I need to take the next double power (I coined the term "double power" for the sake of convenience),
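The "next double power" idea is exactly squaring the loop variable: i = i*i doubles the exponent each pass (2, 2², 2⁴, 2⁸, ...), so the loop body runs about log₂ log₂ n times. A minimal sketch:

```python
def loglog_iterations(n):
    # squaring i each pass doubles the exponent: 2, 2^2, 2^4, 2^8, ...
    # so the loop runs O(log log n) times; i must start at 2, since
    # 1*1 == 1 would never make progress
    count, i = 0, 2
    while i < n:
        i *= i
        count += 1
    return count

print(loglog_iterations(2**16))  # 4, i.e. log2(log2(65536))
```

After k iterations i equals 2^(2^k), so the loop stops once 2^k ≥ log₂ n, i.e. after ⌈log₂ log₂ n⌉ iterations.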

How to know if your Unit Test is “right-sized”?

和自甴很熟 submitted on 2020-01-06 04:20:25
Question: One thing that I've always noticed with my unit tests is that they get to be kind of verbose; seeing as they could also be not verbose enough, how do you get a sense of when your unit tests are the right size? I know a good quote for this: "Perfection is achieved, not when there is nothing left to add, but when there is nothing left to remove." - Antoine de Saint-Exupéry. Answer 1: One reason they become verbose is that they're testing multiple things. I try to make each unit test
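The "one behavior per test" advice can be illustrated with a hypothetical slugify function (the function and test names here are made up for the example): instead of one long test asserting everything, each test checks a single behavior, so a failure names exactly what broke.

```python
def slugify(title):
    # hypothetical function under test
    return title.strip().lower().replace(" ", "-")

def test_slugify_lowercases():
    assert slugify("Hello") == "hello"

def test_slugify_replaces_spaces():
    assert slugify("a b") == "a-b"

def test_slugify_strips_whitespace():
    assert slugify("  x  ") == "x"
```

A test that asserted all three behaviors at once would be shorter on the page but harder to diagnose when it fails, which is one concrete meaning of "right-sized".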

Finding the median of the merged array of two sorted arrays in O(logN)?

╄→гoц情女王★ submitted on 2020-01-04 06:45:40
Question: Referring to the solution in the MIT handout, I have tried to figure out the solution myself but have gotten stuck, and I believe I need help understanding the following points. In the function header used in the solution, MEDIAN-SEARCH(A[1..l], B[1..m], max(1, n/2 − m), min(l, n/2)), I do not understand the last two arguments: why not simply 1 and l, and why the max and min, respectively? In the pseudocode, if left > right, why do we switch the A and B arrays when we reach this condition? Thanking
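The MIT handout's MEDIAN-SEARCH pseudocode isn't reproduced here, but the same O(log n) idea can be sketched in Python: each round discards about half of the remaining candidates from one of the two arrays, and the index clamping (the role the max/min arguments play in the handout) keeps the search window inside both arrays. This is a different formulation of the same divide-and-conquer technique, not the handout's code.

```python
def kth(a, b, k):
    # k-th smallest (0-indexed) of two sorted lists; each pass discards
    # about half of the remaining prefix of one list, so it is O(log k)
    ia, ib = 0, 0
    while True:
        if ia == len(a):
            return b[ib + k]
        if ib == len(b):
            return a[ia + k]
        if k == 0:
            return min(a[ia], b[ib])
        half = (k + 1) // 2
        # clamp the probe indices so they stay inside each array,
        # analogous to max(1, n/2 - m) / min(l, n/2) in the handout
        na = min(ia + half, len(a)) - 1
        nb = min(ib + half, len(b)) - 1
        if a[na] <= b[nb]:
            k -= na - ia + 1
            ia = na + 1
        else:
            k -= nb - ib + 1
            ib = nb + 1

def median_two_sorted(a, b):
    n = len(a) + len(b)
    if n % 2:
        return kth(a, b, n // 2)
    return (kth(a, b, n // 2 - 1) + kth(a, b, n // 2)) / 2
```

The left > right switch in the handout serves the same purpose as the clamping branches above: once the window in one array is exhausted, the search continues in the other.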

Algorithm Complexity and Efficiency, Exponential operation java

牧云@^-^@ submitted on 2020-01-04 06:35:41
Question: I have a list of strings. I have a set of numbers, {1, 2, 3, 4}, and I need to generate all combinations(?) (strings) to check against my list. Combinations: (1, 2, 3, 4), (1234), (123, 4), (12, 34), (1, 2, 34), (1, 234), (1, 23, 4), (1, 23), (1, 2, 3), (1, 2), ((1, 2), (3, 4))... etc. This problem grows larger as my set of numbers gets larger. Is it right that this is a bad problem to use recursion for? (That is what I have now.) However, aren't the space requirements stricter for an
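For the flat groupings (ignoring nested ones like ((1, 2), (3, 4))), each grouping corresponds to choosing a subset of the n−1 gaps between adjacent digits, so there are exactly 2^(n−1) of them; that exponential count, not the use of recursion, is what makes the problem blow up. A non-recursive sketch:

```python
from itertools import combinations

def contiguous_groupings(s):
    # every subset of the n-1 gaps between adjacent characters gives one
    # grouping, so there are 2**(n-1) groupings in total
    n = len(s)
    out = []
    for r in range(n):
        for cuts in combinations(range(1, n), r):
            bounds = (0, *cuts, n)
            out.append(tuple(s[bounds[i]:bounds[i + 1]]
                             for i in range(len(bounds) - 1)))
    return out

print(len(contiguous_groupings("1234")))  # 8, i.e. 2**(4-1)
```

Recursion versus iteration only changes the constant factors and stack usage; either way, simply enumerating the output costs Ω(2^n).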

Complexity of STL max_element

微笑、不失礼 submitted on 2020-01-04 04:07:07
Question: According to this page: http://www.cplusplus.com/reference/algorithm/max_element/, the max_element function is O(n), apparently for all STL containers. Is this correct? Shouldn't it be O(log n) for a set (implemented as a binary tree)? On a somewhat related note, I've always used cplusplus.com for questions which are easier to answer, but I would be curious what others think of the site. Answer 1: It's linear because it touches every element. It's pointless to even use it on a set or other
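The answer's point is that std::max_element works on arbitrary iterator ranges, so it cannot exploit a set's ordering; an ordered container already holds its maximum at the end. A Python sketch of the same distinction, using a sorted list as a stand-in for a balanced-tree set:

```python
values = [5, 1, 9, 3]

def max_by_scan(xs):
    # what max_element does: compare every element, Theta(n),
    # because it assumes nothing about the ordering of the range
    best = xs[0]
    for x in xs[1:]:
        if x > best:
            best = x
    return best

assert max_by_scan(values) == 9

# an ordered structure already knows its maximum: read the last element.
# O(1) here; O(log n) in a real balanced tree (e.g. *s.rbegin() in C++)
ordered = sorted(values)
assert ordered[-1] == 9
```

So the O(n) bound is correct for max_element itself; the right fix for a std::set is not a faster max_element but simply not calling it.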

Worst Case Time Complexity for an algorithm

孤街醉人 submitted on 2020-01-04 02:19:09
Question: What is the worst-case time complexity T(n)? I'm reading this book about algorithms, and as an example of how to get T(n) for an algorithm like selection sort: selectionSort(A[0..n-1]) // sorts a given array by selection sort // input: an array A[0..n-1] of orderable elements // output: array A[0..n-1] sorted in ascending order. Let me write the pseudocode:

for i <- 0 to n-2 do
    min <- i
    for j <- i+1 to n-1 do
        if A[j] < A[min]
            min <- j
    swap A[i] and A[min]
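For selection sort the worst case and the best case coincide: the inner loop always runs in full, so the comparison count is Σ_{i=0}^{n−2} (n−1−i) = n(n−1)/2 ∈ Θ(n²) regardless of the input. A direct translation of the pseudocode above that also counts the comparisons:

```python
def selection_sort(a):
    # sorts a in place and returns the number of element comparisons,
    # which is always n(n-1)/2 whatever the initial order
    comparisons = 0
    n = len(a)
    for i in range(n - 1):
        m = i
        for j in range(i + 1, n):
            comparisons += 1
            if a[j] < a[m]:
                m = j
        a[i], a[m] = a[m], a[i]
    return comparisons

data = [5, 2, 4, 1, 3]
print(selection_sort(data))  # 10, i.e. 5*4/2
print(data)                  # [1, 2, 3, 4, 5]
```

Feeding it an already-sorted or a reversed array gives the same count, which is why T(n) here is usually stated without the "worst case" qualifier.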

Time complexity versus space complexity in Turing machines

戏子无情 submitted on 2020-01-03 20:59:23
Question: I think the definitions of time complexity and space complexity for Turing machines are identical, and I can't differentiate between them. Please help me. Thanks. Answer 1: With regard to a Turing machine, time complexity is a measure of how many times the tape moves when the machine is started on some input. Space complexity refers to how many cells of the tape are written to when the machine runs. The time complexity of a TM is connected to its space complexity. In particular, if the space
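The distinction is easy to see in a toy simulator: time is the number of steps the machine takes, space is the number of distinct tape cells it writes. The machine below is a made-up example (it just inverts a binary string and halts on the blank) and uses about n+1 of each; in general every written cell costs at least one step, so space can never exceed time.

```python
def run_tm(transitions, tape, start="q0", halt="qH"):
    """Run a one-tape TM; return (steps, cells_written).

    transitions maps (state, symbol) -> (new_state, write_symbol, move),
    with move in {-1, 0, +1}. Steps measure time complexity; the number
    of distinct written cells measures space complexity.
    """
    cells = dict(enumerate(tape))
    state, head, steps, written = start, 0, 0, set()
    while state != halt:
        sym = cells.get(head, "_")
        state, out, move = transitions[(state, sym)]
        cells[head] = out
        written.add(head)
        head += move
        steps += 1
    return steps, len(written)

# toy machine: flip every bit, then halt on the first blank
flip = {
    ("q0", "0"): ("q0", "1", +1),
    ("q0", "1"): ("q0", "0", +1),
    ("q0", "_"): ("qH", "_", 0),
}
print(run_tm(flip, "1011"))  # (5, 5): n+1 steps, n+1 cells touched
```

A machine can also take far more steps than it uses cells (e.g. by bouncing back and forth over the same region), which is exactly why the two measures define different complexity classes.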

Computational Complexity of SIFT descriptor?

我们两清 submitted on 2020-01-03 19:22:12
Question: The SIFT descriptor is a local descriptor that was introduced by David Lowe. This descriptor can be split into multiple parts: 1. constructing a scale space; 2. LoG approximation; 3. finding keypoints; 4. getting rid of bad keypoints; 5. assigning an orientation to the keypoints; 6. generating SIFT features. So, my question is: what is the computational complexity of the SIFT descriptor? Something like O(2n + log n)? Answer 1: Here's a paper that talks exactly about this. The actual time complexity for an n by n