complexity-theory

Leibniz determinant formula complexity

[亡魂溺海] submitted on 2021-01-28 00:31:12
Question: I wrote some code that calculates the determinant of a given n×n matrix using the Leibniz formula for determinants. I am trying to figure out its complexity in O-notation. I think it should be something like: O(n!) * O(n^2) + O(n) = O(n!*n^2) or O((n+2)!). Reasoning: I think O(n!) is the complexity of permutations, O(n) the complexity of perm_parity, and O(n^2) is the multiplication of n items each iteration. This is my code:

def determinant_leibnitz(self):
    assert self.dim()[0] == self.dim()
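
As an illustration, here is a minimal sketch of what such a Leibniz-formula implementation typically looks like. The asker's class (with dim() and the rest) is not shown, so the sketch uses a plain list-of-lists matrix and its own assumed perm_parity helper; it is not the asker's actual code.

    # Minimal Leibniz-formula determinant, assuming a list-of-lists matrix.
    from itertools import permutations

    def perm_parity(perm):
        """Sign of a permutation via inversion counting: O(n^2)."""
        inversions = sum(1 for i in range(len(perm))
                           for j in range(i + 1, len(perm))
                           if perm[i] > perm[j])
        return -1 if inversions % 2 else 1

    def determinant_leibniz(matrix):
        n = len(matrix)
        total = 0
        for sigma in permutations(range(n)):   # n! permutations
            term = perm_parity(sigma)          # O(n^2) per permutation
            for i in range(n):                 # O(n) multiplications per permutation
                term *= matrix[i][sigma[i]]
            total += term
        return total

    print(determinant_leibniz([[1, 2], [3, 4]]))  # -2

With this structure the dominant cost is n! permutations times the O(n^2) parity computation, i.e. O(n! * n^2). Note that (n+2)! = n! * (n+1)(n+2) = Θ(n! * n^2), so the asker's two candidate answers describe the same growth rate.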

How efficient are Python's 'in' and 'not in' operators?

狂风中的少年 submitted on 2021-01-28 00:21:54
Question: I have a list of over 100000 values and I am iterating over these values and checking if each one is contained in another list of random values (of the same size). I am doing this by using if item[x] in randomList. How efficient is this? Does Python do some sort of hashing for each container, or is it internally doing a straight-up search of the other container to find the element I am looking for? Also, if it does this search linearly, then does it create a dictionary of the randomList and
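
For reference: in CPython, 'in' on a list is a straight linear scan (no hashing happens behind the scenes), while 'in' on a set or dict uses a hash table with average O(1) lookups, so converting the second container to a set once usually pays off. A rough sketch of the comparison (sizes are made up and smaller than the question's 100000 so the list version finishes quickly; nothing here is the asker's actual data):

    import random
    import time

    values = [random.randrange(10**6) for _ in range(10_000)]
    random_list = [random.randrange(10**6) for _ in range(10_000)]
    random_set = set(random_list)   # one-time O(n) conversion to a hashed container

    start = time.perf_counter()
    hits_list = sum(1 for v in values if v in random_list)  # O(m) scan per lookup
    t_list = time.perf_counter() - start

    start = time.perf_counter()
    hits_set = sum(1 for v in values if v in random_set)    # O(1) average per lookup
    t_set = time.perf_counter() - start

    print(hits_list == hits_set, t_list, t_set)  # same answer, very different times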

Can I check whether a bounded list contains duplicates, in linear time?

|▌冷眼眸甩不掉的悲伤 submitted on 2021-01-27 12:20:40
Question: Suppose I have an Int list whose elements are known to be bounded and the list is known to be no longer than their range, so that it is entirely possible for it not to contain duplicates. How can I test most quickly whether that is the case? I know of nubOrd. It is quite fast. We can pass our list through it and see if it becomes shorter. But the efficiency of nubOrd is still not linear. My idea is that we can trade space for time efficiency. Imperatively, we would allocate a bit field as wide as
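
The question is about Haskell, but the bit-field idea itself is language-agnostic: allocate one flag per possible value and mark values as you walk the list, which is O(n) time and O(range) space. A small sketch of that idea in Python, assuming the elements are integers in a known inclusive range [lo, hi] (names are illustrative):

    def has_duplicates_bounded(xs, lo, hi):
        seen = [False] * (hi - lo + 1)   # one flag per possible value, allocated once
        for x in xs:                     # single pass: O(len(xs))
            if seen[x - lo]:
                return True              # value already marked -> duplicate
            seen[x - lo] = True
        return False

    print(has_duplicates_bounded([3, 1, 4, 1, 5], lo=1, hi=9))  # True
    print(has_duplicates_bounded([3, 1, 4, 2, 5], lo=1, hi=9))  # False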

Asymptotic lower bounding does not work. Why?

房东的猫 submitted on 2021-01-07 03:15:04
Question: OK, so this is not a homework question obviously, but here is the thing: the bubble sort algorithm is said to be O(n^2), Ω(n). However, if we plot its time complexity as a graph (average case) and try to lower bound it, a more accurate lower bound would be Ω(n^2). Yet contextually we see that Ω(n) is correct. So why does lower bounding the algorithm's runtime not work? Answer 1: I think you're mixing up concepts: Lower bound vs upper bound: Ω(f(n)) is a lower bound, and O(f(n)) is an
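
For context, here is a sketch of the bubble sort variant that gives the Ω(n) best case: the early-exit flag stops after one pass over already-sorted input, while a reversed input still forces roughly n^2/2 comparisons. This assumes the flagged variant is the one under discussion; without the flag, even the best case is Θ(n^2).

    def bubble_sort(a):
        """Bubble sort with an early-exit flag; returns the comparison count."""
        comparisons = 0
        n = len(a)
        for i in range(n - 1):
            swapped = False
            for j in range(n - 1 - i):
                comparisons += 1
                if a[j] > a[j + 1]:
                    a[j], a[j + 1] = a[j + 1], a[j]
                    swapped = True
            if not swapped:              # a full pass with no swaps: already sorted
                break
        return comparisons

    print(bubble_sort(list(range(1000))))         # 999 comparisons: best case, Θ(n)
    print(bubble_sort(list(range(1000, 0, -1))))  # 499500 comparisons: worst case, Θ(n^2)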

Can O(N+NM) be simplified to O(NM)?

江枫思渺然 submitted on 2021-01-05 06:48:16
Question: Suppose that N and M are two parameters of an algorithm. Is the following simplification correct? O(N+NM) = O[N(1+M)] = O(NM) In other words, is it allowed to remove the constant in such a context? Answer 1: TL;DR Yes. Explanation: By the definition of Big-Oh notation, if a term inside the O(.) is provably smaller than a constant times another term for all sufficiently large values of the variable, then you can drop the smaller term. You can find a more precise definition of Big-Oh here, but
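
Spelled out for this particular case (a short LaTeX derivation, assuming N ≥ 1 and M ≥ 1 so that N ≤ NM):

    \[
      NM \;\le\; N + NM \;\le\; NM + NM \;=\; 2\,NM ,
    \]
    \[
      \text{hence } N + NM = \Theta(NM), \text{ and in particular } O(N + NM) = O(NM).
    \]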

Time Complexity of this double loop

我与影子孤独终老i submitted on 2020-12-13 04:09:45
Question: What will be the time complexity of this code? for(int i = 1 ; i <= b ; ++i ) for(int j = i ; j <= b ; j += i ) Answer 1: You can expand the loops to something like this:

i = 1 ——> 1, 2, 3, …, b    (b iterations)
i = 2 ——> 2, 4, 6, …, b    (≈ b/2 iterations)
i = 3 ——> 3, 6, 9, …, b    (≈ b/3 iterations)
i = 4 ——> 4, 8, 12, …, b   (≈ b/4 iterations)
…
i = b ——> b                (1 iteration)

This expands into a sum of the form: b + b/2 + b/3 + … + b/b = b * (1 + 1/2 + 1/3 + … + 1/b). You might recognize the second factor as the Harmonic Series. Then, using the result from the following SO
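
A quick way to check the harmonic-sum count empirically is to count the inner-loop iterations directly and compare against b * H_b, which grows like b * ln b. A small self-contained sketch (the loops are transcribed from the question's C-style code into Python):

    def count_iterations(b):
        count = 0
        i = 1
        while i <= b:          # outer loop: i = 1 .. b
            j = i
            while j <= b:      # inner loop: roughly b/i iterations
                count += 1
                j += i
            i += 1
        return count

    for b in (10, 100, 1000, 10000):
        harmonic = sum(1 / k for k in range(1, b + 1))
        print(b, count_iterations(b), round(b * harmonic))  # counts track b*H_b ≈ b*ln(b)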

How is O(N) algorithm also an O(N^2) algorithm?

荒凉一梦 submitted on 2020-12-08 06:15:20
Question: I was reading about Big-O notation. So, any algorithm that is O(N) is also O(N^2)? It seems confusing to me; I know that Big-O gives an upper bound only. But how can an O(N) algorithm also be an O(N^2) algorithm? Are there any examples where this is the case? I can't think of any. Can anyone explain it to me? Answer 1: "Upper bound" means the algorithm takes no longer than (i.e. <=) that long (as the input size tends to infinity, with relevant constant factors considered). It does not mean it will
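
To make the "upper bound" reading concrete, here is the definition written out in LaTeX and applied to this case (not part of the original answer):

    \[
      f \in O(g) \iff \exists\, c > 0,\ n_0 : \; f(n) \le c \cdot g(n) \ \text{for all } n \ge n_0 .
    \]
    \[
      \text{If } f(n) \le c\,n \text{ for } n \ge n_0, \text{ then } f(n) \le c\,n^2 \text{ for } n \ge \max(n_0, 1),
      \text{ so } f \in O(N) \implies f \in O(N^2).
    \]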