time-complexity

Python list.clear complexity

你离开我真会死。 Submitted on 2020-01-04 02:35:10
Question: What is the complexity of the Python 3 method list.clear()? It is not given here: https://wiki.python.org/moin/TimeComplexity. The documentation says it is equivalent to del a[:], but I do not know the complexity of that operation either. Is it O(n) or O(1)? I took a look in listobject.c and found this:

    int PyList_ClearFreeList(void)
    {
        PyListObject *op;
        int ret = numfree;
        while (numfree) {
            op = free_list[--numfree];
            assert(PyList_CheckExact(op));
            PyObject_GC_Del(op);
        }
        return ret;
    }
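A minimal sketch of the documented equivalence between the two forms (the variable names are just illustrative; this checks behavior, not timing):

```python
# list.clear() is documented to be equivalent to del a[:].
# Both must drop a reference to each of the n elements, which is
# why clearing is O(n) rather than O(1).
a = list(range(10))
b = list(range(10))

a.clear()
del b[:]

assert a == b == []
```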

Time complexity of integer comparison in python

烈酒焚心 Submitted on 2020-01-04 02:13:04
Question: What is the time complexity of integer comparison in Python for very large integers? For example, if we compute the factorial of 1000 with two functions and then check equality, is the check O(1)?

    def fact(n):
        prod = 1
        for i in range(n):
            prod = prod * (i + 1)
        return prod

    i = fact(1000)
    j = fact(1000)

    # Complexity of this check?
    if i == j:
        print "Equal"

Answer 1: There isn't a simple answer, but the answer is nevertheless obvious ;-) That is, if two integers are in fact equal, it's impossible to know that
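For reference, the snippet ported to Python 3 (print as a function). The comments reflect how CPython compares big integers: sign and digit count first, then the digit arrays, so two equal values cost O(number of digits) to compare:

```python
def fact(n):
    prod = 1
    for i in range(n):
        prod = prod * (i + 1)
    return prod

i = fact(1000)
j = fact(1000)

# Equal big integers: CPython must walk the digit arrays,
# so this check is O(number of digits), not O(1).
assert i == j

# Integers with different digit counts can be rejected without
# inspecting the digits, so unequal sizes compare quickly.
assert i != i + 1
```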

Time complexity versus space complexity in Turing machines

戏子无情 Submitted on 2020-01-03 20:59:23
Question: I think the definitions of time complexity and space complexity for Turing machines are identical, and I can't differentiate between them. Please help me. Thanks. Answer 1: With regard to a Turing machine, time complexity is a measure of how many times the tape head moves when the machine is started on some input. Space complexity refers to how many cells of the tape are written to when the machine runs. The time complexity of a TM is connected to its space complexity. In particular, if the space
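A toy illustration of the distinction (the unary-incrementer machine below is invented for this sketch, not from the thread): each step moves the head at most once and writes at most one cell, which is why the space a machine uses can never exceed its running time:

```python
# A single-tape machine sketch: walk right past a unary input of
# '1's, then write one extra '1'. We count head moves (time) and
# distinct cells written (space).
def run_incrementer(tape):
    tape = dict(enumerate(tape))  # sparse tape, '_' means blank
    head, steps, written = 0, 0, set()
    while tape.get(head, '_') == '1':  # scan past the input
        head += 1
        steps += 1
    tape[head] = '1'                   # write one extra symbol
    written.add(head)
    steps += 1
    return steps, len(written)

steps, cells = run_incrementer('111')
assert steps == 4 and cells == 1
assert cells <= steps  # at most one new cell per step
```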

Time complexity of the program using recurrence equation

蓝咒 Submitted on 2020-01-03 07:01:07
Question: I want to find the time complexity of the following program using recurrence equations:

    int f(int x)
    {
        if (x < 1)
            return 1;
        else
            return f(x-1) + g(x);
    }

    int g(int x)
    {
        if (x < 2)
            return 1;
        else
            return f(x-1) + g(x/2);
    }

I wrote its recurrence equation and tried to solve it, but it keeps getting more complex:

    T(n) = T(n-1) + g(n) + c
         = T(n-2) + g(n-1) + g(n) + 2c
         = T(n-3) + g(n-2) + g(n-1) + g(n) + 3c
         = T(n-4) + g(n-3) + g(n-2) + g(n-1) + g(n) + 4c
         ...
    after the k-th expansion:
         = kc + g(n) + g(n-1) + g(n-2) + ... + T(n
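A Python port of the two functions (illustrative only), instrumented with a call counter so the growth of the mutual recursion can be observed empirically before solving the recurrence:

```python
# Count how many calls f/g make in total for small inputs.
calls = {'n': 0}

def f(x):
    calls['n'] += 1
    return 1 if x < 1 else f(x - 1) + g(x)

def g(x):
    calls['n'] += 1
    return 1 if x < 2 else f(x - 1) + g(x // 2)

for n in (5, 10, 15):
    calls['n'] = 0
    f(n)
    print(n, calls['n'])  # total calls grows much faster than n
```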

Big-O and Omega Notations

若如初见. Submitted on 2020-01-02 07:08:20
Question: I was reading this question on Big-O notation's definition, but I have less than 50 reputation to comment, so I hope someone can help me. My question is about this sentence: "There are many algorithms for which there is no single function g such that the complexity is both O(g) and Ω(g). For instance, insertion sort has a Big-O lower bound of O(n²) (meaning you can't find anything smaller than n²) and an Ω upper bound of Ω(n)." For large n, is O(n²) an upper bound and Ω(n) a lower bound, or
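One way to make the two bounds concrete is to count comparisons in insertion sort directly: the worst case (reversed input) performs n(n-1)/2 comparisons, matching the O(n²) upper bound, while the best case (already sorted input) performs only n-1, matching the Ω(n) lower bound:

```python
def insertion_sort_comparisons(a):
    """Sort a copy of `a` by insertion sort; return comparison count."""
    a = list(a)
    comparisons = 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            comparisons += 1
            if a[j - 1] <= a[j]:
                break            # already in place: best case exits fast
            a[j - 1], a[j] = a[j], a[j - 1]
            j -= 1
    return comparisons

n = 100
worst = insertion_sort_comparisons(range(n, 0, -1))  # reversed input
best = insertion_sort_comparisons(range(n))          # sorted input
assert worst == n * (n - 1) // 2   # the O(n^2) bound is reached
assert best == n - 1               # the Omega(n) bound is reached
```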

Alternative for recursively making recursive calculations in sqlite?

☆樱花仙子☆ Submitted on 2020-01-02 05:40:29
Question: I am currently working on a project for the iPhone that requires accessing a large amount of hierarchical data stored in a local sqlite database. One of the more common operations is calculating a roll-up status field. Right now, I do that by recursing through all the descendants of an item (which can be anywhere from 1 to n levels deep). However, this ends up requiring a LOT of SQL calls. Each sqlite call on an iPhone takes around 250ms to complete, and in the end this adds up to
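One common alternative to per-row recursion is a recursive common table expression, which computes all descendants in a single query (supported in SQLite 3.8.3 and later; older iOS builds may predate it). A minimal sketch with a hypothetical table layout, shown here in Python's sqlite3 for portability:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE items (id INTEGER PRIMARY KEY, parent INTEGER, status INTEGER)')
con.executemany('INSERT INTO items VALUES (?, ?, ?)', [
    (1, None, 0),            # root
    (2, 1, 1), (3, 1, 1),    # children of 1
    (4, 2, 1), (5, 2, 0),    # children of 2
])

# Roll-up for item 1: are all descendants complete (status = 1)?
# The recursive CTE collects every descendant in one round trip.
row = con.execute("""
    WITH RECURSIVE sub(id) AS (
        SELECT id FROM items WHERE parent = 1
        UNION ALL
        SELECT items.id FROM items JOIN sub ON items.parent = sub.id
    )
    SELECT MIN(status) FROM items WHERE id IN (SELECT id FROM sub)
""").fetchone()
print(row[0])  # 0 -> at least one descendant is incomplete
```

This replaces n round trips with one, which matters when each call costs ~250ms.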

What is the Computational Complexity of Mathematica's CylindricalDecomposition

寵の児 Submitted on 2020-01-02 03:32:07
Question: Mathematica's CylindricalDecomposition implements an algorithm known as Cylindrical Algebraic Decomposition. Wolfram MathWorld's article on Cylindrical Algebraic Decomposition says that this algorithm "becomes computationally infeasible for complicated inequalities." Can this statement be made more precise? Specifically, how do the time and space requirements relate to the degree and number of variables of the multivariate polynomials? Do they depend on other parameters? Answer 1: Tarski
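For context (stated here from general knowledge of Collins' classical analysis, not from the truncated answer): the worst-case cost of cylindrical algebraic decomposition is doubly exponential in the number of variables, which is what makes "complicated inequalities" infeasible in practice:

```latex
% Worst-case running time of Collins-style CAD for m polynomials
% of maximum degree d in n variables (constant factors elided):
\[
  T(m, d, n) \;=\; (m\,d)^{2^{O(n)}}
\]
```

So the doubly exponential dependence is on the number of variables n; for fixed n, the dependence on degree and on the number of polynomials is only polynomial.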

Find the longest continuum in the array that the sum of the values in the continuum equal to zero modulo 3

≡放荡痞女 Submitted on 2020-01-01 19:06:05
Question: I wrote code that finds the longest contiguous segment of an array whose sum is equal to zero modulo 3. For example, for the array a[] = {2, -3, 5, 7, -20, 7} we have 2 - 3 + 5 + 7 - 20 = -9, so the output is 5. My problem is the complexity: right now it is O(n^3), and a little bird whispered to me that it can be done in O(n).

    public class mmn {
        public static void main(String[] args) {
            int a[] = {2, -3, 5, 7, -20, 7};
            int r = what(a);
            System.out.println(r);
        }

        private static int f(int[] a, int low, int high) {
            int res = 0;
            for
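The O(n) idea the question hints at, sketched in Python: track prefix sums modulo 3; if two prefixes share the same remainder, the segment between them sums to 0 (mod 3), so it suffices to remember the earliest index at which each remainder appears:

```python
def longest_zero_mod3(a):
    first = {0: -1}   # remainder -> earliest prefix index
    total, best = 0, 0
    for i, x in enumerate(a):
        total = (total + x) % 3
        if total in first:
            # same remainder seen before: segment sums to 0 mod 3
            best = max(best, i - first[total])
        else:
            first[total] = i
    return best

print(longest_zero_mod3([2, -3, 5, 7, -20, 7]))  # 5
```

One pass, constant extra state (only three possible remainders), hence O(n) time and O(1) space.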

Big O complexity to merge two lists

心不动则不痛 Submitted on 2020-01-01 15:37:52
Question: Given 2 singly linked lists that are already sorted, merge the lists. Example: list1: 1 2 3 5 7, list2: 0 4 6 7 10 ---> 0 1 2 3 4 5 6 7 7 10. Apart from the fact that the solution is quite simple and there are several different implementations, with or without recursion (like this http://www.geeksforgeeks.org/merge-two-sorted-linked-lists/, see Method 3), I was wondering what the big-O complexity of this implementation would be: if one of the lists is empty, just return the other
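A sketch of the merge step, using plain Python lists as a stand-in for the linked lists: every loop iteration consumes exactly one element from one of the inputs, so the total work is O(n + m):

```python
def merge(l1, l2):
    out, i, j = [], 0, 0
    while i < len(l1) and j < len(l2):
        if l1[i] <= l2[j]:
            out.append(l1[i]); i += 1
        else:
            out.append(l2[j]); j += 1
    # if one list is exhausted (or was empty), take the rest of the other
    out.extend(l1[i:])
    out.extend(l2[j:])
    return out

print(merge([1, 2, 3, 5, 7], [0, 4, 6, 7, 10]))
# [0, 1, 2, 3, 4, 5, 6, 7, 7, 10]
```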
