time-complexity

Worse is better. Is there an example?

拥有回忆 submitted on 2019-12-17 23:29:52
Question: Is there a widely-used algorithm whose time complexity is worse than that of another known algorithm, but which is the better choice in all practical situations (worse complexity but better otherwise)? An acceptable answer might take the form: there are algorithms A and B with O(N**2) and O(N) time complexity respectively, but B has such a large constant that it has no advantage over A for inputs smaller than the number of atoms in the Universe. Example highlights from the answers: Simplex
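A small-scale instance of the same pattern (not one of the answer's highlights, just an illustration): insertion sort is O(n²) against merge sort's O(n log n), yet its tiny constant factors make it the better choice for short inputs, which is why practical library sorts commonly switch to it below a size cutoff. A minimal sketch:

```python
def insertion_sort(a):
    """O(n^2) worst case, but very low constant factors and O(n)
    on nearly-sorted input -- often faster than O(n log n) sorts
    for small n, so hybrid sorts fall back to it below a cutoff."""
    a = list(a)
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:  # shift larger elements right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

insertion_sort([3, 1, 2])  # [1, 2, 3]
```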

Why is accessing any single element in an array done in constant time (O(1))?

我的梦境 submitted on 2019-12-17 23:25:46
Question: According to Wikipedia, accessing any single element in an array takes constant time, as only one operation has to be performed to locate it. To me, what happens behind the scenes probably looks like one of the following: a) The search is done linearly (e.g. to access element 5, I begin the search at index 0; if it's not equal to 5, I go to index 1, etc.). This is O(n), where n is the length of the array. b) If the array is stored as a B-tree, this would give O(log n). I see no other approach.
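The missing approach is that no search happens at all: the element's memory address is computed directly from the index. A sketch of the arithmetic (the function name and addresses are illustrative, not from the original):

```python
def element_address(base_address, index, element_size):
    """Address arithmetic behind O(1) array indexing: the location
    of element i is computed directly -- no search is performed."""
    return base_address + index * element_size

# Accessing index 5 of an array of 8-byte elements starting at 0x1000:
addr = element_address(0x1000, 5, 8)  # one multiply and one add, O(1)
```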

Separate the letters and digits such that their relative order remains the same, in O(n) time and O(1) space

為{幸葍}努か submitted on 2019-12-17 22:24:19
Question: Given an array [a1b7c3d2], convert it to [abcd1732] in O(1) space and O(n) time, i.e. put the letters on the left and the digits on the right such that their relative order is preserved. I can think of an O(n log n) algorithm, but nothing better. Can somebody please help? Answer 1: AFAIK it can't be done. This is essentially a single step of the radix sort algorithm, and AFAIK stable radix sort can't be done in place. Edit: Wikipedia agrees with me (for what that's worth): http://en.wikipedia.org/wiki/Radix_sort
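For contrast, if the O(1)-space constraint is dropped, the stable partition is trivial in O(n) time with O(n) extra space — a sketch (names are my own, not from the original):

```python
def stable_partition(s):
    """Letters first, then digits, each group keeping its relative
    order -- O(n) time but O(n) extra space, so NOT in-place."""
    letters = [c for c in s if c.isalpha()]
    digits = [c for c in s if c.isdigit()]
    return "".join(letters + digits)

stable_partition("a1b7c3d2")  # "abcd1732"
```

The hard part of the original question is keeping stability while using only constant extra space, which is exactly what the answer claims cannot be done.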

Is a default value of nullptr in a map of pointers defined behaviour?

拥有回忆 submitted on 2019-12-17 20:46:18
Question: The following code seems to always take the true branch.

```cpp
#include <map>
#include <iostream>

class TestClass {
    // implementation
};

int main() {
    std::map<int, TestClass*> TestMap;
    if (TestMap[203] == nullptr) {
        std::cout << "true";
    } else {
        std::cout << "false";
    }
    return 0;
}
```

Is it defined behaviour for an uninitialized pointer to point at nullptr, or is this an artifact of my compiler? If not, how can I ensure portability of the following code? Currently, I'm using similar logic to return the
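The behaviour is in fact defined: `std::map::operator[]` value-initializes the mapped value when the key is absent, and value-initialization of a pointer yields a null pointer. Python's `collections.defaultdict` offers a loose analogy to this default-inserting lookup (the analogy is mine, not from the original thread):

```python
from collections import defaultdict

# Loose analogy to std::map<int, TestClass*>::operator[]:
# looking up a missing key INSERTS a default value for it.
test_map = defaultdict(lambda: None)  # None plays the role of nullptr

value = test_map[203]   # key absent: inserts 203 -> None, returns None
print(value is None)    # the "true" branch of the C++ example
print(203 in test_map)  # the lookup had the side effect of inserting
```

Note the side effect in both languages: a mere lookup inserts an entry, which is why membership tests (`find`/`count` in C++, `in` here) are the right tool when insertion is not wanted.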

Do iterative and recursive versions of an algorithm have the same time complexity?

假如想象 submitted on 2019-12-17 18:44:04
Question: Say, for example, the iterative and recursive versions of the Fibonacci series. Do they have the same time complexity? Answer 1: The answer depends strongly on your implementation. For the example you gave there are several possible solutions, and I would say that the naive approach has better complexity when implemented iteratively. Here are the two implementations:

```cpp
int iterative_fib(int n) {
    if (n <= 2) {
        return 1;
    }
    int a = 1, b = 1, c;
    for (int i = 0; i < n - 2; ++i) {
        c = a + b;
        a = b;
        b = c;
    }
    return b;
}
```
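The second implementation the answer refers to was cut off in this excerpt; the naive recursive version it contrasts with presumably looks like the following sketch, which runs in exponential time because it recomputes the same subproblems over and over, whereas the iterative loop is O(n):

```python
def recursive_fib(n):
    """Naive recursive Fibonacci: exponential time (~O(phi^n)),
    since fib(n-1) and fib(n-2) independently recompute shared
    subproblems. Same function, very different complexity from
    the O(n) iterative loop."""
    if n <= 2:
        return 1
    return recursive_fib(n - 1) + recursive_fib(n - 2)

recursive_fib(10)  # 55
```

So iterative and recursive versions of "the same" algorithm need not share a time complexity; memoizing the recursion would bring it back down to O(n).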

Difference between Time Complexity and Running time

左心房为你撑大大i submitted on 2019-12-17 16:26:07
Question: Just wondering — when a question talks about the running time of an algorithm, does that mean the same thing as time complexity, or is there a difference between the two? Answer 1: Running time is how long it takes a program to run. Time complexity is a description of the asymptotic behaviour of running time as input size tends to infinity. You can say that the running time "is" O(n^2) or whatever, because that's the idiomatic way to describe complexity classes and big-O notation. In fact the

What is time complexity of a list to set conversion? [closed]

╄→尐↘猪︶ㄣ submitted on 2019-12-17 16:24:39
Question: I've noticed the table of time complexities of set operations on the official Python website. But I just want to ask: what's the time complexity of converting a list to a set? For instance, l = [1, 2, 3, 4, 5]; s = set(l). I kind of know that a set is actually a hash table, but how exactly does it work
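The conversion hashes each of the n list elements and inserts it into the set's hash table, at O(1) average cost per insertion, so `set(l)` is O(n) on average. A sketch of the equivalent explicit loop:

```python
def list_to_set(items):
    """Equivalent of set(items): hash each element and insert it.
    Each insertion is O(1) on average (hash, then probe the table),
    so the whole conversion is O(n) average over n elements."""
    s = set()
    for x in items:  # n iterations
        s.add(x)     # average O(1); duplicates are simply absorbed
    return s

list_to_set([1, 2, 3, 4, 5, 5])  # {1, 2, 3, 4, 5}
```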

Time complexity of power() [duplicate]

社会主义新天地 submitted on 2019-12-17 15:40:02
Question: (This question already has answers at "The most efficient way to implement an integer based power function pow(int, int)", 17 answers. Closed 2 years ago.) I implemented this function power(), which takes two arguments a and b and computes a^b.

```cpp
typedef long long int LL;

LL power(int a, int b) {
    int i = 1;
    LL pow = 1;
    for (; i <= b; ++i)
        pow *= a;
    return pow;
}
```

Given: a^b falls in the range of long long int. Problem: How can I reduce the time complexity of my algorithm? Answer 1: Exponentiation
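The answer is cut off here, but the standard improvement it almost certainly names is exponentiation by squaring, which replaces the loop's O(b) multiplications with O(log b). A sketch:

```python
def fast_power(a, b):
    """Exponentiation by squaring: O(log b) multiplications
    instead of the b multiplications of the simple loop.
    Assumes b is a non-negative integer."""
    result = 1
    while b > 0:
        if b & 1:        # low bit of exponent set: fold base in
            result *= a
        a *= a           # square the base
        b >>= 1          # move to the next bit of the exponent
    return result

fast_power(3, 13)  # 1594323
```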

Computational complexity of base conversion

人盡茶涼 submitted on 2019-12-17 14:13:10
Question: What is the complexity of converting a very large n-bit number to a decimal representation? My thought is that the elementary algorithm of repeated integer division, taking the remainder to get each digit, would have O(M(n) log n) complexity, where M(n) is the complexity of the multiplication algorithm; however, the division is not between two n-bit numbers but rather between one n-bit number and a small constant, so it seems to me the complexity could be smaller. Answer 1: Naive base-conversion as you
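The repeated-division scheme under discussion, sketched in Python (function name mine): each pass performs one "short" division of an O(n)-bit number by the constant base, and there are about n / log2(base) passes, giving O(n²) bit operations overall for the naive method.

```python
def to_digits(x, base=10):
    """Naive base conversion by repeated division: each pass is one
    short division of an O(n)-bit number by a small constant, and
    ~n/log2(base) passes are needed -- O(n^2) bit operations total."""
    if x == 0:
        return [0]
    digits = []
    while x > 0:
        x, r = divmod(x, base)  # quotient and next (least significant) digit
        digits.append(r)
    digits.reverse()            # most significant digit first
    return digits

to_digits(2 ** 10)  # [1, 0, 2, 4]
```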

Can an O(n) algorithm ever exceed O(n^2) in terms of computation time?

走远了吗. submitted on 2019-12-17 10:16:23
Question: Assume I have two algorithms:

```cpp
for (int i = 0; i < n; i++) {
    for (int j = 0; j < n; j++) {
        // do something in constant time
    }
}
```

This is naturally O(n^2). Suppose I also have:

```cpp
for (int i = 0; i < 100; i++) {
    for (int j = 0; j < n; j++) {
        // do something in constant time
    }
}
```

This is O(n) + O(n) + ... + O(n), 100 times, which is O(n). It seems that even though my second algorithm is O(n), it will take longer (for small n, at least). Can someone expand on this? I bring it up because I often see algorithms where they will, for
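Counting the constant-time inner-loop bodies makes the point concrete: the O(n²) version executes n·n of them and the O(n) version 100·n, so the "worse" algorithm wins whenever n < 100. A sketch of the counts (function names are mine):

```python
def quadratic_ops(n):
    """Inner-loop executions of the O(n^2) double loop: n * n."""
    return n * n

def linear_ops(n):
    """Inner-loop executions of the O(n) loop with constant 100: 100 * n."""
    return 100 * n

# For small n the O(n^2) algorithm does FEWER operations;
# the crossover sits exactly at n = 100.
linear_ops(10) > quadratic_ops(10)      # 1000 vs 100
linear_ops(1000) < quadratic_ops(1000)  # 100000 vs 1000000
```

Big-O only promises that the O(n) version eventually wins as n grows; it says nothing about any particular finite n.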