complexity-theory

Do iterative and recursive versions of an algorithm have the same time complexity?

Submitted by 假如想象 on 2019-12-17 18:44:04

Question: Say, for example, the iterative and recursive versions of the Fibonacci series. Do they have the same time complexity?

Answer 1: The answer depends strongly on your implementation. For the example you gave there are several possible solutions, and I would say that the naive approach has better complexity when implemented iteratively. Here are the two implementations:

    int iterative_fib(int n) {
        if (n <= 2) {
            return 1;
        }
        int a = 1, b = 1, c;
        for (int i = 0; i < n - 2; ++i) {
            c = a + b;
            a = b;
            b = c;
        }
        return b;
    }
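The answer is cut off above before the second implementation. For comparison, a minimal sketch of the naive recursive version (our own reconstruction; the original answer's exact code is not shown here) illustrates why the iterative loop wins:

```cpp
#include <cassert>

// Naive recursion recomputes the same subproblems over and over:
// each call spawns two more, giving exponential O(phi^n) time versus
// the O(n) of the iterative loop. Memoization would restore O(n).
int recursive_fib(int n) {
    if (n <= 2) {
        return 1;
    }
    return recursive_fib(n - 1) + recursive_fib(n - 2);
}
```

So the two versions of the *naive* algorithm do not have the same complexity; only a memoized recursion matches the iterative one.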

What is O(log* N)?

Submitted by 独自空忆成欢 on 2019-12-17 17:26:39

Question: What is O(log* N)? I know big-O; the log* part is unknown to me.

Answer 1: O(log* N) is the "iterated logarithm": in computer science, the iterated logarithm of n, written log* n (usually read "log star"), is the number of times the logarithm function must be iteratively applied before the result is less than or equal to 1.

Answer 2: The log* N part is an iterated logarithm, which grows very slowly, much more slowly than plain log N. You basically keep iteratively 'logging' the answer until it gets below one (e.g.: …
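The definition above translates directly into code; a minimal sketch (function name ours):

```cpp
#include <cassert>
#include <cmath>

// Iterated logarithm: count how many times log2 must be applied
// before the value drops to <= 1. It grows extremely slowly:
// log*(16) = 3, log*(65536) = 4, and log*(2^65536) is only 5.
int log_star(double n) {
    int count = 0;
    while (n > 1.0) {
        n = std::log2(n);
        ++count;
    }
    return count;
}
```

Because log*(N) is at most 5 for any N that fits in physical memory, an O(log* N) factor is effectively constant in practice.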

How to understand that the knapsack problem is NP-complete?

Submitted by 安稳与你 on 2019-12-17 17:24:06

Question: We know that the knapsack problem can be solved in O(nW) complexity by dynamic programming, yet we say it is an NP-complete problem. I find this hard to understand. (n is the number of items; W is the maximum volume.)

Answer 1: O(n*W) looks like polynomial time, but it is not; it is pseudo-polynomial. Time complexity measures the time an algorithm takes as a function of the length in bits of its input. The dynamic programming solution is indeed linear in the value of W, but …
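To make "linear in the value of W" concrete, here is a hedged sketch of the standard O(n·W) dynamic program (names ours, not from the original answer). The table has W+1 entries, so adding one bit to W doubles the work even though the input grew by only one bit:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// 0/1 knapsack by dynamic programming: O(n*W) time, where W is the
// capacity *value*. The input encodes W in only O(log W) bits, so the
// running time is exponential in the input length -- pseudo-polynomial.
int knapsack(const std::vector<int>& weight,
             const std::vector<int>& value, int W) {
    std::vector<int> best(W + 1, 0);  // best[c] = max value within capacity c
    for (std::size_t i = 0; i < weight.size(); ++i) {
        for (int c = W; c >= weight[i]; --c) {       // descend: each item used once
            best[c] = std::max(best[c], best[c - weight[i]] + value[i]);
        }
    }
    return best[W];
}
```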

Linear time majority algorithm?

Submitted by  ̄綄美尐妖づ on 2019-12-17 15:37:37

Question: Can anyone think of a linear-time algorithm for determining a majority element in a list of elements? The algorithm should use O(1) space. If n is the size of the list, a majority element is an element that occurs at least ceil(n / 2) times.

[Input] 1, 2, 1, 1, 3, 2
[Output] 1

[Editor Note] This question has a technical mistake. I preferred to leave it so as not to spoil the wording of the accepted answer, which corrects the mistake and discusses why. Please check the accepted answer.

Answer 1: I …
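The accepted answer is truncated above. The classic algorithm meeting these bounds is Boyer-Moore majority voting; whether that is what the accepted answer used is not visible in this excerpt. A sketch, using the corrected definition of majority ("more than n/2 occurrences", which is presumably the mistake the editor note refers to):

```cpp
#include <cassert>
#include <vector>

// Boyer-Moore majority vote: O(n) time, O(1) extra space.
// Pass 1 finds a candidate by pairing off distinct values; pass 2
// verifies it really occurs more than n/2 times. Returns -1 if no
// majority exists (sketch; assumes non-negative elements so -1 is
// free to use as a sentinel).
int majority(const std::vector<int>& a) {
    int candidate = -1, count = 0;
    for (int x : a) {                    // pass 1: find a candidate
        if (count == 0) { candidate = x; count = 1; }
        else if (x == candidate) { ++count; }
        else { --count; }
    }
    int occurrences = 0;                 // pass 2: verify the candidate
    for (int x : a) {
        if (x == candidate) ++occurrences;
    }
    return 2 * occurrences > static_cast<int>(a.size()) ? candidate : -1;
}
```

Note that with the strict ">n/2" definition the sample input above has no majority element, which is exactly the kind of discrepancy the editor note flags.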

Given a set of points, find if any of the three points are collinear

Submitted by 独自空忆成欢 on 2019-12-17 15:33:11

Question: What is the best algorithm to find whether any three points in a set of n points are collinear? Please also explain the complexity if it is not trivial. Thanks, Bala

Answer 1: If you can come up with a better-than-O(N^2) algorithm, you can publish it! This problem is 3SUM-hard, and whether there is a sub-quadratic algorithm (i.e., better than O(N^2)) for it is an open problem. Many common computational geometry problems (including yours) have been shown to be 3SUM-hard, and this class of problems is …
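As a baseline, the trivial cubic check can be written with exact integer arithmetic (a sketch, names ours; sorting the slopes around each point improves this to O(N^2 log N), but beating O(N^2) is the open problem the answer mentions):

```cpp
#include <cassert>
#include <utility>
#include <vector>

using Point = std::pair<long long, long long>;

// Brute-force check: O(N^3). Uses the cross product, so there is no
// division and no floating-point error; three points are collinear
// exactly when the cross product of the two vectors they span is zero.
bool has_collinear_triple(const std::vector<Point>& p) {
    for (std::size_t i = 0; i < p.size(); ++i)
        for (std::size_t j = i + 1; j < p.size(); ++j)
            for (std::size_t k = j + 1; k < p.size(); ++k) {
                long long cross =
                    (p[j].first - p[i].first) * (p[k].second - p[i].second) -
                    (p[j].second - p[i].second) * (p[k].first - p[i].first);
                if (cross == 0) return true;
            }
    return false;
}
```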

Explanation of Algorithm for finding articulation points or cut vertices of a graph

Submitted by 送分小仙女□ on 2019-12-17 15:29:02

Question: I have searched the net and could not find any explanation of a DFS algorithm for finding all articulation vertices of a graph; there is not even a wiki page. From reading around, I got to know the basic facts from here (PDF). There is a variable at each node which looks at back edges and finds the closest and uppermost node towards the root; after processing all edges it would be found. But I do not understand how to find this down & up variable at each node during the …
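The "down & up" values the asker mentions are usually called disc (DFS discovery time) and low (low-link: the earliest discovery time reachable from a node's subtree using at most one back edge). A hedged sketch of the standard DFS approach (our own code, not from the linked PDF):

```cpp
#include <algorithm>
#include <cassert>
#include <set>
#include <vector>

// Articulation points via one DFS pass. A non-root vertex v is a cut
// vertex iff some DFS child c has low[c] >= disc[v] (c's subtree cannot
// climb above v); the root is a cut vertex iff it has >= 2 DFS children.
void dfs_cut(int v, int parent, int& timer,
             const std::vector<std::vector<int>>& adj,
             std::vector<int>& disc, std::vector<int>& low,
             std::set<int>& cuts) {
    disc[v] = low[v] = timer++;
    int children = 0;
    for (int to : adj[v]) {
        if (to == parent) continue;
        if (disc[to] != -1) {                    // back edge: can climb up
            low[v] = std::min(low[v], disc[to]);
        } else {                                 // tree edge: recurse first
            dfs_cut(to, v, timer, adj, disc, low, cuts);
            low[v] = std::min(low[v], low[to]);
            if (parent != -1 && low[to] >= disc[v]) cuts.insert(v);
            ++children;
        }
    }
    if (parent == -1 && children > 1) cuts.insert(v);   // root rule
}

std::set<int> articulation_points(const std::vector<std::vector<int>>& adj) {
    int n = static_cast<int>(adj.size()), timer = 0;
    std::vector<int> disc(n, -1), low(n, -1);
    std::set<int> cuts;
    for (int v = 0; v < n; ++v)
        if (disc[v] == -1) dfs_cut(v, -1, timer, adj, disc, low, cuts);
    return cuts;
}
```

The key point is that low[v] is computed bottom-up during the same DFS that assigns disc[v], so one O(V+E) pass suffices.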

Computational complexity of base conversion

Submitted by 人盡茶涼 on 2019-12-17 14:13:10

Question: What is the complexity of converting a very large n-bit number to a decimal representation? My thought is that the elementary algorithm of repeated integer division, taking the remainder to get each digit, would have O(M(n) log n) complexity, where M(n) is the complexity of the multiplication algorithm. However, the division is not between two n-bit numbers but between one n-bit number and a small constant, so it seems to me the complexity could be smaller.

Answer 1: Naive base conversion as you …
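The repeated-division scheme from the question, sketched on a machine word (for a true n-bit bignum each division by 10 still touches all n remaining bits, and there are O(n) digits, which is where the naive method's quadratic total comes from; divide-and-conquer conversion improves this to roughly O(M(n) log n)):

```cpp
#include <algorithm>
#include <cassert>
#include <string>

// Repeated division: peel off the lowest decimal digit each round.
// Digits come out least-significant first, so reverse at the end.
std::string to_decimal(unsigned long long x) {
    if (x == 0) return "0";
    std::string digits;
    while (x > 0) {
        digits.push_back(static_cast<char>('0' + x % 10));  // remainder = next digit
        x /= 10;
    }
    std::reverse(digits.begin(), digits.end());
    return digits;
}
```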

Calculating the number of “inversions” in a permutation

Submitted by 我与影子孤独终老i on 2019-12-17 10:38:01

Question: Let A be an array of size N. We call a pair of indexes (i, j) an "inversion" if i < j and A[i] > A[j]. I need to find an algorithm that receives an array of size N (with unique numbers) and returns the number of inversions in O(n*log(n)) time.

Answer 1: You can use the merge sort algorithm. In the merge step, the left and right halves are both sorted in ascending order, and we want to merge them into a single sorted array. Note that all the elements in the right half have higher indexes than …
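A sketch of the merge-sort counting idea the answer describes (names ours):

```cpp
#include <cassert>
#include <vector>

// Count inversions while merge-sorting a[lo, hi). When an element from
// the right half is emitted before remaining left-half elements, each
// of those left-half elements forms an inversion with it (they have
// smaller indexes but larger values). Total time O(n log n).
long long count_inversions(std::vector<int>& a, int lo, int hi) {
    if (hi - lo < 2) return 0;
    int mid = lo + (hi - lo) / 2;
    long long inv = count_inversions(a, lo, mid) +
                    count_inversions(a, mid, hi);
    std::vector<int> merged;
    merged.reserve(hi - lo);
    int i = lo, j = mid;
    while (i < mid && j < hi) {
        if (a[i] <= a[j]) {
            merged.push_back(a[i++]);
        } else {
            inv += mid - i;            // a[j] inverts with all of a[i..mid)
            merged.push_back(a[j++]);
        }
    }
    while (i < mid) merged.push_back(a[i++]);
    while (j < hi) merged.push_back(a[j++]);
    for (int k = lo; k < hi; ++k) a[k] = merged[k - lo];
    return inv;
}
```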

Hashtable in C++?

Submitted by 廉价感情. on 2019-12-17 10:17:03

Question: I usually use the C++ stdlib map whenever I need to store some data associated with a specific type of key (e.g., a string or other object). The stdlib map implementation is based on trees, which provides better lookup performance (O(log n)) than a linear search through a standard array or stdlib vector (O(n)). My question is: do you know of any "standard" C++ hashtable implementation that provides even better performance (O(1))? Something similar to what is available in the Hashtable class from the Java API.

Answer 1: If …
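For reference, C++11 (which postdates this question) standardized exactly this as std::unordered_map, with average-case O(1) lookup and insertion; older codebases used compiler-specific hash_map, TR1, or Boost equivalents. A minimal sketch:

```cpp
#include <cassert>
#include <string>
#include <unordered_map>

// std::unordered_map is the standard-library hash table: average O(1)
// insert and find, versus O(log n) for the tree-based std::map.
int lookup_demo() {
    std::unordered_map<std::string, int> ages;
    ages["alice"] = 30;                 // average O(1) insert
    ages["bob"] = 25;
    auto it = ages.find("alice");       // average O(1) lookup
    return it != ages.end() ? it->second : -1;
}
```

The trade-off is that unordered_map does not keep keys sorted, so range queries and ordered iteration still call for std::map.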

Can an O(n) algorithm ever exceed O(n^2) in terms of computation time?

Submitted by 走远了吗. on 2019-12-17 10:16:23

Question: Assume I have two algorithms:

    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            // do something in constant time
        }
    }

This is naturally O(n^2). Suppose I also have:

    for (int i = 0; i < 100; i++) {
        for (int j = 0; j < n; j++) {
            // do something in constant time
        }
    }

This is O(n) + O(n) + ... + O(n) (100 times) = O(n). It seems that even though my second algorithm is O(n), it will take longer. Can someone expand on this? I bring it up because I often see algorithms where they will, for …
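The operation counts can be compared directly; a tiny sketch of the constant-factor arithmetic (cost models ours, counting one unit per inner-loop iteration):

```cpp
#include <cassert>

// Big-O hides constant factors: the first loop does n*n units of work,
// the second does 100*n. For n < 100 the "slower" O(n^2) algorithm
// actually performs fewer operations; asymptotics only take over once
// n passes the crossover point at n = 100.
long long quadratic_ops(long long n) { return n * n; }
long long linear_ops(long long n)    { return 100 * n; }
```

This is exactly why practical algorithms (e.g. hybrid sorts) switch to an asymptotically worse method below a size threshold.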