complexity-theory

Finding the minimum in an unsorted array in logarithmic time

南楼画角 submitted on 2019-12-10 20:38:38
Question: Is there an algorithmic approach to find the minimum of an unsorted array in logarithmic time (O(log n))? Or is it only possible in linear time? I don't want to go parallel. Thanks, Michael

Answer 1: If the list is unsorted, your search has to be at least linear. You must look at each item at least once, because anything you haven't looked at might be less than what you've already seen.

Answer 2: It is not possible in logarithmic time, because in lg n steps you can only inspect lg n elements, and because …
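A single linear pass is both sufficient and, per the answers above, optimal: every element must be inspected, since any skipped element could be the minimum. A minimal sketch (function name is my own):

```python
def find_min(arr):
    """Return the minimum of an unsorted list with one linear pass.

    Every element must be inspected -- any element we skip could be
    smaller than everything seen so far -- so O(n) is optimal here.
    """
    if not arr:
        raise ValueError("empty array has no minimum")
    smallest = arr[0]
    for x in arr[1:]:
        if x < smallest:
            smallest = x
    return smallest
```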

Is this function in the complexity?

无人久伴 submitted on 2019-12-10 19:26:35
Question: I am not sure about the following question: is log_a(n^b) in O(log_b(n^a)) for constants a, b?

Answer 1: When we ask whether a function f(x) is in O(g(x)), we are really comparing the growth rates of the two functions (see Wikipedia: http://en.wikipedia.org/wiki/Big_O_notation). Constant factors are ignored, so 2x is in O(x). Components of the function with lower growth rates are likewise ignored, so 2x^2 + x + 1 is in O(x^2). So the question is: does log_a(n^b) have a similar …
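By the change-of-base identity, log_a(n^b) = b·ln(n)/ln(a) and log_b(n^a) = a·ln(n)/ln(b), so the two functions differ only by the constant factor (b·ln b)/(a·ln a) and the answer is yes for constants a, b > 1. A quick numerical check (the values of a and b are chosen arbitrarily):

```python
import math

def log_base(base, x):
    """log of x in the given base, via the change-of-base identity."""
    return math.log(x) / math.log(base)

# log_a(n^b) = b*ln(n)/ln(a) and log_b(n^a) = a*ln(n)/ln(b), so their
# ratio is the constant (b*ln b)/(a*ln a), independent of n.  Hence
# each function is Big-O of the other (for constants a, b > 1).
a, b = 3.0, 5.0
ratios = [log_base(a, n ** b) / log_base(b, n ** a) for n in (10, 100, 10 ** 6)]
```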

Time complexity using N way Merge

早过忘川 submitted on 2019-12-10 18:25:32
Question: I was going over the 2-way merge sort algorithm and was wondering whether reducing the number of merge passes gives a gain in running time. E.g. in a 2-way merge we have the recurrence

T(n) = 2T(n/2) + O(n)

which has time complexity N·log₂(N). If I divide the problem by 4 and merge 4 subarrays, I get

T(n) = 4T(n/4) + O(n)

which should have time complexity N·log₄(N). Since the number of merge passes is reduced, should this be something to consider when …
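A k-way merge pass is usually implemented with a min-heap, and that is where the saved passes get paid back: each of the n elements costs O(log k) per pass, so with log_k(n) passes the total is still O(n log n); the base change only moves constant factors around. A sketch of one k-way merge pass (function name is my own):

```python
import heapq

def merge_k(runs):
    """Merge k sorted lists with a min-heap.

    Each of the n elements is pushed and popped once at O(log k)
    cost, so one k-way pass is O(n log k); over log_k(n) passes the
    total work remains O(n log n).
    """
    # Seed the heap with the head of every non-empty run.
    heap = [(run[0], i, 0) for i, run in enumerate(runs) if run]
    heapq.heapify(heap)
    out = []
    while heap:
        val, i, j = heapq.heappop(heap)
        out.append(val)
        if j + 1 < len(runs[i]):
            heapq.heappush(heap, (runs[i][j + 1], i, j + 1))
    return out
```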

Calculating the complexity of Levenshtein Edit Distance

我的未来我决定 submitted on 2019-12-10 16:48:17
Question: I have been looking at this simple Python implementation of the Levenshtein edit distance all day now.

def lev(a, b):
    """Recursively calculate the Levenshtein edit distance
    between two strings, a and b. Returns the edit distance.
    """
    if "" == a:
        return len(b)  # a is an empty string
    if "" == b:
        return len(a)  # b is an empty string
    return min(lev(a[:-1], b[:-1]) + (a[-1] != b[-1]),
               lev(a[:-1], b) + 1,
               lev(a, b[:-1]) + 1)

From: http://www.clear.rice.edu/comp130/12spring
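The recursion above takes exponential time because the same prefix pairs are recomputed over and over; memoizing on the prefix lengths leaves only O(len(a)·len(b)) distinct subproblems. A sketch of that fix (not from the quoted source; the name lev_fast is my own):

```python
from functools import lru_cache

def lev_fast(a, b):
    """Levenshtein distance with memoization.

    Indexing by prefix lengths (i, j) instead of string slices lets
    lru_cache collapse the exponential recursion tree down to
    O(len(a) * len(b)) cached subproblems, each solved in O(1).
    """
    @lru_cache(maxsize=None)
    def d(i, j):
        if i == 0:
            return j          # a-prefix empty: j insertions
        if j == 0:
            return i          # b-prefix empty: i insertions
        return min(d(i - 1, j - 1) + (a[i - 1] != b[j - 1]),  # substitute
                   d(i - 1, j) + 1,                            # delete
                   d(i, j - 1) + 1)                            # insert
    return d(len(a), len(b))
```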

computational complexity of convolution

拈花ヽ惹草 submitted on 2019-12-10 16:47:45
Question: I read that the computational complexity of the general convolution algorithm is O(n^2), while by means of the FFT it is O(n log n). What about convolution in 2-D and 3-D? Any reference?

Answer 1: For two- and three-dimensional convolution and the Fast Fourier Transform the complexity is the following:

                 2D               3D
    Convolution  O(n^4)           O(n^6)
    FFT          O(n^2 log^2 n)   O(n^3 log^3 n)

Reference: Slides on Digital Image Processing, slide no. 34.

Source: https://stackoverflow.com/questions/16164749/computational-complexity-of
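For reference, the quoted O(n^2) figure for direct 1-D convolution comes from the nested loop over both signals; the count assumes both inputs have length n. A minimal sketch (function name is my own):

```python
def convolve(x, h):
    """Direct (full) 1-D convolution: O(len(x) * len(h)).

    With len(x) == len(h) == n this is the O(n^2) cost quoted above;
    an FFT-based implementation reduces it to O(n log n).
    """
    n, m = len(x), len(h)
    y = [0] * (n + m - 1)
    for i in range(n):
        for j in range(m):
            y[i + j] += x[i] * h[j]
    return y
```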

List with amortized constant or logarithmic insertion: how fast can possibly be lookup?

我们两清 submitted on 2019-12-10 15:35:58
Question: Everybody knows (or ought to know) that it is impossible to design a list data structure that supports both O(1) insertion in the middle and O(1) lookup. For instance, a linked list supports O(1) insertion but O(N) lookup, while an array supports O(1) lookup but O(N) insertion (possibly amortized O(1) for insertion at the beginning, the end, or both). However, suppose you are willing to trade O(1) insertion for:

- amortized O(1) insertion
- O(log N) insertion

Then what is the theoretical …
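One classic way to get O(log N) for both operations is a balanced tree ordered by position, with each node storing its subtree size. A sketch using an implicit treap, which gives expected O(log N) insert-at-any-index and lookup-by-index (all names are my own; this is an illustration, not the questioner's structure):

```python
import random

class Node:
    """Treap node: the key is implicit (the in-order position)."""
    __slots__ = ("val", "prio", "size", "left", "right")
    def __init__(self, val):
        self.val = val
        self.prio = random.random()  # random heap priority balances the tree
        self.size = 1
        self.left = None
        self.right = None

def size(t):
    return t.size if t else 0

def update(t):
    t.size = 1 + size(t.left) + size(t.right)

def split(t, k):
    """Split t into (first k elements, the rest)."""
    if t is None:
        return None, None
    if size(t.left) < k:              # root belongs to the left part
        left, right = split(t.right, k - size(t.left) - 1)
        t.right = left
        update(t)
        return t, right
    else:                             # root belongs to the right part
        left, right = split(t.left, k)
        t.left = right
        update(t)
        return left, t

def merge(a, b):
    """Concatenate treaps a and b (all of a before all of b)."""
    if not a:
        return b
    if not b:
        return a
    if a.prio > b.prio:
        a.right = merge(a.right, b)
        update(a)
        return a
    b.left = merge(a, b.left)
    update(b)
    return b

def insert(t, i, val):
    """Insert val so it becomes element number i (0-based)."""
    left, right = split(t, i)
    return merge(merge(left, Node(val)), right)

def get(t, i):
    """Return element number i by walking down with subtree sizes."""
    while True:
        ls = size(t.left)
        if i < ls:
            t = t.left
        elif i == ls:
            return t.val
        else:
            i -= ls + 1
            t = t.right
```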

Is this function O(N+M) or O(N*M)?

自闭症网瘾萝莉.ら submitted on 2019-12-10 15:34:40
Question:

def solution(M, A):
    result = [0] * M
    maxCount = 0
    setAll = 0
    for i in range(0, len(A)):
        if A[i] == M + 1:
            setAll += maxCount
            maxCount = 0
            result = [0] * M
        else:
            result[A[i] - 1] += 1
            if result[A[i] - 1] > maxCount:
                maxCount = result[A[i] - 1]
    for j in range(0, len(result)):
        result[j] += setAll
    return result

A = [1, 1, 1, 1, 2, 3]
M = 2
print solution(M, A)  # result = [4, 4]

A = [1, 2, 2, 4, 1, 1]
M = 3
print solution(M, A)  # result = [4, 2, 2]

By my count, solution() loops through A N …
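Note that the `result = [0] * M` reset inside the loop costs O(M) every time the value M + 1 occurs, so the code as posted is O(N*M) in the worst case despite the setAll trick. A variant that stays O(N+M) by making the reset lazy (a sketch of my own, not from the question):

```python
def solution_linear(M, A):
    """O(N + M) variant of the posted code.

    Instead of rebuilding the counter list on each reset (O(M) per
    reset), track a lazy 'floor' value that every counter is known to
    have reached; each operation is then O(1), plus one O(M) final pass.
    """
    counters = [0] * M
    max_count = 0
    floor = 0  # value every counter was last (lazily) reset to
    for op in A:
        if op == M + 1:
            floor = max_count        # lazy "set all to max"
        else:
            idx = op - 1
            counters[idx] = max(counters[idx], floor) + 1
            max_count = max(max_count, counters[idx])
    return [max(c, floor) for c in counters]
```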

Shuffling a sorted array

≯℡__Kan透↙ submitted on 2019-12-10 15:08:30
Question: If we are given a sorted array, what algorithm can we use to create an output array that has the same elements, but randomly shuffled? I am looking for an algorithm with a complexity of O(n).

Answer 1: Collections.shuffle(List) has O(n) time complexity. You can use Arrays.asList() to wrap the array so you can use this function. What it does is: for each element from the last down to the second, swap the element with a random element from …
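The swap procedure the answer describes is the Fisher–Yates shuffle. A sketch in Python rather than the answer's Java:

```python
import random

def fisher_yates(arr):
    """In-place Fisher-Yates shuffle: O(n) time, O(1) extra space.

    Walk from the last position down to the second, swapping each
    position with a uniformly chosen index at or before it; every
    permutation of the input is equally likely.
    """
    for i in range(len(arr) - 1, 0, -1):
        j = random.randint(0, i)
        arr[i], arr[j] = arr[j], arr[i]
    return arr
```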

Most frequent character in range

大城市里の小女人 submitted on 2019-12-10 13:09:11
Question: I have a string s of length n. What is the most efficient data structure / algorithm for finding the most frequent character in a range i..j? The string doesn't change over time; I just need to repeat queries asking for the most frequent char among s[i], s[i + 1], ..., s[j].

Answer 1: An array in which you hold the number of occurrences of each character. You increase the respective value while iterating through the string once. While doing this, you can remember the current max in …
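For repeated range queries, the counting idea above can be extended to per-character prefix sums: O(n·|alphabet|) preprocessing, then O(|alphabet|) per query with one subtraction per character. A sketch (the class name is my own):

```python
class RangeMode:
    """Answer 'most frequent character in s[i..j]' queries.

    prefix[c][k] holds the number of occurrences of character c in
    s[:k].  Building costs O(n * |alphabet|); a query subtracts two
    prefix counts per character, costing O(|alphabet|).
    """
    def __init__(self, s):
        alphabet = sorted(set(s))
        self.prefix = {c: [0] * (len(s) + 1) for c in alphabet}
        for k, ch in enumerate(s):
            for c in alphabet:
                self.prefix[c][k + 1] = self.prefix[c][k] + (c == ch)

    def query(self, i, j):
        """Most frequent character in s[i..j], inclusive bounds."""
        return max(self.prefix,
                   key=lambda c: self.prefix[c][j + 1] - self.prefix[c][i])
```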

time complexity of relation T(n) = T(n-1) + T(n/2) + n

ε祈祈猫儿з submitted on 2019-12-10 12:59:16
Question: For the relation T(n) = T(n-1) + T(n/2) + n, can I first solve the term T(n-1) + n, which gives O(n^2), and then solve the term T(n/2) + O(n^2), which according to the Master theorem also gives O(n^2)? Or is that wrong?

Answer 1: No, you cannot solve it using the Master theorem. You need to solve it using the Akra–Bazzi method, a cleaner generalization of the well-known Master theorem. The Master theorem assumes that the subproblems have equal size; it concerns recurrence relations of the form T(n …
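Tabulating the recurrence numerically is a cheap sanity check on any proposed bound. Note also that (assuming T is nondecreasing) unrolling the T(n-1) chain gives T(n) ≥ (n/2)·T(n/4), so T grows faster than any fixed polynomial and the O(n^2) guess is too small. A sketch for evaluating T (the base case T(1) = 1 is my assumption, and n/2 is taken as floor(n/2)):

```python
from functools import lru_cache
import sys

sys.setrecursionlimit(10000)  # T(n) recurses through T(n-1)

@lru_cache(maxsize=None)
def T(n):
    """Evaluate T(n) = T(n-1) + T(n/2) + n numerically.

    Base case T(1) = 1 is an assumption; n/2 means floor(n/2).
    Comparing tabulated values against a candidate closed form is a
    quick way to reject a wrong asymptotic guess.
    """
    if n <= 1:
        return 1
    return T(n - 1) + T(n // 2) + n
```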