complexity-theory

In Complexity Analysis why is ++ considered to be 2 operations?

送分小仙女 submitted on 2019-12-01 20:10:36
In my Computer Science II class, the professor considers ++, --, *=, etc. to be 2 operations. However, at the assembly level this is not really two operations. Can someone explain, or is this just for the sake of simplicity?

I'd actually consider it to be 3 operations: read, increment (or whatever the operator does), write. That's assuming it's reading from some sort of shared memory into some sort of local storage (e.g. a register or the stack), operating on the local storage, then writing back. How many operations it is at the assembly level will depend on what you're incrementing, the platform, the hardware, etc. Because
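To make the read / increment / write decomposition concrete, here is a small illustration of my own (not from the question): the three steps of counter++ spelled out by hand in Java.

    public class IncrementSteps {
        static int counter = 0;

        public static void main(String[] args) {
            // What counter++ conceptually expands to:
            int tmp = counter;   // 1. read the current value
            tmp = tmp + 1;       // 2. compute the incremented value
            counter = tmp;       // 3. write the result back
            System.out.println(counter); // prints 1
        }
    }

How many instructions this really is depends on the target: for a local int, javac typically emits a single iinc bytecode, whereas incrementing a field genuinely becomes a load / add / store sequence — which is exactly why the count depends on what you're incrementing, the platform, and the hardware.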

Find the unduplicated element in a sorted array

巧了我就是萌 submitted on 2019-12-01 17:46:29
Source: Microsoft interview question. Given a sorted array in which every element is present twice except one, which is present a single time, we need to find that element. A standard O(n) solution is to do an XOR over the list, which returns the unduplicated element (since all duplicated elements cancel out). Is it possible to solve this more quickly if we know the array is sorted?

Yes, you can use the sortedness to reduce the complexity to O(log n) by doing a binary search. Since the array is sorted, before the unduplicated element each value occupies the spots 2*k and 2*k+1 in the array
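A hedged Java sketch of that binary search (variable names are mine, not the interviewer's; it assumes the array length is odd, which it must be if exactly one element is unpaired):

    public class SingleInSortedArray {
        // Returns the element that appears exactly once; every other element
        // appears exactly twice, and the array is sorted.
        static int findSingle(int[] a) {
            int lo = 0, hi = a.length - 1;
            while (lo < hi) {
                int mid = lo + (hi - lo) / 2;
                if (mid % 2 == 1) mid--;   // align mid to an even index
                if (a[mid] == a[mid + 1]) {
                    lo = mid + 2;          // pairs intact so far: go right
                } else {
                    hi = mid;              // single element is at mid or left of it
                }
            }
            return a[lo];
        }

        public static void main(String[] args) {
            System.out.println(findSingle(new int[]{1, 1, 2, 3, 3, 7, 7})); // prints 2
        }
    }

The invariant is the one described above: left of the single element, pairs start at even indices (2*k, 2*k+1); at and after it, that alignment is broken, so the search homes in on the first even index whose pair is broken.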

Should I consider memmove() O(n) or O(1)?

倖福魔咒の submitted on 2019-12-01 16:08:52
This may be a silly question, but I want to calculate the complexity of one of my algorithms, and I am not sure what complexity to consider for the memmove() function. Can you please help / explain?

    void * memmove ( void * destination, const void * source, size_t num );

So is the complexity O(num) or O(1)? I suppose it's O(num), but I am not sure, as I don't yet understand what's going on under the hood.

Since the running time of memmove increases in direct proportion to the number of bytes it has to move, it is O(n). What are you applying the memmove() operation to
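memmove itself is a C library routine, and real implementations copy word-at-a-time, handle alignment, and so on, but a deliberately simplified sketch (written in Java here to match the other code on this page, with names of my own) makes the linear cost visible: every one of the num bytes has to be touched once, with the copy direction chosen so that overlapping regions still work.

    public class MoveBytes {
        // Simplified, overlap-safe byte move within one array:
        // copy forwards normally, backwards when dest starts inside src.
        static void moveBytes(byte[] mem, int dest, int src, int num) {
            if (dest <= src) {
                for (int i = 0; i < num; i++) mem[dest + i] = mem[src + i];
            } else {
                for (int i = num - 1; i >= 0; i--) mem[dest + i] = mem[src + i];
            }
        }

        public static void main(String[] args) {
            byte[] mem = {1, 2, 3, 4, 5, 0, 0};
            moveBytes(mem, 2, 0, 5);   // overlapping move
            System.out.println(java.util.Arrays.toString(mem)); // [1, 2, 1, 2, 3, 4, 5]
        }
    }

Either branch performs num assignments, hence O(num) — usually written O(n) with n standing for the number of bytes moved.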

Why can't the median-of-medians algorithm use block size 3?

眉间皱痕 submitted on 2019-12-01 16:06:47
I am working through the analysis of deterministic median finding under the assumption that the input is divided into groups of 3 rather than 5, and the question is: where does it break down?

The deterministic median-finding algorithm, SELECT(i, n):

1. Divide the n elements into groups of 5. Find the median of each 5-element group by rote.
2. Recursively SELECT the median x of the ⌊n/5⌋ group medians to be the pivot.
3. Partition around the pivot x. Let k = rank(x).
4. If i = k then return x; elseif i < k then recursively SELECT the ith smallest element in the lower part; else recursively SELECT the (i–k)th
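Where it breaks down can be read off the standard recurrence. With groups of 5, the median-of-medians pivot is guaranteed to have roughly 3n/10 elements on each side, so the recursive call after partitioning sees at most about 7n/10 elements:

    T(n) <= T(n/5) + T(7n/10) + c*n      (groups of 5)

Because 1/5 + 7/10 < 1, the work shrinks geometrically from level to level and the recurrence solves to O(n). With groups of 3, the pivot is only guaranteed about n/3 elements on each side, so the recursion after partitioning can see up to about 2n/3 elements:

    T(n) <= T(n/3) + T(2n/3) + c*n       (groups of 3)

Now 1/3 + 2/3 = 1: each level still does Θ(n) total work, so the recurrence solves to Θ(n log n) rather than O(n). The linear-time guarantee is lost at exactly this step (small additive constants are omitted here for readability).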

What is complexity of size() for TreeSet portion view in Java

限于喜欢 submitted on 2019-12-01 15:58:32
I'm wondering what the time complexity of size() is for a portion view of a TreeSet. Let's say I'm adding random numbers to the set (and I do not care about duplicates):

    final TreeSet<Integer> tree = new TreeSet<Integer>();
    final Random r = new Random();
    final int N = 1000;
    for ( int i = 0; i < N; i++ ) {
        tree.add( r.nextInt() );
    }

and now I'm wondering what the complexity is for size() calls such as:

    final int M = 100;
    for ( int i = 0; i < M; i++ ) {
        final int f = r.nextInt();
        final int t = r.nextInt();
        System.out.println( tree.headSet( t ).size() );
        System.out.println( tree.tailSet( f ).size() );
        if ( f > t )
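For what it's worth, in the OpenJDK sources I have looked at, size() on a headSet/tailSet view is computed by walking the view (with some internal caching), so it costs time linear in the size of the view rather than O(1); treat that as my assumption here, not something stated above. A rough, hypothetical micro-benchmark (sizes and names are mine) to sanity-check it empirically:

    import java.util.Random;
    import java.util.TreeSet;

    public class HeadSetSizeTiming {
        public static void main(String[] args) {
            final Random r = new Random(42);
            for (int n = 100_000; n <= 1_600_000; n *= 2) {
                final TreeSet<Integer> tree = new TreeSet<>();
                while (tree.size() < n) tree.add(r.nextInt());

                long start = System.nanoTime();
                long total = 0;
                for (int i = 0; i < 100; i++) {
                    // a fresh view every time, so no cached size can be reused
                    total += tree.headSet(r.nextInt()).size();
                }
                long micros = (System.nanoTime() - start) / 1_000;
                System.out.println(n + " elements: " + micros + " us (checksum " + total + ")");
            }
        }
    }

If size() were O(1), the timings would stay flat as n doubles; if it walks the view, they grow roughly in proportion to n (with the usual caveats about JIT warm-up and noise in such a crude benchmark).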

Why are constants ignored in asymptotic analysis?

喜欢而已 submitted on 2019-12-01 11:17:42
Why are constants ignored in asymptotic analysis?

Constant factors are ignored because running time and memory consumption (the two properties most often measured using the O-notation) are much harder to reason about when constant factors are taken into account. If we define U( f(n) ) to be the set of all functions g for which there exists an N such that for all n > N: g(n) <= f(n) (i.e. the same as O but without the constant factor), it is much harder to show that an algorithm's running time is in U( f(n) ) than in O( f(n) ). For one thing, we would need an exact unit for measuring running time. Using a CPU
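For contrast, the usual definition keeps the constant: g is in O( f(n) ) iff there exist c > 0 and N such that g(n) <= c * f(n) for all n > N. A tiny worked example shows the difference: g(n) = 5n + 3 is in O(n), because 5n + 3 <= 6n whenever n >= 3 (take c = 6, N = 3), but it is not in U(n), since 5n + 3 <= n holds for no positive n. The constant c is what absorbs machine- and compiler-dependent factors, which is exactly what the U-style definition cannot do.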

Bin Packing: Set amount of bins, want to minimize the max bin weight

风流意气都作罢 submitted on 2019-12-01 10:52:13
Given n bins of infinite capacity, I want to pack m items into them (each with a specific weight), whilst minimizing the weight of the heaviest bin. This isn't a traditional bin packing / knapsack problem, where a bin has a finite capacity and you attempt to minimize the number of bins used; I have a fixed number of bins and want to use them all in order to make the heaviest bin's weight as low as possible. Is there a name for this problem? I have searched a number of papers with various keywords, but I have found nothing similar. Cheers.

If the number of bins is the constraint, instead of the
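For context (this is my own aside, not part of the truncated answer above): with a fixed number of bins and the goal of minimizing the heaviest one, a common baseline heuristic is longest-processing-time-first greedy — sort the items by descending weight and always drop the next item into the currently lightest bin. A sketch in Java, with all names and the example weights mine; it is a heuristic, not guaranteed optimal:

    import java.util.Arrays;
    import java.util.PriorityQueue;

    public class LptBalance {
        // Returns the total weight per bin after greedy assignment.
        static long[] pack(int[] weights, int bins) {
            int[] sorted = weights.clone();
            Arrays.sort(sorted);                                  // ascending
            PriorityQueue<long[]> byLoad =                        // {load, binIndex}
                new PriorityQueue<>((a, b) -> Long.compare(a[0], b[0]));
            long[] load = new long[bins];
            for (int i = 0; i < bins; i++) byLoad.add(new long[]{0, i});
            for (int i = sorted.length - 1; i >= 0; i--) {        // heaviest item first
                long[] lightest = byLoad.poll();                  // currently lightest bin
                lightest[0] += sorted[i];
                load[(int) lightest[1]] = lightest[0];
                byLoad.add(lightest);
            }
            return load;
        }

        public static void main(String[] args) {
            long[] loads = pack(new int[]{7, 5, 4, 3, 3, 2}, 3);
            System.out.println(Arrays.toString(loads));           // bin loads 7, 8, 9 in some order
        }
    }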
