complexity-theory

GCD algorithms for large integers

可紊 submitted on 2019-12-12 08:56:27
Question: I am looking for information about fast GCD computation algorithms; in particular, I would like to look at implementations of them. The most interesting to me are:
- Lehmer's GCD algorithm
- the accelerated GCD algorithm
- the k-ary algorithm
- Knuth-Schönhage with FFT

I have no information at all about the accelerated GCD algorithm; I have only seen a few articles that mention it as the most effective and fastest GCD method on medium-sized inputs (~1000 bits). They look much…
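
None of the listed algorithms appear in the truncated question, so as a point of reference here is a minimal sketch of the binary (Stein) GCD, the usual starting point that Lehmer-style word-level acceleration builds on; it is not the "accelerated GCD" the question asks about, and __builtin_ctzll is a GCC/Clang intrinsic assumption.

    #include <cassert>
    #include <cstdint>
    #include <utility>

    // Binary (Stein) GCD: replaces division with shifts and subtraction.
    // Lehmer-type algorithms speed this up further by working on the top
    // machine words of multi-precision operands.
    uint64_t binary_gcd(uint64_t a, uint64_t b) {
        if (a == 0) return b;
        if (b == 0) return a;
        int shift = __builtin_ctzll(a | b);   // common factors of two
        a >>= __builtin_ctzll(a);             // make a odd
        while (b != 0) {
            b >>= __builtin_ctzll(b);         // make b odd
            if (a > b) std::swap(a, b);       // keep a <= b
            b -= a;                           // difference of odds is even
        }
        return a << shift;
    }

    int main() {
        assert(binary_gcd(48, 180) == 12);
    }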

C++0x issue: Constant time insertion into std::set

流过昼夜 submitted on 2019-12-12 08:27:59
Question: According to this page, I can achieve constant-time insertion if I use iterator std::set::insert ( iterator position, const value_type& x ); and the position iterator I provide directly "precedes" the proper (in-order) insertion point. Now the case I'm concerned with is when I know that the value I'm inserting goes at the end (since it's the largest), e.g.:

    set<int> foo = {1, 2, 3};
    foo.insert(4); // this is an inefficient insert

According to the above criterion, I should pass the last element…
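
The question is cut off, but the standard resolution is worth recording: since C++11 the hint should point just *after* the intended position, so end() is the correct hint when appending the largest element, making the insert amortized constant time. A minimal sketch:

    #include <cassert>
    #include <set>

    int main() {
        std::set<int> foo = {1, 2, 3};
        // In C++11 the hint points just past the insertion position,
        // so end() makes appending the largest element amortized O(1).
        foo.insert(foo.end(), 4);
        assert(*foo.rbegin() == 4);
    }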

Does Big O Measure Memory Requirements Or Just Speed?

╄→尐↘猪︶ㄣ submitted on 2019-12-12 08:18:25
Question: I often hear people talk about Big O, which measures algorithms against each other. Does this measure clock cycles or space requirements? If people want to contrast algorithms based on memory usage, what measure would they use?

Answer 1: If someone says "This algorithm runs in O(n) time", he's talking about speed. If someone says "This algorithm runs in O(n) space", he's talking about memory. If he just says "This algorithm is O(n)", he's usually talking about speed (though if he says it during a…
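
A hypothetical pair of functions (names mine, not from the thread) makes the distinction concrete: both run in O(n) time, but one uses O(1) extra space and the other O(n).

    #include <cassert>
    #include <cstddef>
    #include <numeric>
    #include <vector>

    // O(n) time, O(1) extra space: a single running total.
    long long total(const std::vector<int>& v) {
        return std::accumulate(v.begin(), v.end(), 0LL);
    }

    // O(n) time, O(n) extra space: materializes every prefix sum.
    std::vector<long long> prefix_sums(const std::vector<int>& v) {
        std::vector<long long> out(v.size() + 1, 0);
        for (std::size_t i = 0; i < v.size(); ++i)
            out[i + 1] = out[i] + v[i];
        return out;
    }

    int main() {
        std::vector<int> v = {1, 2, 3, 4};
        assert(total(v) == 10);
        assert(prefix_sums(v).back() == 10);
    }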

Uses of the Ackermann function?

和自甴很熟 submitted on 2019-12-12 07:52:23
Question: In our discrete mathematics course at my university, the teacher shows his students the Ackermann function and assigns them to develop the function on paper. Besides being a benchmark for recursion optimisation, does the Ackermann function have any real uses?

Answer 1: Yes. The (inverse) Ackermann function appears in the complexity analysis of algorithms. When it does, it means you can almost ignore that term, since it grows so slowly (a lot like log(log ... log(n)...)), i.e. lg*(n). For example:…
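
For reference, a direct sketch of the textbook two-argument definition the course presumably uses; the exploding recursion depth is exactly why it doubles as a benchmark.

    #include <cstdint>
    #include <iostream>

    // Two-argument Ackermann function: not primitive recursive, and the
    // call depth explodes even for tiny arguments, which is why it is
    // used to stress-test recursion.
    uint64_t ackermann(uint64_t m, uint64_t n) {
        if (m == 0) return n + 1;
        if (n == 0) return ackermann(m - 1, 1);
        return ackermann(m - 1, ackermann(m, n - 1));
    }

    int main() {
        std::cout << ackermann(2, 3) << '\n';  // A(2, n) = 2n + 3, so 9
    }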

How can I print integers in triangle form

夙愿已清 submitted on 2019-12-12 07:45:53
Question: I want to print integers in triangle form, like this:

      1
     121
    12321

I tried this, but I do not get the expected result:

    for($i=1; $i<=3; $i++) {
        for($j=3; $j>=$i; $j--) {
            echo "  ";
        }
        for($k=1; $k<=$i; $k++) {
            echo $k;
        }
        if($i>1) {
            for($m=$i; $m>=1; $m--) {
                echo $m;
            }
        }
        echo "<br>";
    }

The output of this code is:

    1
    1221
    123321

Where am I going wrong? Please guide me.

Answer 1: Another integer solution:

    $n = 9;
    print str_pad("✭", $n, " ", STR_PAD_LEFT) . PHP_EOL;
    for ($i=0; $i<$n; $i++){
        print str_pad("", $n…
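
The bug (my reading, since the accepted fix is not in the truncated answer): the mirror loop starts at $m = $i, which re-prints the peak digit; it should start at $i - 1. The same logic in a compact C++ sketch, with the fix on the descending loop:

    #include <iostream>

    int main() {
        const int n = 3;
        for (int i = 1; i <= n; ++i) {
            for (int j = n; j > i; --j) std::cout << ' ';    // centering spaces
            for (int k = 1; k <= i; ++k) std::cout << k;     // ascending 1..i
            for (int m = i - 1; m >= 1; --m) std::cout << m; // descending, skips peak
            std::cout << '\n';
        }
    }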

Where can I find the time and space complexity of the built-in sequence types in Python?

a 夏天 submitted on 2019-12-12 07:42:05
Question: I've been unable to find a source for this information, short of looking through the Python source code myself to determine how the objects work. Does anyone know where I could find this online?

Answer 1: Check out the TimeComplexity page on the py dot org wiki. It covers sets/dicts/lists/etc., at least as far as time complexity goes.

Answer 2: Raymond D. Hettinger gives an excellent talk (slides) about Python's built-in collections called 'Core Python Containers - Under the Hood'. The version I saw…

Maximum subarray problem brute-force complexity

有些话、适合烂在心里 submitted on 2019-12-12 06:50:48
Question: What is the runtime/memory complexity of the maximum subarray problem using brute force? Can it be optimized further, especially the memory complexity? Thanks.

Answer 1: Brute force is Omega(n^2). Using divide and conquer you can do it with Theta(n lg n) complexity. Further details are available in many books, such as Introduction to Algorithms, or in various resources on the Web, such as this lecture.

Answer 2: As suggested in this answer, you can use Kadane's algorithm, which has O(n) complexity. An…
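
Answer 2 is cut off, but Kadane's algorithm is standard; a minimal sketch, which also answers the memory question since it uses O(n) time and O(1) extra space:

    #include <algorithm>
    #include <cassert>
    #include <cstddef>
    #include <vector>

    // Kadane's algorithm: best_here is the maximum sum of a subarray
    // ending at the current index; the answer is the best of these.
    int max_subarray(const std::vector<int>& a) {
        int best = a[0], best_here = a[0];
        for (std::size_t i = 1; i < a.size(); ++i) {
            best_here = std::max(a[i], best_here + a[i]);
            best = std::max(best, best_here);
        }
        return best;
    }

    int main() {
        assert(max_subarray({-2, 1, -3, 4, -1, 2, 1, -5, 4}) == 6);  // [4, -1, 2, 1]
    }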

Complexity in Dijkstra's algorithm

拥有回忆 submitted on 2019-12-12 04:36:42
Question: I've been attempting to analyze a specialized variant of Dijkstra's algorithm that I've been working on; I'm after the worst-case complexity. The algorithm uses a Fibonacci heap, which in the case of normal Dijkstra would run in O(E + V log V). However, this implementation needs to do a lookup in the inner loop where we update neighbours. This lookup executes for every edge and takes logarithmic time, where the lookup is in a data structure that contains all edges. Also, the graph…
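
The question is truncated, but the stated facts already determine the bound: the per-edge lookup adds O(log E) to each of the E relaxations, so the total is O(E + V log V) + O(E log E) = O(E log E + V log V). Since E <= V^2 implies log E <= 2 log V, this simplifies to O(E log V + V log V); in other words, the extra lookup costs the same asymptotic factor as running Dijkstra with a plain binary heap.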

How can I figure out the order of complexity?

允我心安 submitted on 2019-12-12 02:33:36
Question: I think I know the complexity of these two snippets, but I simply can't find the right equations to prove it. I assume the first one is O(log log n) and the second one is O(n^2).

    def f1(lst):
        i = 2
        while i < len(lst):
            print(lst[i])
            i **= 2

The second snippet:

    def f2(lst):
        i = len(lst)
        while i > 0:
            for j in range(i):
                for k in range(10**5, j, -5):
                    print(i)
            i -= 2

Answer 1: I think you can try to write down the recurrence first, and then use the master theorem or something else to solve it. For the…
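
Since the answer is cut off, both guesses can be checked directly (my own working). In f1, after k iterations i = 2^(2^k), and the loop stops once 2^(2^k) >= n, i.e. k >= log2(log2(n)), so f1 is Theta(log log n). In f2, the outer loop runs about n/2 times with i = n, n-2, ...; each pass costs at least Theta(i) just to iterate j over range(i), and summing i over all passes gives Theta(n^2). The innermost k loop adds only O(1) work per outer pass (a large constant: 10**5 is fixed, and range(10**5, j, -5) is empty once j exceeds it), which is dominated by the Theta(n^2) term, so f2 is Theta(n^2) as assumed.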

How do I write Big O notation?

走远了吗. submitted on 2019-12-12 01:33:51
Question: I don't really know how to express things in big-O notation. I've seen several sources talking about this, but they only made me more uncertain. When I write big-O, should I just ignore the constants? Examples:

1. 0.02N³
2. 4N*log(2^N)
3. 24N*log(N)
4. N²
5. N*sqrt(N)

This is what I mean by "ignore the constants":

1. O(N³)
2. O(N*log(2^N))
3. O(N*log(N))
4. O(N²)
5. O(N*sqrt(N))

And how fast do O(N*log(2^N)) and O(N*sqrt(N)) grow compared to the other examples? I really appreciate the…
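
The thread is cut off before any answer, but the comparison the question asks for can be worked out directly. Example 2 hides a simplification: log(2^N) = N*log(2), so 4N*log(2^N) = Theta(N²), the same class as example 4. With that, the growth order from slowest to fastest is: N*log(N) < N*sqrt(N) = N^1.5 < N² = N*log(2^N) < N³.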