complexity-theory

Python heapq vs. sorted complexity and performance

﹥>﹥吖頭↗ submitted on 2019-11-28 23:39:09
Question: I'm relatively new to Python (using v3.x syntax) and would appreciate notes on the complexity and performance of heapq vs. sorted. I've already implemented a heapq-based solution for a greedy 'find the best job schedule' algorithm, but then I learned about the possibility of using sorted together with operator.itemgetter() and reverse=True. Sadly, I could not find any explanation of the expected complexity and/or performance of sorted vs. heapq.

Answer 1: If you use a binary heap to pop all elements in order, what you are doing is basically heapsort. It is slower in practice than the sort algorithm in sorted (Timsort), although both are O(n log n) in the worst case.
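A quick sketch of the two approaches using only the standard library (the job list here is made up). Sorting once is usually faster when you need the full order; a heap wins when you only need the top k:

    import heapq
    from operator import itemgetter

    jobs = [('job_a', 3), ('job_b', 9), ('job_c', 5)]  # (name, score) pairs

    # sorted: Timsort, O(n log n), best when the whole ordering is needed.
    by_score = sorted(jobs, key=itemgetter(1), reverse=True)

    # heapq: O(n) heapify plus O(log n) per pop. Popping every element is
    # heapsort; stopping after k pops costs only O(n + k log n).
    heap = [(-score, name) for name, score in jobs]  # negate scores for a max-heap
    heapq.heapify(heap)
    top_two = [heapq.heappop(heap) for _ in range(2)]

    print(by_score[0], top_two[0])  # both rank job_b (score 9) first

For the top-k case, heapq.nlargest(k, jobs, key=itemgetter(1)) packages this pattern directly.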

Why are NP problems called that way (and NP-hard and NP-complete)?

巧了我就是萌 submitted on 2019-11-28 23:22:10
Question: Really... I'm having the last test before graduation this Tuesday, and this is one of the things I never could understand. I realize that a solution to an NP problem can be verified in polynomial time, but what does determinism have to do with that? And if you could explain where NP-complete and NP-hard got their names, that would be great (I'm pretty sure I get the meaning of them, I just don't see what their names have to do with what they are). Sorry if that's trivial, I just can't seem to get it (-: Thanks all!

Answer 1: P is the class of all problems which can be solved by a deterministic Turing machine in polynomial time. NP is the class of all problems which can be solved by a non-deterministic Turing machine in polynomial time, and that is exactly the class of problems whose solutions can be verified deterministically in polynomial time.
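The standard textbook definitions, stated compactly (background material, not part of the original answer):

    \begin{aligned}
    \mathrm{P}  &= \{\, L \mid \text{some deterministic TM decides } L \text{ in } O(n^k) \text{ steps, for a constant } k \,\} \\
    \mathrm{NP} &= \{\, L \mid \text{some nondeterministic TM decides } L \text{ in } O(n^k) \text{ steps, for a constant } k \,\}
    \end{aligned}

NP stands for Nondeterministic Polynomial, which is where determinism enters the naming. A problem is NP-hard if every problem in NP reduces to it in polynomial time (i.e., it is at least as hard as all of NP), and NP-complete if it is NP-hard and also itself in NP.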

Big O, what is the complexity of summing a series of n numbers?

好久不见. submitted on 2019-11-28 22:57:33
Question: I always thought the complexity of 1 + 2 + 3 + ... + n is O(n), and that summing two n-by-n matrices would be O(n^2). But today I read in a textbook, "by the formula for the sum of the first n integers, this is n(n+1)/2", which expands to (1/2)n^2 + (1/2)n, and is thus O(n^2). What am I missing here?

Answer 1: Big O notation can be used to describe the growth rate of any function. In this case the book is not talking about the time complexity of computing the value, but about the value itself, and n(n+1)/2 is O(n^2). n(n+1)/2 is the quick way to sum a consecutive sequence of n integers (starting from 1), and computing it takes constant time.
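A small sketch of the distinction: the loop takes O(n) time, the closed form takes O(1) time, and both return the same value, which itself grows as Θ(n²):

    def sum_by_loop(n):
        # O(n) time: n additions.
        total = 0
        for i in range(1, n + 1):
            total += i
        return total

    def sum_by_formula(n):
        # O(1) time: one multiply, one add, one halving.
        return n * (n + 1) // 2

    n = 1000
    assert sum_by_loop(n) == sum_by_formula(n) == 500500
    # The *value* 500500 is on the order of n^2 = 1,000,000 -- the growth
    # of the value itself is what the book's O(n^2) refers to.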

Fast weighted random selection from very large set of values

元气小坏坏 submitted on 2019-11-28 22:56:16
Question: I'm currently working on a problem that requires the random selection of an element from a set. Each of the elements has a weight (selection probability) associated with it. My problem is that for sets with a small number of elements, say 5-10, the complexity (running time) of my solution is acceptable, but as the number of elements grows, say to 1K or 10K, the running time becomes unacceptable. My current strategy is: select a random value X in the range [0,1), then iterate over the elements, accumulating their weights, until the accumulated weight exceeds X.
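One common improvement (not from the original question) is to precompute cumulative weights once and binary-search them per draw, turning each selection from O(n) into O(log n); a sketch with made-up data:

    import bisect
    import itertools
    import random

    def make_weighted_picker(elements, weights):
        # O(n) one-time preprocessing: table of cumulative weights.
        cumulative = list(itertools.accumulate(weights))
        total = cumulative[-1]

        def pick():
            # O(log n) per draw: binary-search for the first cumulative
            # weight greater than a uniform point in [0, total).
            x = random.random() * total
            return elements[bisect.bisect_right(cumulative, x)]

        return pick

    pick = make_weighted_picker(['a', 'b', 'c'], [0.1, 0.3, 0.6])
    print(pick())  # 'c' roughly 60% of the time

Python 3.6+ also ships random.choices(elements, weights=weights), which does the cumulative-sum work internally; for huge numbers of draws from a fixed distribution, the alias method gives O(1) per draw after O(n) setup.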

How do you calculate cyclomatic complexity for R functions?

假如想象 submitted on 2019-11-28 20:42:18
Question: Cyclomatic complexity measures how many possible branches can be taken through a function. Is there an existing function/tool to calculate it for R functions? If not, suggestions are appreciated for the best way to write one.

Answer 1: A cheap start towards this would be to count up all the occurrences of if, ifelse, or switch within your function (a rough sketch of that counting approach follows below). To get a real answer, though, you need to understand when branches start and end, which is much harder; maybe some R parsing tools would get us started? Also, I just found a new package called cyclocomp (released 2016). Check it out! You can also use codetools to walk the parsed code.
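A minimal sketch of that "cheap start" as keyword counting over R source text (the function name is hypothetical, and this naive token count ignores strings, comments, and real branch structure, which a proper tool like cyclocomp handles):

    import re

    # Branching keywords that each add one to a crude cyclomatic estimate.
    BRANCH_KEYWORDS = ('if', 'ifelse', 'switch', 'while', 'for')

    def rough_cyclomatic_complexity(r_source: str) -> int:
        # Complexity starts at 1 (the straight-line path), plus one per
        # branch keyword; \b avoids matching inside identifiers.
        pattern = r'\b(' + '|'.join(BRANCH_KEYWORDS) + r')\b'
        return 1 + len(re.findall(pattern, r_source))

    r_code = '''
    classify <- function(x) {
      if (x > 0) "positive" else if (x < 0) "negative" else "zero"
    }
    '''
    print(rough_cyclomatic_complexity(r_code))  # 3: base path + two if branches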

Example of a factorial time algorithm O(n!)

前提是你 submitted on 2019-11-28 20:11:51
Question: I'm studying time complexity in school, and our main focus seems to be on polynomial-time O(n^c) algorithms and quasi-linear-time O(n log n) algorithms, with the occasional exponential-time O(c^n) algorithm as an example for run-time perspective. Larger time complexities, however, were never covered. I would like to see an example problem with an algorithmic solution that runs in factorial time, O(n!). The algorithm may be a naive approach to solving a problem, but it cannot be artificially bloated to run in factorial time. Extra street cred if the factorial-time algorithm is the best known solution to the problem.
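A classic example (not from the original thread) is brute-force traveling salesman: trying all orderings of the cities. A sketch with a made-up distance matrix:

    import itertools

    def brute_force_tsp(dist):
        # Try every ordering of cities 1..n-1 with city 0 fixed as the start:
        # (n-1)! tours, each scored in O(n), so O(n!) work overall.
        n = len(dist)
        best_tour, best_cost = None, float('inf')
        for perm in itertools.permutations(range(1, n)):
            tour = (0,) + perm + (0,)
            cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
            if cost < best_cost:
                best_tour, best_cost = tour, cost
        return best_tour, best_cost

    # Hypothetical symmetric distance matrix for 4 cities.
    dist = [[0, 2, 9, 10],
            [2, 0, 6, 4],
            [9, 6, 0, 8],
            [10, 4, 8, 0]]
    print(brute_force_tsp(dist))  # ((0, 1, 3, 2, 0), 23)

Another textbook O(n!) case is generating all permutations of a list, where the output itself has n! entries.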

Did you apply computational complexity theory in real life?

╄→гoц情女王★ submitted on 2019-11-28 18:59:12
Question: I'm taking a course in computational complexity and have so far had the impression that it won't be of much help to a developer. I might be wrong, but if you have gone down this path before, could you please provide an example of how complexity theory helped you in your work? Tons of thanks.

Answer 1: O(1): plain code without loops; it just flows through. Lookups in a lookup table are O(1), too. O(log(n)): efficiently optimized algorithms, for example binary tree operations and binary search; these usually work by halving the problem at each step.
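A quick illustration of the O(log n) bucket (a sketch, not from the original answer): membership testing in a sorted list via Python's bisect discards half the range on every probe:

    from bisect import bisect_left

    def contains(sorted_values, target):
        # O(log n): every probe halves the remaining range, so a million
        # elements need only about 20 comparisons.
        i = bisect_left(sorted_values, target)
        return i < len(sorted_values) and sorted_values[i] == target

    values = list(range(0, 2_000_000, 2))  # one million sorted even numbers
    print(contains(values, 1_337_420))     # True
    print(contains(values, 1_337_421))     # False -- odd, so absent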

How do backreferences in regexes make backtracking required?

喜夏-厌秋 submitted on 2019-11-28 18:48:13
Question: I read http://swtch.com/~rsc/regexp/regexp1.html, in which the author says that in order to support backreferences in regexes, one needs backtracking when matching, and that makes the worst-case complexity exponential. But I don't see exactly why backreferences introduce the need for backtracking. Can someone explain why, and perhaps provide an example (regex and input)?

Answer 1: To get directly at your question, you should make a short study of the Chomsky hierarchy. This is an old and beautiful way of organizing formal languages into sets of increasing complexity. The lowest rung of the hierarchy is the regular languages, which are exactly the languages that regular expressions without backreferences describe, and which a finite automaton can match in linear time.
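A small demonstration of why a backreference forces the engine to search over possible capture lengths (standard library re; the input is chosen for illustration):

    import re

    # (\w+)\1 matches a doubled word ww -- a language that is not regular,
    # so no finite automaton can match it: the matcher must remember
    # exactly what group 1 captured. The engine therefore *guesses* where
    # the capture ends: the greedy first guess takes all of 'abab',
    # leaving nothing for \1, so it backtracks through shorter captures
    # until 'ab' + 'ab' succeeds.
    m = re.match(r'(\w+)\1$', 'abab')
    print(m.group(1))  # 'ab' -- found only after backtracking

On near-miss inputs, a pattern with several such groups can force the engine to try exponentially many combinations of capture lengths, which is the worst case the article describes.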

Learning efficient algorithms

≡放荡痞女 submitted on 2019-11-28 18:09:23
Question: Up until now I've mostly concentrated on how to properly design code and make it as readable and maintainable as possible, so I have always chosen to learn about the higher-level details of programming, such as class interactions, API design, etc. Algorithms I never really found particularly interesting. As a result, even though I can come up with a good design for my programs, and even though I can come up with a solution to a given problem, it is rarely the most efficient one. Is there a good way to learn how to design efficient algorithms?

Why does accessing an element in an array take constant time?

≡放荡痞女 submitted on 2019-11-28 17:25:23
Question: Let's say I have an array: int a[]={4,5,7,10,2,3,6}. When I access an element such as a[3], what actually happens behind the scenes? Why do many algorithm books (such as the Cormen book) say that it takes constant time? (I'm just a noob in low-level programming, so I would like to learn more from you guys.)

Answer 1: Just to be complete, "what structure is accessed in linear time?" A linked list is accessed in linear time: to get the nth element you have to travel through the n-1 nodes before it. An array, by contrast, occupies one contiguous block of memory, so the address of a[i] is the base address plus i times the element size: one multiplication and one addition, regardless of i.
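A sketch of that address arithmetic using Python's struct over a contiguous buffer (the buffer stands in for the C array; the helper name is illustrative):

    import struct

    # Pack 7 machine ints contiguously, like `int a[] = {4,5,7,10,2,3,6}`.
    buf = struct.pack('7i', 4, 5, 7, 10, 2, 3, 6)
    ELEM_SIZE = struct.calcsize('i')  # typically 4 bytes

    def element_at(index):
        # Constant time regardless of index: one multiply to get the byte
        # offset, then a single fixed-size read at base + offset.
        return struct.unpack_from('i', buf, index * ELEM_SIZE)[0]

    print(element_at(3))  # 10 -- same cost as element_at(0) or element_at(6)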