complexity-theory

How much to log within an application and how much is too much?

Submitted by 随声附和 on 2019-12-09 11:52:50
Question: Just wondering how much people log within their applications? I have seen this: "I typically like to use the ERROR log level to log any exceptions that are caught by the application. I will use the INFO log level as a 'first level' debugging scheme to show whenever I enter or exit a method. From there I use the DEBUG log level to trace detailed information. The FATAL log level is used for any exceptions that I have failed to catch in my web-based applications." Which had this code sample
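The code sample referred to above is cut off. As a hedged illustration of the level scheme the quote describes (ERROR for caught exceptions, INFO for method entry/exit, DEBUG for detailed tracing), here is a minimal Python `logging` sketch; the function and logger names are invented for the example:

```python
import logging

logging.basicConfig(level=logging.DEBUG,
                    format="%(levelname)s %(name)s: %(message)s")
log = logging.getLogger("orders")  # hypothetical module name

def process_order(order_id):
    log.info("enter process_order(%s)", order_id)      # "first level" tracing
    try:
        log.debug("looking up order %s", order_id)     # detailed trace
        if order_id < 0:
            raise ValueError("invalid order id")
        return "ok"
    except ValueError:
        # a caught exception goes to ERROR, with the traceback attached
        log.error("failed to process order %s", order_id, exc_info=True)
        return "error"
    finally:
        log.info("exit process_order(%s)", order_id)
```

Whether entry/exit logging at INFO is too chatty is exactly the judgment call the question is about; many codebases push it down to DEBUG.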

C Directed Graph Implementation Choice

Submitted by 北战南征 on 2019-12-09 11:10:01
Question: Welcome, mon amie. In some homework of mine, I feel the need to use the Graph ADT. However, I'd like to have it, how do I say, generic. That is to say, I want to store in it whatever I fancy. The issue I'm facing has to do with complexity. What data structure should I use to represent the set of nodes? I forgot to say that I have already decided to use the adjacency list technique. Generally, textbooks mention a linked list, but, it is to my understanding that whenever a linked list is useful and
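The question is asked about C, but the data-structure choice is language-independent: a hash table from node to neighbor set gives a generic adjacency list with O(1) expected insert and membership lookup, versus O(degree) scans for a linked list. A minimal sketch in Python (where dict/set play the role of the hash table one would hand-roll in C):

```python
class Graph:
    """Directed graph as an adjacency list: node -> set of successors.

    Nodes can be any hashable value, which gives the "generic" storage
    asked about; dict and set operations are O(1) on average.
    """
    def __init__(self):
        self.adj = {}

    def add_node(self, u):
        self.adj.setdefault(u, set())

    def add_edge(self, u, v):
        self.add_node(u)
        self.add_node(v)
        self.adj[u].add(v)

    def neighbors(self, u):
        return self.adj[u]
```

In C the analogous choice is a hash table (or a dynamic array indexed by node id) of edge lists; the linked-list-only textbook version trades that constant-time lookup away.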

Unexpected complexity of common methods (size) in Java Collections Framework?

Submitted by 此生再无相见时 on 2019-12-09 08:23:41
Question: Recently, I've been surprised by the fact that some Java collections don't have a constant-time size() operation. While I learned that concurrent implementations of collections made some compromises as a tradeoff for gains in concurrency (size being O(n) in ConcurrentLinkedQueue, ConcurrentSkipListSet, LinkedTransferQueue, etc.), the good news is that this is properly documented in the API documentation. What concerned me is the performance of the size method on views returned by some collections'

convex hull algorithm for 3d surface z = f(x, y)

Submitted by 好久不见. on 2019-12-09 06:59:38
Question: I have a 3D surface given as a set of triples (x_i, y_i, z_i), where x_i and y_i are roughly on a grid, and each (x_i, y_i) has a single associated z_i value. The typical grid is 20x20. I need to find which points belong to the convex hull of the surface, within a given tolerance. I'm looking for an efficient algorithm to perform the computation (my customer has provided an O(n³) version, which takes ~10s on a 400-point dataset...) Answer 1: There's quite a lot out there, didn't you search? Here
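The answer above is cut off, so the recommended algorithm is unknown. For orientation only: the standard O(n log n) building block is sort-then-scan hull construction. A sketch of Andrew's monotone chain for the 2D case (the 3D surface problem would use a 3D hull algorithm such as quickhull, but the 2D version shows why O(n³) is far from necessary):

```python
def cross(o, a, b):
    """Cross product of vectors o->a and o->b; sign gives the turn direction."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain: O(n log n), returns the hull counter-clockwise."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                      # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # endpoints are shared, drop duplicates
```

The questioner's tolerance requirement would be handled on top of this, e.g. by also keeping points within the given distance of a hull facet.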

What are the consequences of saying a non-deterministic Turing Machine can solve NP in polynomial time?

Submitted by 余生颓废 on 2019-12-09 04:44:40
Question: These days I have been studying NP problems, computational complexity, and theory. I believe I have finally grasped the concept of a Turing machine, but I have a couple of doubts. I can accept that a non-deterministic Turing machine has several options of what to do for a given state and symbol being read, and that it will always pick the best option, as stated by Wikipedia: How does the NTM "know" which of these actions it should take? There are two ways of looking at it. One is to say
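The second way of looking at it (the quote is cut off before it) is that the machine branches into every option and accepts if any branch accepts. That view is easy to make concrete: a deterministic simulator just tries all branches, paying exponential time for what the NTM gets "for free". A simplified sketch, reduced from a full tape machine to NFA-style state transitions for brevity:

```python
def ntm_accepts(state, word, transitions, accept_states, depth=50):
    """Accept iff SOME sequence of nondeterministic choices accepts.

    `transitions` maps (state, symbol) -> list of possible next states.
    The NTM "magically picks the right branch"; this deterministic
    simulation simply explores them all (exponential in the worst case).
    Simplified to a finite-state machine, not a full Turing machine.
    """
    if not word:
        return state in accept_states
    if depth == 0:
        return False
    return any(
        ntm_accepts(nxt, word[1:], transitions, accept_states, depth - 1)
        for nxt in transitions.get((state, word[0]), [])
    )
```

This is exactly why "NTM solves NP in polynomial time" does not imply a fast real-world algorithm: the known deterministic simulations blow up exponentially.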

Finding 'bottleneck edges' in a graph

Submitted by 落花浮王杯 on 2019-12-09 03:11:19
Question: Given a random undirected graph, I must find 'bottleneck edges' to get from one vertex to another. What I call 'bottleneck edges' (there must be a better name for that!) -- suppose I have the following undirected graph:

        A
      / | \
     B--C--D
     |     |
     E--F--G
      \ | /
        H

To get from A to H, independently of the chosen path, edges BE and DG must always be traversed, therefore making a 'bottleneck'. Is there a polynomial-time algorithm for this? Edit: yes, the name is 'minimum cut' for what I meant, which
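As the edit notes, this is a minimum s-t cut, solvable in polynomial time via max-flow (max-flow min-cut theorem). A sketch using Edmonds-Karp (BFS augmenting paths) on unit capacities, followed by a residual reachability pass to read off the cut edges:

```python
from collections import deque

def min_cut_edges(edges, s, t):
    """Minimum s-t cut of an undirected unit-capacity graph (Edmonds-Karp)."""
    cap, adj = {}, {}
    for u, v in edges:
        cap[(u, v)] = cap.get((u, v), 0) + 1
        cap[(v, u)] = cap.get((v, u), 0) + 1
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    while True:
        parent = {s: None}                 # BFS for an augmenting path
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:                # no augmenting path: flow is maximal
            break
        v = t                              # push one unit of flow back along it
        while parent[v] is not None:
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
    seen, q = {s}, deque([s])              # residual reachability from s
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in seen and cap[(u, v)] > 0:
                seen.add(v)
                q.append(v)
    return {(u, v) for u in seen for v in adj[u] if v not in seen}
```

On the example graph this returns exactly {BE, DG}, the two edges the questioner identified.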

Q: What is Big-O complexity of random.choice(list) in Python3

Submitted by 巧了我就是萌 on 2019-12-08 19:53:30
Question: What is the Big-O complexity of random.choice(list) in Python 3, where n is the number of elements in the list? Edit: Thank you all for giving me the answer; now I understand. Answer 1: O(1). Or to be more precise, it's equivalent to the big-O random-access time for looking up a single index in whatever sequence you pass it, and list has O(1) random-access indexing (as does tuple). Simplified, all it does is seq[random.randrange(len(seq))], which is obviously equivalent to a single index lookup operation.
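The answer's one-liner can be written out as a sketch, which makes the O(1) claim for lists and tuples transparent:

```python
import random

def choice_sketch(seq):
    # Essentially what random.choice does: draw one random index,
    # then perform a single seq[i] lookup. For list and tuple that
    # lookup is O(1), so the whole call is O(1).
    return seq[random.randrange(len(seq))]

picked = choice_sketch([10, 20, 30, 40])
```

The flip side, implied by the answer: pass a sequence with non-constant indexing and the cost follows the indexing, so random.choice is only as fast as `seq[i]`.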

How fast is Data.Array?

Submitted by 丶灬走出姿态 on 2019-12-08 17:40:17
Question: The documentation of Data.Array reads: "Haskell provides indexable arrays, which may be thought of as functions whose domains are isomorphic to contiguous subsets of the integers. Functions restricted in this way can be implemented efficiently; in particular, a programmer may reasonably expect rapid access to the components." I wonder how fast (!) and (//) can be. Can I expect O(1) complexity from these, as I would from their imperative counterparts? Answer 1: In general, yes, you should be

time and space complexity of finding combination (nCr)

Submitted by 谁都会走 on 2019-12-08 11:53:56
Question: What's the worst-case time and space complexity of different algorithms to find combinations, i.e., nCr? Which algorithm is the best known solution in terms of time/space complexity? Answer 1: O(n!) is the time complexity to generate all combinations one by one. To find how many combinations there are, we can use this formula: nCr = n! / ( r! * (n-r)! ) As @beaker mentioned, this count can be calculated in O(1) time (i.e., constant time). Source: https://stackoverflow.com/questions/31979545/time-and-space
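For the counting part, the factorial formula is rarely computed literally; the standard trick evaluates nCr in O(r) multiplications with only exact integer arithmetic. A sketch (Python 3.8+ ships this as math.comb):

```python
def ncr(n, r):
    """nCr = n! / (r! * (n-r)!), computed iteratively in O(r) time
    without forming the huge intermediate factorials."""
    if r < 0 or r > n:
        return 0
    r = min(r, n - r)                       # nCr == nC(n-r), keep r small
    result = 1
    for i in range(1, r + 1):
        result = result * (n - r + i) // i  # stays an exact integer each step
    return result
```

The intermediate division is always exact because any i consecutive integers contain a multiple of i, so no floating point is needed.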

Why is the Inverse Ackermann function used to describe complexity of Kruskal's algorithm?

Submitted by 妖精的绣舞 on 2019-12-08 08:57:01
Question: In a class on analysis of algorithms, we are presented with this pseudocode for Kruskal's algorithm: He then states the following, for disjoint-set forests: "A sequence of m MAKE-SET, UNION, and FIND-SET operations, n of which are MAKE-SET operations, can be performed on a disjoint-set forest with union by rank and path compression in worst-case time O(m α(n))." Used to compute the complexity of Step 2 and Steps 5-8. For connected G: |E| ≥ |V| - 1; m = O(V + E), n = O(V); so Steps 2, 5-8: O((V
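The quoted bound is about the disjoint-set forest Kruskal uses to test whether an edge joins two different components. A sketch of that structure with the two optimizations the bound requires (union by rank, and path halving as the compression variant), which is where the inverse Ackermann factor α(n) comes from:

```python
class DisjointSet:
    """Disjoint-set forest with union by rank and path compression.
    m operations over n elements cost O(m * alpha(n)) in the worst
    case -- effectively constant amortized time per operation."""
    def __init__(self):
        self.parent = {}
        self.rank = {}

    def make_set(self, x):
        self.parent.setdefault(x, x)
        self.rank.setdefault(x, 0)

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return False                   # same tree: edge would form a cycle
        if self.rank[rx] < self.rank[ry]:  # attach shorter tree under taller
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
        return True
```

In Kruskal's loop, each edge (u, v) is kept exactly when union(u, v) returns True, and the α(n) term multiplies the O(E) edge examinations.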