complexity-theory

C Directed Graph Implementation Choice

蹲街弑〆低调 submitted on 2019-12-03 13:25:33
Welcome, mon amie. For some homework of mine, I need to use the Graph ADT. However, I'd like it to be, how do I say, generic. That is to say, I want to store in it whatever I fancy. The issue I'm facing has to do with complexity: what data structure should I use to represent the set of nodes? I forgot to say that I have already decided to use the adjacency-list technique. Textbooks generally mention a linked list, but it is my understanding that wherever a linked list is useful and we need to perform searches, a tree is better. But then again, what we need is to associate a node

Partition Problems Brute Force Algorithm

…衆ロ難τιáo~ submitted on 2019-12-03 13:03:53
Question: I am trying to write pseudocode for a brute-force solution to the partition problem below. Given a set of integers X and an integer k (k > 1), find k subsets of X such that the numbers in each subset sum to the same amount and no two subsets have an element in common, or conclude that no such k subsets exist. The problem is NP-complete. For example, with X = {2, 5, 4, 9, 1, 7, 6, 8} and k = 3, a possible solution would be {2, 5, 7}, {4, 9, 1}, {6, 8}, because all of them sum to 14. For an exhaustive search I

In-order traversal complexity in a binary search tree (using iterators)?

泄露秘密 submitted on 2019-12-03 12:06:29
Question: Related question: Time Complexity of InOrder Tree Traversal of Binary Tree O(N)?; however, that one is based on a traversal via recursion (so in O(log N) space), while iterators allow a traversal consuming only O(1) space. In C++, there is normally a requirement that incrementing an iterator of a standard container be an O(1) operation. With most containers it's trivially proved, but with map and the like it seems a little more difficult. If a map were implemented as a skip list, then the result would be

Uses of Ackermann function?

可紊 submitted on 2019-12-03 11:53:17
In our discrete mathematics course at my university, the teacher shows his students the Ackermann function and assigns them to develop the function on paper. Besides being a benchmark for recursion optimisation, does the Ackermann function have any real uses? Yes. The (inverse) Ackermann function appears in the complexity analysis of algorithms. When it does, it means you can almost ignore that term, since it grows so slowly (much like log(log(...log(n)...)), i.e. lg*(n)). For example: minimum spanning trees (also here) and disjoint-set forest construction. Also: Davenport-Schinzel sequences

Unexpected complexity of common methods (size) in Java Collections Framework?

妖精的绣舞 submitted on 2019-12-03 11:41:37
Recently, I've been surprised by the fact that some Java collections don't have a constant-time size() method. While I learned that concurrent implementations of collections make some compromises as a trade-off for gains in concurrency (size being O(n) in ConcurrentLinkedQueue, ConcurrentSkipListSet, LinkedTransferQueue, etc.), the good news is that this is properly documented in the API documentation. What concerned me is the performance of the size method on views returned by some collections' methods. For example, TreeSet.tailSet returns a view of the portion of the backing set whose elements are

How to improve Cyclomatic Complexity?

試著忘記壹切 submitted on 2019-12-03 11:04:07
Cyclomatic complexity will be high for methods with a large number of decision statements, including if/while/for statements. So how do we improve on it? I am handling a big project where I am supposed to reduce the CC of methods that have CC > 10, and there are many methods with this problem. Below I will list some examples of code patterns (not the actual code) with the problems I have encountered. Is it possible that they can be simplified? Example of cases resulting in many decision statements: Case 1) if (objectA != null) // objectA is passed in as a parameter { objectB = doThisMethod(); if

NP-hard problems that are not NP-complete are harder?

你说的曾经没有我的故事 submitted on 2019-12-03 10:44:51
Question: From my understanding, all NP-complete problems are NP-hard, but some NP-hard problems are known not to be NP-complete, and NP-hard problems are at least as hard as NP-complete problems. Does that mean NP-hard problems that are not NP-complete are harder? And how are they harder? Answer 1: To answer this question, you first need to understand which NP-hard problems are also NP-complete. If an NP-hard problem belongs to the set NP, then it is NP-complete. To belong to the set NP, a problem needs to be (i) a

Understanding How Many Times Nested Loops Will Run

与世无争的帅哥 submitted on 2019-12-03 10:03:30
Question: I am trying to understand how many times the statement x = x + 1 is executed in the code below, as a function of n: for (i=1; i<=n; i++) for (j=1; j<=i; j++) for (k=1; k<=j; k++) x = x + 1; If I am not wrong, the first loop is executed n times and the second one n(n+1)/2 times, but on the third loop I get lost. That is, I can count how many times it will be executed, but I can't seem to find the formula or explain it in mathematical terms. Can you? By the way, this is not homework

Data structure for O(log N) find and update, considering small L1 cache

ε祈祈猫儿з submitted on 2019-12-03 09:58:54
I'm currently working on an embedded-device project where I'm running into performance problems. Profiling has located an O(N) operation that I'd like to eliminate. I basically have two arrays, int A[N] and short B[N]. Entries in A are unique and ordered by external constraints. The most common operation is to check whether a particular value a appears in A[]. Less frequent, but still common, is a change to an element of A[]; the new value is unrelated to the previous one. Since the most common operation is the find, that's where B[] comes in. It's a sorted array of indices into A[], such that

Why is the complexity of A* exponential in memory?

旧街凉风 submitted on 2019-12-03 09:36:55
Question: Wikipedia says the following about A*'s complexity (link here): "More problematic than its time complexity is A*'s memory usage. In the worst case, it must also remember an exponential number of nodes." I fail to see how this is correct, because: say we explore node A, with successors B, C, and D. Then we add B, C, and D to the list of open nodes, each accompanied by a reference to A, and we move A from the open nodes to the closed nodes. If at some point we find another path to B (say, via Q), that is