complexity-theory

Why is the Big-O complexity of this algorithm O(n^2)?

断了今生、忘了曾经 submitted on 2019-12-02 23:15:12
I know the big-O complexity of this algorithm is O(n^2), but I cannot understand why.

int sum = 0;
int i = 1;
int j = n * n;
while (i++ < j--)
    sum++;

Even though we set j = n * n at the beginning, we increment i and decrement j during each iteration, so shouldn't the resulting number of iterations be a lot less than n*n?

During every iteration you increment i and decrement j, which is equivalent to just incrementing i by 2. Therefore, the total number of iterations is n^2 / 2, and that is still O(n^2).

Ben Rubin: Big-O complexity ignores coefficients. For example, O(n), O(2n), and O(1000n) are all
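A minimal sketch (assuming a concrete n such as 1000, which is not part of the question) that runs the loop and prints the iteration count, confirming the answer's n^2 / 2 figure:

#include <iostream>

int main() {
    const long long n = 1000;          // assumed input size, for illustration only
    long long sum = 0;                 // counts loop iterations
    long long i = 1, j = n * n;
    while (i++ < j--)                  // the gap between i and j shrinks by 2 per iteration
        sum++;
    std::cout << "iterations: " << sum << ", n*n/2 = " << n * n / 2 << "\n";
    return 0;
}

Running it prints 500000 for both values, i.e. half of n*n, which is still Theta(n^2) once the constant factor is dropped.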

Combat strategy for ants

风流意气都作罢 submitted on 2019-12-02 23:07:08
This question refers to the Google-sponsored AI Challenge, a contest that happens every few months and in which the contenders need to submit a bot able to autonomously play a game against other robotic players. The competition that just closed was called "ants" and you can read all its specification here, if you are interested. My question is specific to one aspect of ants: combat strategy.

The problem

Given a grid of discrete coordinates [like a chessboard] and given that each player has a number of ants that at each turn can either:
- stay still
- move east / north / west / south,
...an ant

Is O(log n) always faster than O(n)?

穿精又带淫゛_ submitted on 2019-12-02 21:59:39
If there are 2 algorithms that calculate the same result with different complexities, will O(log n) always be faster? If so, please explain. BTW, this is not an assignment question.

No. If one algorithm runs in N/100 and the other one in (log N)*100, then the second one will be slower for smaller input sizes. Asymptotic complexities are about the behavior of the running time as the input sizes go to infinity.

No, it will not always be faster. BUT, as the problem size grows larger and larger, eventually you will always reach a point where the O(log n) algorithm is faster than the O(n) one. In
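A small sketch of the crossover described in that first answer, using its illustrative cost functions N/100 and 100*log2(N) (the concrete sizes printed below are my own choice):

#include <cmath>
#include <cstdio>

int main() {
    // f(N) = N / 100       : an O(N) algorithm with a tiny constant factor
    // g(N) = 100 * log2(N) : an O(log N) algorithm with a large constant factor
    for (long long N = 10; N <= 10000000LL; N *= 10) {
        double f = N / 100.0;
        double g = 100.0 * std::log2((double)N);
        std::printf("N=%-10lld  N/100=%-10.1f  100*log2(N)=%-10.1f  faster: %s\n",
                    N, f, g, f < g ? "O(N)" : "O(log N)");
    }
    return 0;
}

For small N the O(N) algorithm wins; somewhere between N = 10^5 and N = 10^6 the O(log N) algorithm overtakes it and stays ahead from then on.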

Finding the closest Fibonacci numbers

↘锁芯ラ submitted on 2019-12-02 20:44:31
I am trying to solve a bigger problem, and I think that an important part of the program is spent on inefficient computations. I need to compute, for a given number N, the interval [P, Q], where P is the biggest Fibonacci number that is <= N, and Q is the smallest Fibonacci number that is >= N. Currently, I am using a map to record the values of the Fibonacci numbers. A query normally involves searching all the Fibonacci numbers up to N, and it is not very time efficient, as it involves a large number of comparisons. This type of query will occur quite often in my program, and I am
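One straightforward improvement, sketched below under the assumption that N fits in a 64-bit integer: generate consecutive Fibonacci numbers on the fly until one reaches N. Because Fibonacci numbers grow exponentially, this takes only O(log N) additions per query, with no map or search needed.

#include <cstdint>
#include <iostream>
#include <utility>

// Returns {P, Q}: P is the largest Fibonacci number <= n,
// Q is the smallest Fibonacci number >= n (assumes n >= 1).
std::pair<uint64_t, uint64_t> closestFibs(uint64_t n) {
    uint64_t a = 0, b = 1;                 // consecutive Fibonacci numbers (a <= b)
    while (b < n) {
        uint64_t next = a + b;
        a = b;
        b = next;
    }
    return {b == n ? b : a, b};            // if n is itself Fibonacci, P == Q == n
}

int main() {
    for (uint64_t n : {uint64_t(1), uint64_t(10), uint64_t(13), uint64_t(100)}) {
        auto pq = closestFibs(n);
        std::cout << "N=" << n << "  [P, Q] = [" << pq.first << ", " << pq.second << "]\n";
    }
    return 0;
}

For example, N = 10 yields [8, 13], and N = 13 yields [13, 13].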

How to calculate time complexity of backtracking algorithm?

血红的双手。 submitted on 2019-12-02 20:32:10
How to calculate time complexity for these backtracking algorithms, and do they have the same time complexity? If different, how? Kindly explain in detail; thanks for the help.

1. Hamiltonian cycle:

bool hamCycleUtil(bool graph[V][V], int path[], int pos)
{
    /* Base case: if all vertices are included in the Hamiltonian cycle */
    if (pos == V) {
        // ...and if there is an edge from the last included vertex to the
        // first vertex
        if (graph[path[pos - 1]][path[0]] == 1)
            return true;
        else
            return false;
    }

    // Try different vertices as the next candidate in the Hamiltonian cycle.
    // We don't try 0, as we included
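One practical way to get a feel for the cost of a backtracking search like this is to count the nodes of its recursion tree. Below is a hypothetical, stripped-down skeleton (not the code from the question; pruning by graph edges is omitted) with a call counter, showing that a search which extends a path by one unused vertex at a time visits on the order of (V-1)! states in the worst case:

#include <iostream>
#include <vector>

long long calls = 0;   // nodes of the recursion tree visited so far

// Hypothetical skeleton with the same shape as hamCycleUtil above:
// extend a partial path by one unused vertex, recurse, then undo the choice.
void backtrack(std::vector<int>& path, std::vector<bool>& used, int V) {
    ++calls;
    if ((int)path.size() == V) return;     // a complete candidate path
    for (int v = 1; v < V; ++v) {          // vertex 0 is fixed as the start
        if (used[v]) continue;
        used[v] = true;
        path.push_back(v);
        backtrack(path, used, V);
        path.pop_back();                   // backtrack
        used[v] = false;
    }
}

int main() {
    for (int V = 3; V <= 9; ++V) {
        calls = 0;
        std::vector<int> path{0};
        std::vector<bool> used(V, false);
        used[0] = true;
        backtrack(path, used, V);
        std::cout << "V=" << V << "  recursion-tree nodes=" << calls << "\n";
    }
    return 0;
}

The printed counts grow factorially; the edge pruning in the real algorithm cuts many branches, but the worst-case bound remains factorial in V.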

Solving the recurrence relation T(n) = √n T(√n) + n [closed]

£可爱£侵袭症+ submitted on 2019-12-02 18:53:42
Is it possible to solve the recurrence relation T(n) = √n T(√n) + n using the Master Theorem? It is not of the form T(n) = a ⋅ T(n / b) + f(n), but this problem is given as an exercise in CLRS chapter 4.

This cannot be solved by the Master Theorem. However, it can be solved using the recursion-tree method, and it resolves to O(n log log n). The intuition behind this is to notice that at each level of the tree you're doing n work. The top level does n work explicitly. Each of the √n subproblems does √n work, for a net total of n work, etc. So the question now is how deep the recursion tree is. Well,
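A sketch of how that recursion-tree argument can be written out in full (the stopping condition used at the end is a standard device, not part of the excerpt above):

\[
\text{Level } k \text{ has subproblems of size } n^{1/2^{k}}, \text{ and the total work at every level is } n .
\]
\[
\text{The recursion stops when the subproblem size drops to a constant: } n^{1/2^{k}} \le 2
\;\Longleftrightarrow\; 2^{k} \ge \log_{2} n
\;\Longleftrightarrow\; k \ge \log_{2}\log_{2} n .
\]
\[
\text{Hence there are } \Theta(\log\log n) \text{ levels of } n \text{ work each, giving } T(n) = \Theta(n \log\log n).
\]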

Are there public key cryptography algorithms that are provably NP-hard to defeat? [closed]

谁都会走 submitted on 2019-12-02 18:09:05
Should practical quantum computing become a reality, I am wondering if there are any public-key cryptographic algorithms that are based on NP-complete problems, rather than on integer factorization or discrete logarithms.

Edit: Please check out the "Quantum computing in computational complexity theory" section of the Wikipedia article on quantum computers. It points out that the class of problems quantum computers can answer (BQP) is believed to be strictly easier than NP-complete.

Edit 2: 'Based on NP-complete' is a bad way of expressing what I'm interested in. What I intended to ask is for a Public

Complexities and run times

此生再无相见时 submitted on 2019-12-02 17:02:18
Question: I tried looking around to see whether my question had already been answered, but I haven't stumbled upon anything that could help me. When dealing with run-time complexities, do you account for the operands? From my understanding, each different operand can take a different amount of time, so does counting only the loops give you a lower bound? If this is incorrect, can you please explain where my logic is wrong? For example:

for (i = 0; i < n; i++)
    for (j = 0; j < n; j++)
        a[i][j] = b[i][j] + c[i][j];

Would just
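A minimal sketch (with a hypothetical n and made-up array contents) of how that nested loop is normally counted: the body performs a constant amount of work per iteration, and it runs n*n times, so the individual operands only change the constant factor, not the Theta(n^2) growth:

#include <iostream>
#include <vector>

int main() {
    const int n = 4;                                   // hypothetical size, for illustration
    std::vector<std::vector<int>> a(n, std::vector<int>(n)),
                                  b(n, std::vector<int>(n, 1)),
                                  c(n, std::vector<int>(n, 2));
    long long bodyRuns = 0;                            // how many times the loop body executes
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) {
            a[i][j] = b[i][j] + c[i][j];               // constant work: one add, one store
            ++bodyRuns;
        }
    std::cout << "body executed " << bodyRuns << " times; n*n = " << n * n << "\n";
    return 0;
}

Whether the body costs 2 machine operations or 20, the count n*n is what drives the asymptotic bound, so the loop structure here gives both the upper and the lower bound: Theta(n^2).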

The “pattern-filling with tiles” puzzle

一笑奈何 submitted on 2019-12-02 16:48:35
I've encountered an interesting problem while programming a random level generator for a tile-based game. I've implemented a brute-force solver for it, but it is exponentially slow and definitely unfit for my use case. I'm not necessarily looking for a perfect solution; I'll be satisfied with a "good enough" solution that performs well.

Problem statement: Say you have all or a subset of the following tiles available (this is the combination of all possible 4-bit patterns mapped to the right, up, left and down directions):

[base tile set image] http://img189.imageshack.us/img189/3713/basetileset.png

You are

Real-world example of exponential time complexity

百般思念 submitted on 2019-12-02 16:46:00
I'm looking for an intuitive, real-world example of a problem that takes (in the worst case) exponential time to solve, for a talk I am giving. Here are examples for other time complexities I have come up with (many of them taken from this SO question):

O(1) - determining if a number is odd or even
O(log N) - finding a word in the dictionary (using binary search)
O(N) - reading a book
O(N log N) - sorting a deck of playing cards (using merge sort)
O(N^2) - checking if you have everything on your shopping list in your trolley
O(infinity) - tossing a coin until it lands on heads

Any ideas? O
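For a sense of what exponential growth looks like in code, here is a small, hypothetical illustration (my own, not from the truncated answer above): a naive subset-sum check that tries every one of the 2^N subsets of a list, which is the usual brute-force shape behind O(2^N) examples such as trying every combination of items.

#include <iostream>
#include <vector>

long long checks = 0;  // counts how many subsets we examine

// Naive subset-sum: try every one of the 2^N subsets of `items` and
// report whether any of them adds up to `target`.
bool anySubsetSums(const std::vector<int>& items, int target) {
    int N = (int)items.size();
    for (long long mask = 0; mask < (1LL << N); ++mask) {  // 2^N iterations
        ++checks;
        long long sum = 0;
        for (int i = 0; i < N; ++i)
            if (mask & (1LL << i)) sum += items[i];
        if (sum == target) return true;
    }
    return false;
}

int main() {
    std::vector<int> items = {3, 34, 4, 12, 5, 2, 8, 17, 21, 6};
    std::cout << "target reachable: " << anySubsetSums(items, 1000) << "\n";
    std::cout << "subsets examined: " << checks << " (2^" << items.size()
              << " = " << (1LL << items.size()) << ")\n";
    return 0;
}

Adding just one more item doubles the number of subsets examined, which is why brute-force approaches of this shape stop being practical at a few dozen items.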