complexity-theory

Total number of possible triangles from n numbers

吃可爱长大的小学妹 submitted on 2019-11-28 03:59:39
If n numbers are given, how would I find the total number of possible triangles? Is there any method that does this in less than O(n^3) time? I am considering the conditions a+b>c, b+c>a, and a+c>b for being a triangle. Assume there are no equal numbers among the given n, and that a number may be used more than once. For example, given the numbers {1, 2, 3}, we can create 7 triangles: (1,1,1), (1,2,2), (1,3,3), (2,2,2), (2,2,3), (2,3,3), (3,3,3). If any of those assumptions isn't true, it's easy to modify the algorithm. Here I present an algorithm which takes O(n^2) time in the worst case: sort the numbers (ascending order). We will
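The excerpt cuts off before the algorithm itself. Below is one common O(n^2) sketch consistent with the assumptions stated (sorted input, distinct values, repetition allowed): fix the largest side, then use two pointers. The function name and pointer details are my own illustration, not necessarily the answerer's exact method.

```python
def count_triangles(nums):
    """Count multisets {a, b, c} drawn from nums (repetition allowed)
    that satisfy the triangle inequality. O(n^2) after the sort."""
    a = sorted(nums)
    total = 0
    for k in range(len(a)):      # a[k] plays the largest side c
        lo, hi = 0, k            # candidate pair indices i <= j <= k
        while lo <= hi:
            if a[lo] + a[hi] > a[k]:
                # every i in [lo, hi] paired with j = hi also works,
                # since the array is sorted ascending
                total += hi - lo + 1
                hi -= 1
            else:
                lo += 1
    return total

print(count_triangles([1, 2, 3]))  # -> 7, matching the example above
```

Only the strict inequality a+b>c needs checking once c is the largest side; the other two inequalities hold automatically.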

how does IF affect complexity?

寵の児 submitted on 2019-11-28 03:50:45
Let's say we have an array of 1,000,000 elements and we go through all of them to check something simple, for example whether the first character is "A". From my (very little) understanding, the complexity will be O(n) and it will take some amount of time X. If I add another IF (not else-if) to check, let's say, whether the last character is "G", how will that change the complexity? Will it double the complexity and the time, like O(2n) and 2X? I would like to avoid taking into consideration the number of
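A minimal sketch of the situation the question describes (the function and sample strings are my own illustration): each extra constant-time check adds a constant amount of work per element, so the loop does roughly 2n steps instead of n. The wall-clock time may roughly double, but 2n is still O(n), because big-O ignores constant factors.

```python
def scan(words):
    """One pass over n strings; a second constant-time check keeps the
    total work at c*n steps, i.e. still O(n)."""
    starts_a = 0
    ends_g = 0
    for w in words:            # n iterations either way
        if w[0] == "A":        # first check: O(1) per element
            starts_a += 1
        if w[-1] == "G":       # second check: another O(1), ~2n steps total
            ends_g += 1
    return starts_a, ends_g

print(scan(["Apple", "AnalyG", "strinG"]))  # -> (2, 2)
```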

Linear time v.s. Quadratic time

冷暖自知 submitted on 2019-11-28 03:50:44
Often, some of the answers mention that a given solution is linear, or that another one is quadratic. How do you tell the difference / identify which is which? Can someone explain this in the easiest possible way, for those like me who still don't know? Jblasco A method is linear when the time it takes increases linearly with the number of elements involved. For example, a for loop which prints the elements of an array is roughly linear: for x in range(10): print(x) because if we print range(100) instead of range(10), the time it takes to run is 10 times longer. You will see very often that
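To make the contrast concrete (these counting functions are my own illustration, not from the answer): counting the unit operations of a single pass versus a nested pass shows the two growth rates directly.

```python
def linear_ops(n):
    """Unit operations of a single pass: grows as n."""
    ops = 0
    for _ in range(n):
        ops += 1
    return ops

def quadratic_ops(n):
    """Unit operations of a nested pass: grows as n**2."""
    ops = 0
    for _ in range(n):
        for _ in range(n):
            ops += 1
    return ops

# Growing the input 10x costs 10x for linear but 100x for quadratic:
print(linear_ops(100), quadratic_ops(100))    # 100 10000
print(linear_ops(1000), quadratic_ops(1000))  # 1000 1000000
```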

Is there such a thing as “negative” big-O complexity? [duplicate]

两盒软妹~` submitted on 2019-11-28 03:32:39
Possible Duplicate: Are there any O(1/n) algorithms? This just popped into my head for no particular reason, and I suppose it's a strange question. Are there any known algorithms or problems which actually get easier or faster to solve with larger input? I'm guessing that if there are, it wouldn't be for things like mutations or sorting; it would be for decision problems. Perhaps there's some problem where having a ton of input makes it easy to decide something, but I can't imagine what. If

What is O(log* N)?

浪子不回头ぞ submitted on 2019-11-28 03:18:24
What is O(log* N)? I know big-O; the log* is unknown. O(log* N) is "iterated logarithm": in computer science, the iterated logarithm of n, written log* n (usually read "log star"), is the number of times the logarithm function must be iteratively applied before the result is less than or equal to 1. The log* N term is an iterated logarithm which grows very slowly, much slower than just log N. You basically just keep iteratively 'logging' the answer until it gets below one (e.g. log(log(log(...log(N)...)))), and the number of times you had to apply log() is the answer. Anyway, this is a five-year
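The definition above translates directly into a few lines of code (base 2 assumed here; the function name is my own):

```python
import math

def log_star(n):
    """Iterated logarithm (base 2): how many times log2 must be
    applied before the result drops to <= 1."""
    count = 0
    x = float(n)
    while x > 1.0:
        x = math.log2(x)
        count += 1
    return count

# 65536 -> 16 -> 4 -> 2 -> 1, i.e. four applications of log2:
print(log_star(65536))  # -> 4
```

To appreciate how slowly it grows: log*(2) = 1, log*(16) = 3, log*(65536) = 4, and you would need n = 2^65536 to reach 5.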

Intuitive explanation for why QuickSort is n log n?

大城市里の小女人 submitted on 2019-11-28 03:11:56
Is anybody able to give a 'plain English', intuitive yet formal explanation of what makes QuickSort n log n? From my understanding it has to make a pass over n items, and it does this log n times... I'm not sure how to put into words why it does this log n times. Each partitioning operation takes O(n) operations (one pass over the array). On average, each partitioning divides the array into two roughly equal parts, which yields about log n levels of recursion. In total we have O(n * log n) operations: on average, log n levels of partitioning, and each level takes O(n) operations. Complexity A Quicksort
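The two ingredients named in the answer, an O(n) partition pass and ~log n levels of halving, can be seen in a short (non-in-place) quicksort sketch; this list-comprehension variant is my own illustration, chosen for readability over efficiency:

```python
import random

def quicksort(a):
    """Average case O(n log n): each call does O(n) partitioning work,
    and random pivots split the array into ~log n recursion levels."""
    if len(a) <= 1:
        return a
    pivot = random.choice(a)
    left  = [x for x in a if x < pivot]   # one O(n) pass per level...
    mid   = [x for x in a if x == pivot]
    right = [x for x in a if x > pivot]
    # ...and each half is about n/2 long, so the depth is ~log2(n)
    return quicksort(left) + mid + quicksort(right)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # -> [1, 1, 2, 3, 4, 5, 6, 9]
```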

What does “constant” complexity really mean? Time? Count of copies/moves? [closed]

天涯浪子 submitted on 2019-11-28 03:03:51
I can think of three operations in C++ that can be described in some sense as having 'constant' complexity. I've seen some debate(*) over what this means, and it seems to me that we could just say "all these operations are constant, but some are more constant than others" :-) (Edit 2: If you already think you know the answer, please read some of the debate at this question before rushing in too soon: What data structure, exactly, are deques in C++? Many people, with quite high scores, are

O(N log N) Complexity - Similar to linear?

与世无争的帅哥 submitted on 2019-11-28 02:58:48
So I think I'm going to get buried for asking such a trivial question, but I'm a little confused about something. I have implemented quicksort in Java and C and I was doing some basic comparisons. The graph came out as two straight lines, with the C version being 4 ms faster than the Java counterpart over 100,000 random integers. The code for my tests can be found here: android-benchmarks. I wasn't sure what an (n log n) line would look like, but I didn't think it would be straight. I just wanted to check that this is the expected result and that I shouldn't try to find an error in my code. I stuck the
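A quick numeric sketch of why an n log n curve looks straight over a benchmark-sized range (the print loop is my own illustration): log2(n) is the per-element cost factor of an O(n log n) algorithm, and growing n by 100x only grows log2(n) by about 1.7x, so the curve is very close to a straight line.

```python
import math

# per-element cost factor log2(n) barely changes across the range:
for n in (1_000, 10_000, 100_000):
    print(f"n = {n:>7,}  log2(n) = {math.log2(n):.1f}")
# n =   1,000  log2(n) = 10.0
# n =  10,000  log2(n) = 13.3
# n = 100,000  log2(n) = 16.6
```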

Why is the knapsack problem pseudo-polynomial?

走远了吗. submitted on 2019-11-28 02:52:35
I know that Knapsack is NP-complete while it can be solved by DP. They say that the DP solution is pseudo-polynomial, since it is exponential in the "length of input" (i.e. the number of bits required to encode the input). Unfortunately I did not get it. Can anybody explain that pseudo-polynomial thing to me slowly? marcog The running time is O(NW) for an unbounded knapsack problem with N items and a knapsack of size W. W is not polynomial in the length of the input, though, which is what makes it pseudo-polynomial. Consider W = 1,000,000,000,000. It only takes 40 bits to represent this
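For reference, here is one standard O(N*W) dynamic program for the unbounded knapsack the answer refers to (the function and table layout are my own sketch): the table has W+1 entries, so the work scales with the *value* of W, not with the ~log2(W) bits needed to write it down.

```python
def unbounded_knapsack(weights, values, W):
    """O(N*W) DP: best[w] = max value achievable with capacity w,
    each item usable any number of times. Pseudo-polynomial because
    the table width W is exponential in W's bit length."""
    best = [0] * (W + 1)
    for w in range(1, W + 1):
        for wt, val in zip(weights, values):
            if wt <= w:
                best[w] = max(best[w], best[w - wt] + val)
    return best[W]

# capacity 7, items (weight 2, value 3) and (weight 3, value 4):
print(unbounded_knapsack([2, 3], [3, 4], 7))  # -> 10
```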

How to understand the knapsack problem is NP-complete?

纵饮孤独 submitted on 2019-11-28 02:48:59
We know that the knapsack problem can be solved in O(nW) complexity by dynamic programming. But we say it is an NP-complete problem. I find this hard to understand. (n is the number of items. W is the maximum volume.) Giuseppe Cardone O(n*W) looks like polynomial time, but it is not; it is pseudo-polynomial. Time complexity measures the time that an algorithm takes as a function of the length in bits of its input. The dynamic programming solution is indeed linear in the value of W, but exponential in the length of W — and that's what matters! More precisely, the time complexity of
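A quick way to see the bit-length point numerically (nothing here beyond the numbers already in the text): each extra bit in W doubles the value of W, and with it the width of the O(nW) DP table, so the running time is exponential in W's *encoded length*.

```python
# Each added bit doubles the largest representable W, and hence the
# number of DP table columns an O(n*W) algorithm must fill:
for bits in (10, 20, 30, 40):
    W = (1 << bits) - 1
    print(f"{bits:2d}-bit W  ->  table width up to {W:,}")
```

With 40 bits, W can already exceed 10^12, matching the W = 1,000,000,000,000 example in the previous answer.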