Is there such a thing as “negative” big-O complexity? [duplicate]

南楼画角 submitted on 2019-11-29 10:07:31

No, that is not possible. Since Big-O is supposed to be an approximation of the number of operations an algorithm performs as a function of its input size, it would not make sense to describe an algorithm as using a negative number of operations.

The formal definition section of the Wikipedia article defines Big-O notation in terms of positive real numbers. So there is not even a proof to give: the whole concept of Big-O has no meaning for negative real numbers under the formal definition.

Short answer: it's not possible because the definition says so.
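For reference, one common way the definition is written for running-time analysis, where the functions involved are non-negative:

    f(n) = O(g(n)) \iff \exists\, c > 0,\ \exists\, n_0 > 0 :\ 0 \le f(n) \le c \cdot g(n) \ \text{ for all } n \ge n_0 .

With g(n) = -1, the right-hand side is negative, so no non-negative running time can satisfy the inequality.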

Nikita Rybak

Update: just to make it clear, I'm answering this part of the question: "Are there any known algorithms or problems which actually get easier or faster to solve with larger input?"

As noted in the accepted answer here, there are no algorithms that work faster with bigger input: Are there any O(1/n) algorithms? Even an algorithm like sleep(1/n) has to spend time reading its input, so its running time has a lower bound.

In particular, the author refers to a relatively simple substring-search algorithm (a short sketch follows the link below):
http://en.wikipedia.org/wiki/Horspool
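For illustration, a minimal Python sketch of the Horspool (Boyer–Moore–Horspool) idea. The point is that longer patterns allow longer shifts, so the average number of comparisons per text character can go down as the pattern grows; the total work still has a lower bound, so it is never O(1/n):

    def horspool_search(text, pattern):
        # Boyer-Moore-Horspool: slide the pattern over the text and, on a
        # mismatch, skip ahead using a "bad character" table. Longer patterns
        # tend to allow longer skips, which is the "faster with more input"
        # effect mentioned above.
        m, n = len(pattern), len(text)
        if m == 0:
            return 0
        # For each pattern character except the last: how far the window may
        # slide when that character is aligned with the pattern's last position.
        shift = {c: m - 1 - i for i, c in enumerate(pattern[:-1])}
        pos = 0
        while pos + m <= n:
            if text[pos:pos + m] == pattern:
                return pos
            pos += shift.get(text[pos + m - 1], m)
        return -1

    print(horspool_search("negative complexity does not exist", "complexity"))  # -> 9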

PS: But using the term 'negative complexity' for such algorithms doesn't seem reasonable to me.

Thinking of an algorithm that executes in negative time is the same as thinking about time going backwards.

If the program starts executing at 10:30 AM and stops at 10:00 AM without passing through 11:00 AM, it has just executed with time = O(-1).

=]

Now, for the mathematical part:

If you can't come up with a sequence of actions that executes backwards in time (you never know... lol), the proof is quite simple:

positiveTime = O(-1) means:

positiveTime(n) <= c * (-1), for some constant c > 0 and all n > n0 > 0

Consider the "c > 0" restriction. We can't find a positive number that, multiplied by -1, results in another positive number. Taking that into account, this is the result:

positiveTime(n) <= negativeNumber, for all n > n0 > 0

Which just proves that you can't have an algorithm with O(-1).

Not really. O(1) is the best you can hope for.

The closest I can think of is language translation, which uses large datasets of phrases in the target language to match up smaller snippets from the source language. The larger the dataset, the better (and to a certain extent faster) the translation. But that's still not even O(1).

Well, for many calculations like "given input A, return f(A)" you can "cache" the calculation results (store them in an array or map), which makes the calculation faster with a larger number of values, IF some of those values repeat.

But I don't think it qualifies as "negative complexity". In this case the fastest (best-case) performance will probably count as O(1), the worst-case performance will be O(N), and the average performance will be somewhere in between.
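A minimal sketch of that caching idea in Python, using functools.lru_cache and a made-up expensive function (the function and the workload here are hypothetical, just to show where the speed-up comes from):

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def f(a):
        # Stand-in for an expensive pure computation on input a.
        return sum(i * i for i in range(a))

    # The first call for a given value pays the full cost; repeated values are
    # answered from the cache in O(1), so the average cost per query drops as
    # more queries repeat earlier ones.
    queries = [10_000, 20_000, 10_000, 10_000, 20_000]
    results = [f(a) for a in queries]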

This is somewhat applicable to sorting algorithms - some of them have O(N) best-case complexity and O(N^2) worst-case complexity, depending on the state of the data to be sorted.
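Insertion sort is one such algorithm; a minimal sketch (the best and worst cases depend only on the initial order of the data):

    def insertion_sort(a):
        # Best case O(N): already-sorted input, the inner loop never iterates.
        # Worst case O(N^2): reverse-sorted input, each element is shifted all
        # the way back to the front.
        for i in range(1, len(a)):
            key = a[i]
            j = i - 1
            while j >= 0 and a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = key
        return a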

I think that to have negative complexity, an algorithm should return the result before it has been asked to calculate it. I.e. it should be connected to a time machine and should be able to deal with the corresponding "grandfather paradox".

As with the other question about the empty algorithm, this question is a matter of definition rather than a matter of what is possible or impossible. It is certainly possible to think of a cost model for which an algorithm takes O(1/n) time. (That is not negative of course, but rather decreasing with larger input.) The algorithm can do something like sleep(1/n) as one of the other answers suggested. It is true that the cost model breaks down as n is sent to infinity, but n never is sent to infinity; every cost model breaks down eventually anyway. Saying that sleep(1/n) takes O(1/n) time could be very reasonable for an input size ranging from 1 byte to 1 gigabyte. That's a very wide range for any time complexity formula to be applicable.
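As a toy illustration of that cost model (not a real algorithm, and the measured "work" deliberately excludes the unavoidable cost of reading the input):

    import time

    def shrinking_sleep(data):
        # The "work" (the sleep) takes roughly 1/n seconds, so it shrinks as
        # the input grows -- but only under a cost model that ignores the
        # Omega(n) cost of reading the input in the first place.
        n = max(len(data), 1)
        time.sleep(1.0 / n)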

On the other hand, the simplest, most standard definition of time complexity uses unit time steps. It is impossible for a positive, integer-valued function to have decreasing asymptotics; the smallest it can be is O(1).

I don't know if this quite fits, but it reminds me of BitTorrent: the more people downloading a file, the faster it goes for all of them.
