I was following a previous post on this that says:
For LinkedList
- get is O(n)
- add is O(1)
- remove is O(n)
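For intuition, here is a minimal, hypothetical singly linked list sketch (not java.util.LinkedList's actual implementation) that makes those costs visible: add only touches the tail pointer, while get has to walk links from the head.

```java
// Hypothetical sketch, not the JDK's LinkedList.
class SimpleLinkedList<T> {
    private static class Node<T> {
        T value;
        Node<T> next;
        Node(T value) { this.value = value; }
    }

    private Node<T> head;
    private Node<T> tail;
    private int size;

    // add: O(1) -- only the tail pointer is updated, regardless of list size.
    public void add(T value) {
        Node<T> node = new Node<>(value);
        if (tail == null) {
            head = tail = node;
        } else {
            tail.next = node;
            tail = node;
        }
        size++;
    }

    // get: O(n) -- must follow up to 'index' links starting from the head.
    public T get(int index) {
        if (index < 0 || index >= size) throw new IndexOutOfBoundsException();
        Node<T> current = head;
        for (int i = 0; i < index; i++) {
            current = current.next;
        }
        return current.value;
    }
}
```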
To understand why the results you got do not contradict the "big O" characterization, we need to go back to first principles; i.e. to the definition.
Let f(x) and g(x) be two functions defined on some subset of the real numbers. One writes
f(x) = O(g(x)) as x -> infinity

if and only if, for sufficiently large values of x, f(x) is at most a constant multiplied by g(x) in absolute value. That is, f(x) = O(g(x)) if and only if there exists a positive real number M and a real number x_0 such that

|f(x)| <= M |g(x)| for all x > x_0.

In many contexts, the assumption that we are interested in the growth rate as the variable x goes to infinity is left unstated, and one writes more simply that f(x) = O(g(x)).
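As a concrete (made-up) example: if an operation takes f(N) = 5N + 3 steps, then f(N) = O(N), because 5N + 3 <= 6N for all N > 3 (take M = 6 and x_0 = 3). Notice that the factor of 5 simply disappears into the constant M; the definition says nothing about how big that constant is.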
So, the statement "add1 is O(1)" means that the time cost of an add1 operation on a list of size N tends towards a constant Cadd1 as N tends to infinity.
And the statement "add2 is O(1) amortized over N operations" means that the average time cost of one of a sequence of N add2 operations tends towards a constant Cadd2 as N tends to infinity.
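To see where "amortized" comes from, suppose (as a simplification) that the backing array doubles in capacity whenever it fills up. Then a sequence of N appends copies at most 1 + 2 + 4 + ... + N < 2N elements in total across all the resizes, so the average cost per append is bounded by a constant, even though an individual append that happens to trigger a resize costs O(N) on its own.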
What it does not say is what those constants Cadd1 and Cadd2 actually are. In fact, the reason that LinkedList is slower than ArrayList in your benchmark is that Cadd1 is larger than Cadd2.
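Here is a rough sketch of the kind of measurement that exposes those constants. It is illustrative only; a serious benchmark would use a harness such as JMH to deal with JIT warm-up, dead-code elimination and GC pauses. Both loops do Theta(N) total work, so any difference you see comes purely from the constant factors (per-node allocation versus occasional array copies).

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class AddBenchmark {
    // Appends n elements and returns the elapsed time in nanoseconds.
    static long timeAdds(List<Integer> list, int n) {
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) {
            list.add(i);  // O(1) per call for LinkedList, O(1) amortized for ArrayList
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        System.out.println("LinkedList: " + timeAdds(new LinkedList<>(), n) / 1e6 + " ms");
        System.out.println("ArrayList:  " + timeAdds(new ArrayList<>(), n) / 1e6 + " ms");
    }
}
```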
The lesson is that big O notation does not predict absolute or even relative performance. All it predicts is the shape of the performance function as the controlling variable gets very large. This is useful to know, but it doesn't tell you everything you need to know.