Consider this example:
public static void main(final String[] args) {
    final List<String> myList = Arrays.asList("A", "B", "C", "D");
Note that the javac compiler has almost nothing to do with optimization. The "important" compiler is the JIT compiler, which lives within the JVM.
In your example, in the most generic case, the call to myList.size()
is a simple method dispatch, which returns the contents of a field in the List
instance. This is negligible work compared to what is implied by System.out.println("Hello")
(at least one system call, hence hundreds of clock cycles, compared to no more than a dozen for the method dispatch). I very much doubt that your code could exhibit a meaningful difference in speed.
More generally, the JIT compiler should recognize this call to size()
as a call to a known instance, so that it may perform the method dispatch with a direct function call (which is faster), or even inline the size()
method call, reducing the call to a simple instance field access.
If you want to test something like this, you really must optimize your microbenchmark to measure what you care about.
First, make the loop inexpensive but impossible to skip. Computing a sum usually does the trick.
Second, compare the two timings.
Here's some code that does both:
import java.util.*;

public class Test {

    public static long run1() {
        final List<String> myList = Arrays.asList("A", "B", "C", "D");
        final long start = System.nanoTime();
        int sum = 0;
        // size() is called in the loop condition on every iteration.
        for (int i = 1000000000; i > myList.size(); i--) sum += i;
        final long stop = System.nanoTime();
        // (stop - start) is in ns; multiplying by 1e-9 is effectively dividing
        // by the ~1e9 iterations, hence the "ns/op" label.
        System.out.println("Finish: " + (stop - start) * 1e-9 + " ns/op; sum = " + sum);
        return stop - start;
    }

    public static long run2() {
        final List<String> myList = Arrays.asList("A", "B", "C", "D");
        final long start = System.nanoTime();
        int sum = 0;
        // size() is evaluated once and cached in a local variable.
        int limit = myList.size();
        for (int i = 1000000000; i > limit; i--) sum += i;
        final long stop = System.nanoTime();
        System.out.println("Finish: " + (stop - start) * 1e-9 + " ns/op; sum = " + sum);
        return stop - start;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) {
            long t1 = run1();
            long t2 = run2();
            System.out.println(" Speedup = " + (t1 - t2) * 1e-9 + " ns/op\n");
        }
    }
}
And if we run it, on my system we get:
Finish: 0.481741256 ns/op; sum = -243309322
Finish: 0.40228402 ns/op; sum = -243309322
Speedup = 0.079457236 ns/op
Finish: 0.450627151 ns/op; sum = -243309322
Finish: 0.43534661700000005 ns/op; sum = -243309322
Speedup = 0.015280534 ns/op
Finish: 0.47738474700000005 ns/op; sum = -243309322
Finish: 0.403698331 ns/op; sum = -243309322
Speedup = 0.073686416 ns/op
Finish: 0.47729349600000004 ns/op; sum = -243309322
Finish: 0.405540508 ns/op; sum = -243309322
Speedup = 0.071752988 ns/op
Finish: 0.478979617 ns/op; sum = -243309322
Finish: 0.36067492700000003 ns/op; sum = -243309322
Speedup = 0.11830469 ns/op
which means that the overhead of the method call is approximately 0.1 ns per iteration. If your loop body does things that take no more than 1-2 ns, then you should care about this. Otherwise, don't.
It can't optimize it, because myList.size() could change during loop execution. Even if myList is final, that just means the reference is final (you cannot reassign myList to some other object); methods that modify the list, such as remove() and add(), can still be called on it. final does not make the object immutable.
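To make that point concrete, here is a small sketch of my own (the class name is made up, and it copies the elements into an ArrayList, since the fixed-size list returned by Arrays.asList would throw UnsupportedOperationException on remove()): the reference myList is final, yet size() still changes while the loop runs.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class FinalIsNotImmutable {
    public static void main(String[] args) {
        // 'final' only forbids reassigning the reference; the list itself stays mutable.
        final List<String> myList = new ArrayList<>(Arrays.asList("A", "B", "C", "D"));

        for (int i = 0; i < myList.size(); i++) {
            System.out.println("iteration " + i + ", size = " + myList.size());
            myList.remove(myList.size() - 1); // the list shrinks mid-loop
        }

        // myList = new ArrayList<>(); // would not compile: the reference is final
    }
}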
When it comes to "compiler optimization", the best you can do is a for-each loop:
for (final String x : myList) { ... }
which lets the compiler provide the fastest implementation.
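For completeness, a minimal runnable sketch of such a loop (the class name and the printing body are just for illustration):

import java.util.Arrays;
import java.util.List;

public class ForEachExample {
    public static void main(String[] args) {
        final List<String> myList = Arrays.asList("A", "B", "C", "D");

        // The enhanced for loop iterates via the list's Iterator,
        // so no size() call appears in the loop condition at all.
        for (final String x : myList) {
            System.out.println(x);
        }
    }
}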
Edit:
The difference between your two code examples is in the second clause of the for loop. In the first example, the VM performs a method call on every iteration (more expensive), so the loop is slower. In the second example, it only reads a local variable (less expensive; local variables live in the current stack frame), so the loop is faster. Either way, the difference only matters when there are a lot of iterations; for a single iteration, the first form is actually the cheaper one in terms of memory, since it needs no extra local variable.
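Since the two snippets from the question are not quoted above, here is a hedged reconstruction of the kind of pair being discussed (the loop bound, the body, and the variable name size are assumptions, not the asker's exact code):

import java.util.Arrays;
import java.util.List;

public class TwoVariants {
    public static void main(String[] args) {
        final List<String> myList = Arrays.asList("A", "B", "C", "D");

        // Variant 1: the condition performs an interface method call on every iteration.
        for (int i = 0; i < myList.size(); i++) {
            System.out.println("Hello");
        }

        // Variant 2: size() is called once; the condition only reads a local variable.
        final int size = myList.size();
        for (int i = 0; i < size; i++) {
            System.out.println("Hello");
        }
    }
}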
Also: "Premature optimization is the root of all evil." Donald Knuth's infamous law.
The difference is one fewer method call per iteration, so the second version should run slightly faster. However, a Just-In-Time compiler may optimize that away by figuring out that the value doesn't change during the loop. The standard Java implementation ships with a JIT compiler, but not every Java implementation does.
The Java compiler would have optimized this itself, but the unusual condition keeps it from doing so. If you wrote it like this, there would be no issue:
for (int i = myList.size(); i < 1000000; i++) { // size() is evaluated only once, in the initializer
    System.out.println("Hello");
}