Java benchmarking - why is the second loop faster?

Posted by 感情迁移 on 2019-11-27 03:42:48

This is a classic Java benchmarking issue. HotSpot's JIT compiler compiles your code as you use it, so it gets faster during the run.

Run around the loop at least 3,000 times first (10,000 on a server or 64-bit JVM) - then do your measurements.
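A minimal sketch of that warm-up pattern (the 10,000 figure corresponds to HotSpot's default server-VM compile threshold; the method name doWork and the string are just placeholders for whatever you actually want to measure):

```java
public class WarmupDemo {
    static final int WARMUP = 10_000;
    static final int RUNS = 1_000_000;

    // stand-in for the code under test
    static byte[] doWork(String s) {
        return s.getBytes();
    }

    public static void main(String[] args) {
        String c = "sgfrt34tdfg34";

        // warm-up: give the JIT a chance to compile doWork before timing starts
        for (int i = 0; i < WARMUP; i++) {
            doWork(c);
        }

        // only now take measurements
        long start = System.nanoTime();
        for (int i = 0; i < RUNS; i++) {
            doWork(c);
        }
        long elapsed = System.nanoTime() - start;
        System.out.printf("average %d ns per call%n", elapsed / RUNS);
    }
}
```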

You know there's something wrong, because Bytes.toBytes calls String.getBytes internally:

public static byte[] toBytes(String s) {
    try {
        return s.getBytes(HConstants.UTF8_ENCODING);
    } catch (UnsupportedEncodingException e) {
        LOG.error("UTF-8 not supported?", e);
        return null;
    }
}

The source is taken from here. This tells you that the call cannot possibly be faster than the direct call - at the very best (i.e. if it gets inlined) it would have the same timing. Generally, though, you'd expect it to be a little slower, because of the small overhead in calling a function.

This is the classic problem with micro-benchmarking in JIT-compiled, garbage-collected environments, where components such as the garbage collector run at arbitrary times. On top of that, hardware optimizations such as caching skew the picture further. As a result, the best way to see what is going on is often to look at the source.

As for why the "second" loop is faster:

When you execute a method at least 10,000 times, the whole method gets compiled. This means that your second loop can be

  • faster, because it is already compiled the first time you run it, or
  • slower, because when it was optimised the JIT didn't yet have good information/counters on how the code is actually executed.

The best solution is to place each loop in a separate method so one loop doesn't optimise the other AND run this a few times, ignoring the first run.

e.g.

for(int i = 0; i < 3; i++) {
    long time1 = doTest1();  // timed using System.nanoTime();
    long time2 = doTest2();
    System.out.printf("Test1 took %,d on average, Test2 took %,d on average%n",
        time1/RUNS, time2/RUNS);
}
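A sketch of what doTest1/doTest2 might look like, with each loop in its own method so HotSpot compiles and optimises them independently (the method bodies here reuse the two getBytes variants from the question; RUNS, the method names, and the test string are illustrative assumptions, not part of the original answer):

```java
import java.nio.charset.StandardCharsets;

public class LoopBenchmark {
    static final int RUNS = 1_000_000;
    static final String C = "sgfrt34tdfg34";

    // each loop lives in its own method so one doesn't affect
    // how the JIT optimises the other
    static long doTest1() {
        long start = System.nanoTime();
        for (int i = 0; i < RUNS; i++) {
            C.getBytes(StandardCharsets.UTF_8);
        }
        return System.nanoTime() - start;
    }

    static long doTest2() {
        long start = System.nanoTime();
        for (int i = 0; i < RUNS; i++) {
            C.getBytes();
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        // run several times and ignore the first iteration's numbers,
        // since the methods are still being compiled then
        for (int i = 0; i < 3; i++) {
            long time1 = doTest1();
            long time2 = doTest2();
            System.out.printf("Test1 took %,d on average, Test2 took %,d on average%n",
                time1 / RUNS, time2 / RUNS);
        }
    }
}
```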

Most likely, the code was still compiling or not yet compiled at the time the first loop ran.

Wrap the entire method in an outer loop so you can run the benchmarks a few times, and you should see more stable results.

Read: Dynamic compilation and performance measurement.

It might simply be that you allocate so much space for objects with your calls to getBytes() that the JVM's garbage collector kicks in and cleans up the unused references (bringing out the trash).

A few more observations

  • As pointed out by @dasblinkenlight above, Hadoop's Bytes.toBytes(c) internally calls String.getBytes("UTF-8").

  • The variant of String.getBytes() that takes a character set as input is faster than the one that does not. So for a given string, getBytes("UTF-8") would be faster than getBytes(). I have tested this on my machine (Windows 8, JDK 7) by running two loops in sequence for an equal number of iterations, one with getBytes("UTF-8") and the other with getBytes():

        // the enclosing method must declare "throws UnsupportedEncodingException",
        // since getBytes(String) throws that checked exception
        long ts;
        String c = "sgfrt34tdfg34";

        ts = System.currentTimeMillis();
        for (int k = 0; k < 10000000; k++) {
            c.getBytes("UTF-8");
        }
        System.out.println("t1->" + (System.currentTimeMillis() - ts));

        ts = System.currentTimeMillis();
        for (int i = 0; i < 10000000; i++) {
            c.getBytes();
        }
        System.out.println("t2->" + (System.currentTimeMillis() - ts));

this gives:

t1->1970
t2->2541

and the results are the same even if you change the order in which the loops execute. To discount any JIT optimizations, I would suggest running the tests in separate methods to confirm this (as suggested by @Peter Lawrey above).

  • So, Bytes.toBytes(c) should always be faster than String.getBytes().