Time measuring overhead in Java


Question


When measuring elapsed time on a low level, I have the choice of using any of these:

System.currentTimeMillis();
System.nanoTime();

Both methods are implemented natively. Before digging into any C code, does anyone know whether there is any substantial overhead in calling one or the other? I mean, if I don't really care about the extra precision, which one would be expected to consume less CPU time?

N.B.: I'm using the standard Java 1.6 JDK, but the question may be valid for any JRE...


Answer 1:


The answer marked correct on this page is actually not correct. That is not a valid way to write a benchmark, because of JVM dead code elimination (DCE), on-stack replacement (OSR), loop unrolling, and so on. Only a harness like Oracle's JMH micro-benchmarking framework can measure something like that properly. Read this post if you have any doubts about the validity of such micro-benchmarks.

Here is a JMH benchmark for System.currentTimeMillis() vs System.nanoTime():

import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@State(Scope.Benchmark)
public class NanoBench {
   @Benchmark
   public long currentTimeMillis() {
      return System.currentTimeMillis();
   }

   @Benchmark
   public long nanoTime() {
      return System.nanoTime();
   }
}

And here are the results (on an Intel Core i5):

Benchmark                            Mode  Samples      Mean   Mean err    Units
c.z.h.b.NanoBench.currentTimeMillis  avgt       16   122.976      1.748    ns/op
c.z.h.b.NanoBench.nanoTime           avgt       16   117.948      3.075    ns/op

This shows that System.nanoTime() is slightly faster, at ~118ns per invocation compared to ~123ns. However, it is also clear that once the mean error is taken into account, there is very little difference between the two. The results are also likely to vary by operating system, but the general takeaway is that the two calls are essentially equivalent in terms of overhead.

UPDATE 2015/08/25: While this answer is closer to correct than most, since it uses JMH to measure, it is still not correct. Measuring something like System.nanoTime() itself is a special kind of twisted benchmarking. The answer, and the definitive article, is here.
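
For anyone who wants to reproduce the numbers above, here is a minimal sketch of a launcher for the benchmark, assuming the JMH artifacts (jmh-core and jmh-generator-annprocess) are on the classpath; the class name NanoBenchRunner is just illustrative:

import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public class NanoBenchRunner {
    public static void main(String[] args) throws RunnerException {
        // Run only the NanoBench benchmarks defined above; a single
        // fork keeps the run short at the cost of some rigor.
        Options opt = new OptionsBuilder()
                .include(NanoBench.class.getSimpleName())
                .forks(1)
                .build();
        new Runner(opt).run();
    }
}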




Answer 2:


I don't believe you need to worry about the overhead of either. It's so minimal it's barely measurable itself. Here's a quick micro-benchmark of both:

public class NaiveTimerBench {
    public static void main(String[] args) {
        for (int j = 0; j < 5; j++) {
            // Time one million currentTimeMillis() calls.
            long time = System.nanoTime();
            for (int i = 0; i < 1000000; i++) {
                long x = System.currentTimeMillis();
            }
            System.out.println((System.nanoTime() - time) + "ns per million");

            // Time one million nanoTime() calls.
            time = System.nanoTime();
            for (int i = 0; i < 1000000; i++) {
                long x = System.nanoTime();
            }
            System.out.println((System.nanoTime() - time) + "ns per million");

            System.out.println();
        }
    }
}

And the last result:

14297079ns per million
29206842ns per million

It does appear that System.currentTimeMillis() is about twice as fast as System.nanoTime() here (roughly 14ns versus 29ns per call, from the figures above). However, 29ns is going to be much shorter than anything else you'd be measuring anyhow. I'd go for System.nanoTime() for precision and accuracy, since it's not tied to the wall clock.
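
Note that, as Answer 1 points out, a loop whose result is never used is a prime candidate for dead code elimination, so these numbers should be taken with a grain of salt. A minimal sketch of a slightly more robust variant (still no substitute for JMH) accumulates the values and prints them so the JIT cannot prove the calls are unused:

public class LessNaiveBench {
    public static void main(String[] args) {
        long sink = 0; // consumed below so the JIT can't drop the calls
        long time = System.nanoTime();
        for (int i = 0; i < 1000000; i++) {
            sink += System.nanoTime();
        }
        System.out.println((System.nanoTime() - time) + "ns per million");
        System.out.println("sink = " + sink); // keeps the loop live
    }
}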




Answer 3:


You should only ever use System.nanoTime() for measuring how long something takes to run. It's not just a matter of nanosecond precision: System.currentTimeMillis() is "wall clock time", while System.nanoTime() is intended for timing things and doesn't have the "real world time" quirks that the other does. From the Javadoc of System.nanoTime():

This method can only be used to measure elapsed time and is not related to any other notion of system or wall-clock time.
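
In other words, nanoTime values are only meaningful as differences between two calls on the same JVM. A minimal sketch of the resulting idiom:

public class Stopwatch {
    public static void main(String[] args) throws InterruptedException {
        long start = System.nanoTime();  // monotonic start mark
        Thread.sleep(50);                // the work being measured
        long elapsed = System.nanoTime() - start;
        // Convert units only at the edge; keep nanoseconds internally.
        System.out.println("took " + elapsed / 1000000 + " ms");
    }
}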




Answer 4:


If you have time, watch this talk by Cliff Click; he speaks about the cost of System.currentTimeMillis as well as other things.




Answer 5:


System.currentTimeMillis() is usually really fast (as far as I know, around 5-6 CPU cycles, though I no longer remember where I read this), but its resolution varies across platforms.

So if you need high precision, go for nanoTime(); if you're worried about overhead, go for currentTimeMillis().
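
A quick way to see the platform-dependent resolution for yourself is to busy-wait until currentTimeMillis changes and record the size of each step; a rough sketch:

public class MillisGranularity {
    public static void main(String[] args) {
        // Observe a few ticks of currentTimeMillis and print how far
        // the clock jumps each time it updates on this platform.
        long last = System.currentTimeMillis();
        int ticks = 0;
        while (ticks < 5) {
            long now = System.currentTimeMillis();
            if (now != last) {
                System.out.println("clock advanced by " + (now - last) + " ms");
                last = now;
                ticks++;
            }
        }
    }
}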




Answer 6:


The accepted answer to this question is indeed incorrect. The alternative answer provided by @brettw is good but nonetheless light on details.

For a full treatment of this subject and the real cost of these calls, please see https://shipilev.net/blog/2014/nanotrusting-nanotime/

To answer the asked question:

does anyone know if there is any substantial overhead calling one or the other?

  • The overhead of calling System#nanoTime is between 15 and 30 nanoseconds per call.
  • The value reported by nanoTime only changes once every ~30 nanoseconds, so that is its effective resolution.

This means that if you're trying to handle millions of requests per second, calling nanoTime for every request eats a sizeable chunk of each second just reading the clock: at ~30 nanoseconds per call and two calls per measurement (start and stop), one million requests per second spend about 60 milliseconds of every second on nanoTime alone. For such use cases, consider measuring requests from the client side instead, which also ensures you don't fall into coordinated omission; measuring queue depth is also a good indicator.

If you're not trying to cram as much work as you can into a single second, then the cost of nanoTime won't really matter, but coordinated omission is still a factor.
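
If you are in the high-throughput case, one mitigation (not from the original answer, sketched here purely as an illustration) is to amortize the clock reads by timing batches of requests instead of each request individually; the class name and batch size below are hypothetical:

public class BatchTimer {
    private static final int BATCH = 1024; // arbitrary illustrative size

    private long batchStart = System.nanoTime();
    private int count = 0;

    // Call once per request; returns the average ns per request when a
    // batch completes, or -1 while a batch is still filling.
    public long onRequest() {
        if (++count < BATCH) {
            return -1;
        }
        long now = System.nanoTime();
        long avgNanos = (now - batchStart) / BATCH;
        batchStart = now;
        count = 0;
        return avgNanos;
    }

    public static void main(String[] args) {
        BatchTimer timer = new BatchTimer();
        for (int i = 0; i < 5000; i++) {
            long avg = timer.onRequest();
            if (avg >= 0) {
                System.out.println("avg " + avg + " ns/request over last batch");
            }
        }
    }
}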

Finally, for completeness, currentTimeMillis cannot be used for elapsed-time measurement no matter what its cost is, because it's not guaranteed to move forward between two calls. In particular, on a server synchronized with NTP, currentTimeMillis is constantly being adjusted and can even step backwards. Not to mention that most things measured on a computer take well under a full millisecond.




Answer 7:


At a theoretical level, for a VM that uses native threads and sits on a modern preemptive operating system, currentTimeMillis can be implemented so that the clock is read only once per timeslice. Presumably, nanoTime implementations would not sacrifice precision this way.
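
The same idea can be approximated in user code: cache the wall-clock value and refresh it from a background thread, so hot paths read a volatile field instead of making a native call per lookup. A sketch under those assumptions (the class name is hypothetical, and readers see a value that is at most ~1 ms stale):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class CachedClock {
    private static volatile long cachedMillis = System.currentTimeMillis();

    static {
        ScheduledExecutorService ticker =
                Executors.newSingleThreadScheduledExecutor(r -> {
                    Thread t = new Thread(r, "cached-clock");
                    t.setDaemon(true); // don't keep the JVM alive
                    return t;
                });
        // Refresh the cached value roughly once per millisecond.
        ticker.scheduleAtFixedRate(
                () -> cachedMillis = System.currentTimeMillis(),
                1, 1, TimeUnit.MILLISECONDS);
    }

    public static long coarseCurrentTimeMillis() {
        return cachedMillis; // cheap volatile read, slightly stale
    }

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 3; i++) {
            System.out.println(coarseCurrentTimeMillis());
            Thread.sleep(10);
        }
    }
}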



Source: https://stackoverflow.com/questions/5640409/time-measuring-overhead-in-java
