HotSpot JIT optimizations


Question


In a lecture about the JIT in HotSpot I want to give as many examples as possible of the specific optimizations that the JIT performs.

I only know about "method inlining", but there must be much more. Please give one example per answer so each can be voted on separately.


Answer 1:


Well, you should scan Brian Goetz's articles for examples.

In brief, HotSpot can and will (a small illustration follows the list):

  1. Inline methods
  2. Join adjacent synchronized blocks on the same object (lock coarsening)
  3. Eliminate locks when the monitor is not reachable from other threads (lock elision via escape analysis)
  4. Eliminate dead code (which is why most naive micro-benchmarks are meaningless)
  5. Drop memory writes to non-volatile variables
  6. Replace interface calls with direct method calls for methods with only one implementation
et cetera
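
A hedged, minimal sketch of what items 2–4 look like in source code (the class and method names are made up for the example; whether HotSpot actually applies each optimization depends on the JVM version and flags):

    public class JitExamples {
        // 2. Lock coarsening: StringBuffer methods are synchronized, so these
        //    are adjacent synchronized regions on the same object; HotSpot may
        //    merge them into a single lock/unlock pair.
        static int coarsened(StringBuffer sb) {
            sb.append("a");
            sb.append("b");
            sb.append("c");
            return sb.length();
        }

        // 3. Lock elision via escape analysis: the StringBuffer never escapes
        //    this method, so its monitor can never be contended and the locking
        //    may be removed entirely.
        static String elided() {
            StringBuffer local = new StringBuffer();
            local.append("hello");
            return local.toString();
        }

        // 4. Dead-code elimination: the loop's result is never used, so the
        //    whole computation may be optimized away -- the reason naive
        //    micro-benchmarks often measure nothing.
        static void deadCode() {
            int sum = 0;
            for (int i = 0; i < 1_000_000; i++) {
                sum += i;
            }
        }
    }

Point 4 is the classic micro-benchmark trap: if the result of the measured loop is never used, the compiler is free to delete the loop and you end up timing nothing.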




Answer 2:


There is a great presentation on the optimizations used by modern JVMs on the Jikes RVM site: ACACES’06 - Dynamic Compilation and Adaptive Optimization in Virtual Machines

It discusses architecture, trade-offs, measurements, and techniques, and it names at least 20 things JVMs do to optimize the machine code.




Answer 3:


I think the interesting things are the ones a conventional compiler can't do, in contrast to the JIT. Method inlining, dead-code elimination, CSE, liveness analysis, and so on are all done by your average C++ compiler as well; nothing "special" there.

But optimizing based on optimistic assumptions and then deoptimizing later if they turn out to be wrong (assuming a specific type, removing branches that would otherwise fail anyway, and so on)? Removing virtual calls when we can guarantee that only one implementing class exists at the moment (again, something that only works reliably because deoptimization is available)? Adaptive optimization like this is, I think, the one thing that really distinguishes the JIT from your run-of-the-mill C++ compiler.
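
A minimal sketch of that optimistic devirtualization plus deoptimization, assuming a standard HotSpot JVM (the class names are invented for the example; the diagnostic flags in the comment are regular HotSpot options):

    // Run with something like:
    //   java -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining Devirt
    // to watch HotSpot inline the interface call while only one implementation
    // is loaded, then deoptimize once a second implementation shows up.
    public class Devirt {
        interface Shape { double area(); }

        static final class Circle implements Shape {
            public double area() { return Math.PI; }
        }

        // Declared here but not loaded until first use, so during warm-up the
        // JIT can assume Circle is the only implementor (class hierarchy analysis).
        static final class Square implements Shape {
            public double area() { return 1.0; }
        }

        static double sum(Shape[] shapes) {
            double total = 0;
            for (Shape s : shapes) {
                total += s.area();   // virtual call; compiled as a direct,
                                     // inlined call while only Circle is loaded
            }
            return total;
        }

        public static void main(String[] args) {
            Shape[] shapes = new Shape[1_000];
            for (int i = 0; i < shapes.length; i++) shapes[i] = new Circle();

            // Warm up: sum() gets compiled under the "single implementor" assumption.
            for (int i = 0; i < 20_000; i++) sum(shapes);

            // Loading and using Square invalidates that assumption: the compiled
            // code is deoptimized and later recompiled with a type guard or a
            // genuine virtual dispatch.
            shapes[0] = new Square();
            System.out.println(sum(shapes));
        }
    }

The key point is exactly what the answer says: the optimistic version is only safe because the JVM can fall back to the interpreter (deoptimize) when the assumption breaks.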

Maybe also mention the runtime profiling the JIT does to decide which optimizations to apply (not that unique anymore, though, given profile-guided optimization in ahead-of-time compilers).




Answer 4:


There's an old but likely still valid overview in this article.

The highlights seem to be performing classical optimizations based on available runtime profiling information:

  • JITting "hot spots" into native code (a way to observe this is sketched below)
  • Adaptive inlining – inlining the most commonly called implementation at a given call site, to avoid huge code size

Plus some minor ones like generational GC, which makes allocating short-lived objects cheaper, various other smaller optimizations, and whatever else has been added since that article was published.
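
A minimal way to watch the "hot spot" compilation happen, assuming a standard HotSpot JVM, is the stock -XX:+PrintCompilation flag (the class below is made up for this sketch):

    // Run with:  java -XX:+PrintCompilation HotLoop
    // Once the interpreter's invocation and back-edge counters trip, log lines
    // appear showing hotLoop() being compiled by the tiered compilers (C1, then C2).
    public class HotLoop {
        static long hotLoop(int n) {
            long sum = 0;
            for (int i = 0; i < n; i++) {
                sum += i * 31L;
            }
            return sum;
        }

        public static void main(String[] args) {
            long total = 0;
            for (int i = 0; i < 50_000; i++) {
                total += hotLoop(1_000);
            }
            System.out.println(total);   // keep the result alive so the work isn't dead code
        }
    }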

There's also a more detailed official whitepaper, and a fairly nitty-gritty HotSpot Internals wiki page that describes how to write fast Java code, which should let you extrapolate which use cases were optimized.




Answer 5:


Compiling heavily used bytecode into equivalent native machine code and jumping to that, instead of having the JVM interpret the op-codes. Not having to simulate a machine (the JVM) instruction by instruction for the hot parts of a Java application gives a good speed increase.

Of course, that's most of what HotSpot is.
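
A crude way to feel that difference, assuming a stock HotSpot JVM, is to run the same program with the JIT disabled via the standard -Xint (interpreter-only) flag and compare wall-clock times (the class name and the timing approach are for illustration only, not a rigorous benchmark):

    // Compare:
    //   java -Xint InterpVsJit    (interpreter only, no JIT)
    //   java InterpVsJit          (default: interpreter + JIT)
    // The default run is typically many times faster once the hot method has
    // been compiled to native code.
    public class InterpVsJit {
        static long work(int n) {
            long acc = 0;
            for (int i = 0; i < n; i++) {
                acc += (acc ^ i) * 31L + 7;
            }
            return acc;
        }

        public static void main(String[] args) {
            long start = System.nanoTime();
            long result = 0;
            for (int i = 0; i < 1_000; i++) {
                result += work(100_000);
            }
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("result=" + result + " elapsedMs=" + elapsedMs);
        }
    }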



Source: https://stackoverflow.com/questions/7854808/hotspot-jit-optimizations
