Normally, Java optimizes virtual calls based on the number of implementations encountered at a given call site. This can be easily seen in the results of my benchmark.
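Roughly, such a benchmark can look like the sketch below (an illustrative sketch only, not the exact benchmark referred to above; the `CallSiteSketch`/`Base`/`A`/`B`/`C` names are made up, and a real measurement should use a harness like JMH). The point is that it is the same call site, fed either one concrete receiver type or several:

```java
// Illustrative sketch only: the same b.hashCode() call site, monomorphic in one
// run and megamorphic in the other. For real numbers, use JMH and separate JVMs.
import java.util.Random;

public class CallSiteSketch {

    static class Base { /* no hashCode() override here */ }
    static class A extends Base { @Override public int hashCode() { return 1; } }
    static class B extends Base { @Override public int hashCode() { return 2; } }
    static class C extends Base { @Override public int hashCode() { return 3; } }

    // The interesting call site: base.hashCode() inside the loop.
    static long sumHashes(Base[] targets) {
        long sum = 0;
        for (Base base : targets) {
            sum += base.hashCode(); // monomorphic or megamorphic depending on the array's contents
        }
        return sum;
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);

        Base[] mono = new Base[1_000_000];
        for (int i = 0; i < mono.length; i++) mono[i] = new A(); // one target type only

        Base[] mega = new Base[1_000_000];
        for (int i = 0; i < mega.length; i++) {                  // three different target types
            switch (rnd.nextInt(3)) {
                case 0:  mega[i] = new A(); break;
                case 1:  mega[i] = new B(); break;
                default: mega[i] = new C(); break;
            }
        }

        // Crude timing; ideally run each case in a separate JVM so the second run
        // does not inherit the call-site profile of the first.
        long t0 = System.nanoTime();
        long s1 = sumHashes(mono);
        long t1 = System.nanoTime();
        long s2 = sumHashes(mega);
        long t2 = System.nanoTime();

        System.out.printf("mono: %d ns (sum=%d)%n", t1 - t0, s1);
        System.out.printf("mega: %d ns (sum=%d)%n", t2 - t1, s2);
    }
}
```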
hashCode() is already defined in java.lang.Object, so defining it in your own class doesn't do much by itself (it is still a defined method, but that alone makes no difference to how the call is dispatched).
The JIT has several ways to optimize a call site (in this case hashCode()), depending on how many concrete targets it has observed there: with only one or two it can check the receiver type and call, and usually inline, the target directly; with more it has to fall back to a plain virtual dispatch.
In that unoptimized case the virtual calls are not inlined and require an indirection through the table of virtual methods, with a virtually guaranteed cache miss. The lack of inlining also requires full function stubs with parameters passed via the stack. Overall, the real performance killer is the inability to inline and apply further optimizations.
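That lack of inlining is something you can observe directly: HotSpot's diagnostic flags print every inlining decision the JIT makes, including the reason it bails out on a call (for a megamorphic site you typically see it give up with a note like "virtual call"). For the sketch above that would be something like:

```
java -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining CallSiteSketch
```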
Please note: calling hashCode() on any class extending Base is the same as calling Object.hashCode(), and that is how it compiles in the bytecode. If you add an explicit hashCode() in Base, the call would be compiled as an invocation of Base.hashCode(), which limits the potential call targets.
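A hedged illustration of that "limit the call targets" idea, going one step further than the sentence above (the Base/A/B classes and the id field are made up): if Base not only declares hashCode() but marks it final, the call site is left with exactly one possible target, which is exactly the case the JIT can inline without any type guard.

```java
// Hypothetical variation: a final hashCode() in Base cannot be overridden, so
// every base.hashCode() call has a single possible target and can be inlined.
class Base {
    private final int id;

    Base(int id) { this.id = id; }

    @Override
    public final int hashCode() {
        return id; // one shared, non-overridable implementation for all subclasses
    }
}

class A extends Base { A(int id) { super(id); } }
class B extends Base { B(int id) { super(id); } }
```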
Way too many classes (in the JDK itself) have hashCode() overridden, so in non-inlined HashMap-like structures the invocation is performed via the vtable - i.e. it's slow.
As an extra bonus: while loading new classes the JIT has to deoptimize existing call sites whose optimistic assumptions the new class invalidates.
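If someone wants to watch that happen, here is a sketch (the DeoptSketch/Early/Late names are made up): warm a call site up while only one implementation has ever been seen, then let a second subclass through. Running it with -XX:+PrintCompilation should show the optimistically compiled method being discarded ("made not entrant") and recompiled.

```java
// Sketch: warm up a call site with a single implementation, then introduce a
// second one. Run with -XX:+PrintCompilation and look for "made not entrant".
public class DeoptSketch {

    static class Base  { @Override public int hashCode() { return 42; } }
    static class Early extends Base { @Override public int hashCode() { return 1; } }
    // Late is only instantiated after warm-up, so the JIT first compiles sum()
    // assuming Early is the only receiver it will ever see at this call site.
    static class Late  extends Base { @Override public int hashCode() { return 2; } }

    static long sum(Base[] targets) {
        long s = 0;
        for (Base b : targets) {
            s += b.hashCode(); // the call site that gets optimized, then deoptimized
        }
        return s;
    }

    public static void main(String[] args) {
        Base[] warm = new Base[100_000];
        for (int i = 0; i < warm.length; i++) warm[i] = new Early();

        // Warm-up: enough iterations for the JIT to compile sum() with a
        // monomorphic assumption about b.hashCode().
        long sink = 0;
        for (int i = 0; i < 200; i++) sink += sum(warm);

        // Now the second implementation shows up; the optimistic compiled
        // version of sum() has to be deoptimized and recompiled.
        Base[] mixed = new Base[100_000];
        for (int i = 0; i < mixed.length; i++) {
            mixed[i] = (i % 2 == 0) ? new Early() : new Late();
        }
        sink += sum(mixed);

        System.out.println(sink); // keep the result alive so nothing is dead-code eliminated
    }
}
```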
I may try to look up some sources if anyone is interested in further reading.