Out of curiosity: how does the CLR dispatch virtual method calls to interface members to the correct implementation?
I know about the VTable that the CLR maintains for each type, and I've read an article on CLR internals that describes how it's used for dispatch.
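For example (the types here are just placeholders), how does the runtime get from an interface-typed call site like the one below to the concrete implementation?

```csharp
using System;

interface ILogger
{
    void Log(string message);
}

class ConsoleLogger : ILogger
{
    public void Log(string message) => Console.WriteLine(message);
}

class Program
{
    static void Main()
    {
        ILogger logger = new ConsoleLogger();

        // The static type of 'logger' is just the interface, so how does the
        // CLR get from this call site to ConsoleLogger.Log at runtime?
        logger.Log("hello");
    }
}
```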
That article is more than 10 years old, and a lot has changed since then.
IVMaps have now been superseded by Virtual Stub Dispatch.
Virtual stub dispatching (VSD) is the technique of using stubs for virtual method invocations instead of the traditional virtual method table. In the past, interface dispatch required that interfaces had process-unique identifiers, and that every loaded interface was added to a global interface virtual table map.
Go read that article; it has more detail than you'll ever need. It comes from the Book of the Runtime, documentation originally written by the CLR devs for CLR devs that has since been published for everyone. It essentially describes the guts of the runtime.
There's no point in me duplicating the whole article here, so I'll just state the main points and what they imply.
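First, a rough mental model of what a dispatch stub does, sketched here in C# (the real stubs are tiny fragments of generated machine code, and the helper names below are mine, not runtime APIs):

```csharp
using System;

static class VsdSketch
{
    // Conceptually, a *dispatch stub* guards a call site that has so far only
    // seen one receiver type. 'expectedMethodTable' and 'knownTarget' are baked
    // into the stub when the runtime generates it.
    static IntPtr DispatchStub(object receiver, IntPtr expectedMethodTable, IntPtr knownTarget,
                               Func<object, IntPtr> resolveStub)
    {
        // Fast path: a single pointer comparison against the receiver's MethodTable.
        if (GetMethodTable(receiver) == expectedMethodTable)
            return knownTarget;          // monomorphic hit: call the cached implementation

        // Miss: fall back to the resolve stub, which does a cache lookup and,
        // if misses keep happening, back-patches the call site to bypass this stub.
        return resolveStub(receiver);
    }

    // The receiver's MethodTable pointer; the closest managed equivalent is the
    // runtime type handle.
    static IntPtr GetMethodTable(object o) => o.GetType().TypeHandle.Value;
}
```

The real mechanism also includes a lookup stub for the very first call (before any type has been observed) and a hash-table cache inside the resolve stub; the point of the sketch is simply that the common case is a single compare-and-branch.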
And here's an important consideration, straight from the article:
When a dispatch stub fails frequently enough, the call site is deemed to be polymorphic and the resolve stub will back patch the call site to point directly to the resolve stub to avoid the overhead of a consistently failing dispatch stub. At sync points (currently the end of a GC), polymorphic sites will be randomly promoted back to monomorphic call sites under the assumption that the polymorphic attribute of a call site is usually temporary. If this assumption is incorrect for any particular call site, it will quickly trigger a backpatch to demote it to polymorphic again.
The runtime is very optimistic about monomorphic call sites, which makes a lot of sense for real-world code, and it tries hard to avoid falling back to resolve stubs.
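To make that concrete, here's a small sketch (type names are mine) where the very same interface call site is monomorphic or polymorphic depending only on what the caller feeds it:

```csharp
using System;

interface IShape { double Area(); }
sealed class Circle : IShape { public double Area() => Math.PI; }
sealed class Square : IShape { public double Area() => 1.0; }

static class Demo
{
    // The s.Area() call below is the call site that VSD instruments with stubs.
    static double Sum(IShape[] shapes)
    {
        double total = 0;
        foreach (IShape s in shapes)
            total += s.Area();   // interface call: lookup stub -> dispatch stub -> (maybe) resolve stub
        return total;
    }

    static void Main()
    {
        var circles = new IShape[1000];
        for (int i = 0; i < circles.Length; i++)
            circles[i] = new Circle();

        var mixed = new IShape[1000];
        for (int i = 0; i < mixed.Length; i++)
            mixed[i] = (i % 2 == 0) ? (IShape)new Circle() : new Square();

        // Only circles: the dispatch stub's MethodTable check always succeeds (monomorphic).
        Console.WriteLine(Sum(circles));

        // Alternating types: the check fails half the time, so the runtime will
        // eventually back-patch this call site to go through the resolve stub.
        Console.WriteLine(Sum(mixed));
    }
}
```

The source code is identical in both cases; whether the call site keeps its cheap dispatch stub or gets demoted to a resolve stub is decided entirely by the mix of concrete types flowing through it at run time.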