Please note this question relates to performance only. Let's skip design guidelines, philosophy, compatibility, portability, and anything that is not related to pure performance.
The only explanation I can think of is that the CLR does additional optimisation (correct me if I am wrong here).
Yes, it is called inlining. It is done by the JIT compiler at the machine-code level. Because the getter/setter bodies are trivial (i.e. very simple code), the method calls are eliminated and the getter/setter body is emitted directly into the surrounding code.
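As a sketch (the class and member names here are illustrative, not from the question), a trivial auto-style property is a prime inlining candidate. In an optimized Release build the JIT typically replaces the property call with a direct field access:

```csharp
// Illustrative example: a trivial getter/setter the JIT can inline.
public class Point
{
    private int _x;

    public int X
    {
        get { return _x; }   // trivial body: a single field load
        set { _x = value; }  // trivial body: a single field store
    }
}

public static class Demo
{
    public static int Sum(Point p)
    {
        // After JIT inlining in an optimized build, this reads the
        // backing field directly, roughly as if it were written:
        //     return p._x + p._x;
        return p.X + p.X;
    }
}
```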
This does not happen in debug mode, in order to support debugging (i.e. the ability to set a breakpoint in a getter or setter).
In Visual Studio there is no way to observe this under the debugger. Compile in Release mode and run without an attached debugger, and you will get the full optimization.
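To see the effect yourself, a rough timing sketch like the following can be used (names and loop counts are illustrative; run a Release build started without the debugger, e.g. with Ctrl+F5, since an attached debugger suppresses JIT optimizations such as inlining):

```csharp
// Sketch: compare trivial-property access against direct field access.
// In an optimized Release build without a debugger attached, the
// property loop is typically inlined and performs like the field loop.
using System;
using System.Diagnostics;

class Holder
{
    public int Field;
    public int Prop { get { return Field; } set { Field = value; } }
}

static class Program
{
    static void Main()
    {
        var h = new Holder { Field = 1 };
        const int N = 100_000_000;

        var sw = Stopwatch.StartNew();
        long sum1 = 0;
        for (int i = 0; i < N; i++) sum1 += h.Prop;   // inlined in Release
        sw.Stop();
        Console.WriteLine($"Property: {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        long sum2 = 0;
        for (int i = 0; i < N; i++) sum2 += h.Field;  // direct field access
        sw.Stop();
        Console.WriteLine($"Field:    {sw.ElapsedMilliseconds} ms");
    }
}
```

Note that micro-benchmarks like this are only a rough indicator; with inlining active, the two loops usually show similar timings.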
I do not believe that in a real application, where those properties are used in much more sophisticated ways, they will be optimised in the same way.
The world is full of illusions that are wrong. They will still be optimized, because the getters/setters themselves remain trivial (i.e. simple code), so they are inlined regardless of how sophisticated the surrounding code is.