Two reasons:
- There is no reliable way to programmatically identify the places where the change would be a strict performance win.
- Applied in the wrong places, the "optimization" actually slows things down.
You can suggest that people use the right calls for their workload, but at some point it's the developer's responsibility to get it right.
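For concreteness, here is a minimal Java sketch of the trade-off (the class and method names are hypothetical; the pattern is the standard string-versus-StringBuilder one):

```java
class ConcatDemo {
    // Quadratic: each += copies the whole accumulated string, so n
    // concatenations do O(n^2) character copies. Converting this loop
    // to a StringBuilder is a clear win.
    static String slow(String[] parts) {
        String s = "";
        for (String p : parts) {
            s += p;            // builds and discards a new String each pass
        }
        return s;
    }

    // Linear: one growable buffer, one final copy.
    static String fast(String[] parts) {
        StringBuilder sb = new StringBuilder();
        for (String p : parts) {
            sb.append(p);
        }
        return sb.toString();
    }

    // For a handful of operands there is no loop: the compiler already
    // emits a single concatenation, and rewriting it with an explicit
    // StringBuilder just adds an allocation and method calls for nothing.
    static String alreadyFine(String a, String b, String c) {
        return a + b + c;
    }
}
```

The first method is the pattern worth rewriting; the third is the pattern a blanket automatic rewrite would pessimize.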
Edit: Regarding the cutoff, there are two more problems:
- The only way to know for sure that the cutoff will be reached is complicated flow analysis, and the number of places where such analysis could actually prove a section safe to convert is extremely small.
- Flow analysis is expensive. If you do it at runtime, the whole program runs slower for the rare chance that one piece of poorly written code gets faster. If you do it at compile time, the pattern isn't an error according to the language rules, but you can issue a warning - and that's exactly what FXCop does (a slow, but available, flow-analysis tool). Just imagine if FXCop always had to run with the compiler: people would spend hours waiting for builds. And if it ran at runtime, well, welcome to JVM startup times...
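To see why the cutoff can't be checked statically, consider a hedged sketch (the names are made up for illustration): the iteration count below depends on runtime data, so no compile-time analysis can decide whether the cutoff will ever be reached.

```java
import java.util.List;

class CutoffDemo {
    // The number of concatenations depends on data that exists only at
    // runtime: sometimes 2 appends, sometimes 2 million. A compiler would
    // have to prove n stays above the cutoff on every path before
    // converting; that is exactly the complicated flow analysis above.
    static String join(List<String> lines, boolean verbose) {
        String s = "";
        int n = verbose ? lines.size() : Math.min(lines.size(), 2);
        for (int i = 0; i < n; i++) {  // n is unknowable at compile time
            s += lines.get(i);
        }
        return s;
    }
}
```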