One of the questions I asked some time ago involved code with undefined behavior, so compiler optimization really was causing the program to break.
But if there is no undefined behavior, the optimizer is bound by the as-if rule: it must preserve the program's observable behavior, so a conforming program should not break this way.
In case 2, imagine some OS code that deliberately changes pointer types. The optimizer is allowed to assume that an object is never referenced through a pointer of the wrong type, so it can keep memory values cached in registers across such accesses and produce the "wrong"¹ answer.
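A minimal sketch of that aliasing trap (the names are illustrative; the cast is exactly the kind of deliberate type change described above):

```c
#include <stdio.h>

long value;

/* Under strict aliasing, the compiler may assume a float* can never
 * refer to a long, so it is free to keep `value` in a register
 * across the store through `f`. */
long observe(float *f)
{
    value = 1;
    *f = 0.0f;      /* assumed not to modify `value` */
    return value;   /* an optimizer may return 1 unconditionally */
}

int main(void)
{
    /* Deliberately changing the pointer type: the non-conforming part. */
    printf("%ld\n", observe((float *)&value));
    return 0;
}
```

Compiled without optimization this will usually print 0 on typical platforms, because the store really does clobber `value`; with aggressive optimization it may print 1, because the compiler assumed the two pointers could not alias.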
Case 3 is an interesting concern. Sometimes optimizers make code smaller, but sometimes they make it bigger. Most programs are not the least bit CPU-bound, and even for the ones that are, typically only 10% or less of the code is actually computationally intensive. So if the optimizer has any downside at all, it is a net win for less than 10% of a program.
If the generated code is larger, it will be less cache-friendly. That trade-off might be worth it for a matrix-algebra library with O(n³) algorithms in tiny, tight loops. But for code with more typical time complexity, overflowing the cache can actually make the program slower. Optimizers can typically be tuned for all of this, but if the program is, say, a web application, it would certainly be more developer-friendly if the compiler just did the all-purpose things and let the developer leave the fancy-tricks Pandora's box unopened.
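As a sketch of that trade-off (the flags are GCC/Clang's usual ones; the function is a hypothetical hot spot):

```c
/* Compile the same code two ways and compare the binary size:
 *   cc -O3 dot.c   # may unroll and vectorize: faster, larger code
 *   cc -Os dot.c   # optimize for size: smaller, more cache-friendly
 */
#include <stdio.h>

double dot(const double *a, const double *b, int n)
{
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += a[i] * b[i];   /* the kind of tight loop -O3 expands */
    return sum;
}

int main(void)
{
    double a[] = {1, 2, 3}, b[] = {4, 5, 6};
    printf("%f\n", dot(a, b, 3));
    return 0;
}
```

For the 10% of a program that looks like `dot`, the bigger `-O3` code is the right call; for the remaining 90%, `-Os`-style restraint keeps the instruction cache happy.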
1. Such programs are usually not standard-conforming, so the optimizer is technically "correct", but it is still not doing what the developer intended.