This is an example to illustrate my question, which involves some much more complicated code that I can't post here.
#include <stdio.h>

int main()
{
    int i = 0;
    int a = 0;
    while (i < 10)            /* i never changes, so this loop is infinite */
    {
        a = a + 1000000000;   /* the third addition exceeds INT_MAX on a 32-bit int */
        printf("a = %d\n", a);
    }
}
An aggressively optimising C or C++ compiler targeting a 16-bit int will know that the behaviour of adding 1000000000 to an int type is undefined. It is permitted by either standard to do anything it wants, which could include deleting the entire program, leaving int main(){}.
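For what it's worth, here is a sketch of the kind of pre-check that would keep the addition well defined (this is my own variation, not the real code; INT_MAX comes from <limits.h>):

#include <limits.h>
#include <stdio.h>

int main(void)
{
    int a = 0;
    for (int n = 0; n < 10; ++n)
    {
        if (a > INT_MAX - 1000000000)   /* the next addition would overflow */
        {
            printf("stopping before the addition overflows\n");
            break;
        }
        a = a + 1000000000;             /* now guaranteed to stay in range */
    }
    printf("final a = %d\n", a);
}

With the guard in place there is no undefined behaviour for the optimiser to exploit, so it cannot legally throw the loop away.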
But what about larger ints? I don't know of a compiler that does this yet (and I'm not an expert in C and C++ compiler design by any means), but I imagine that some day a compiler targeting a 32-bit int or higher will figure out that the loop is infinite (i doesn't change) and so a will eventually overflow. So once again, it can optimise the output to int main(){}. The point I'm trying to make here is that as compiler optimisations become progressively more aggressive, more and more undefined-behaviour constructs are manifesting themselves in unexpected ways.
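For contrast, here is the same loop with the accumulator made unsigned (again my own variation): unsigned arithmetic wraps around by definition in both standards, so there is no undefined behaviour here, and a compiler is not permitted to reduce this program to int main(){}:

#include <stdio.h>

int main(void)
{
    int i = 0;
    unsigned int a = 0;
    while (i < 10)            /* still infinite: i never changes */
    {
        a = a + 1000000000u;  /* unsigned overflow wraps modulo 2^N: well defined */
        printf("a = %u\n", a);
    }
}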
The fact that your loop is infinite is not in itself undefined, since you are writing to standard output in the loop body: the latitude both standards give compilers to assume a loop terminates only applies to loops whose bodies perform no side effects such as I/O.
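A sketch of that distinction (my own code, with hypothetical function names, roughly following the C++ forward-progress rule and C11 6.8.5p6):

#include <stdio.h>

void no_side_effects(int i)
{
    while (i < 10) { }          /* no I/O, no volatile access: the compiler may
                                   assume this terminates, so if i never reaches
                                   10 the behaviour is undefined in C++ */
}

void with_side_effects(int i)
{
    while (i < 10)
        printf("still looping\n");  /* I/O is a side effect: this infinite loop
                                       is well defined, merely never-ending */
}

int main(void)
{
    with_side_effects(0);       /* runs (and prints) forever, which is allowed */
}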