What's the reason for letting the semantics of a=a++ be undefined?

后悔当初 2020-12-08 07:18
a = a++;

is undefined behaviour in C. The question I am asking is: why?

I mean, I get that it might be hard to provide a consistent order of evaluation, but why not just leave the result unspecified instead of making the whole behaviour undefined?

8 Answers
  •  不知归路
    2020-12-08 07:59

    Suppose a is a pointer with value 0x0001ffff, and suppose the architecture is segmented, so that the compiler must apply the increment to the high and low parts separately, with a carry between them. The optimiser could conceivably reorder the writes so that the final value stored is 0x0002ffff; that is, the low part from before the increment and the high part from after the increment.
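
    A small simulation in portable C of that torn write is shown below. The 16-bit high/low split is only an illustrative model of an 8086-style far pointer, not actual compiler output:

        #include <inttypes.h>
        #include <stdint.h>
        #include <stdio.h>

        int main(void) {
            uint32_t p = 0x0001ffffu;   /* model pointer: high half 0x0001, low half 0xffff */
            uint32_t correct = p + 1u;  /* 0x00020000: the low half wraps, carrying into the high half */

            /* Reordered stores, as described above: the low half is written
               from the pre-increment value, the high half from after the carry. */
            uint16_t low_before = (uint16_t)(p & 0xffffu);     /* 0xffff */
            uint16_t high_after = (uint16_t)((p + 1u) >> 16);  /* 0x0002 */
            uint32_t torn = ((uint32_t)high_after << 16) | low_before;

            printf("correct: 0x%08" PRIX32 "\n", correct);  /* 0x00020000 */
            printf("torn:    0x%08" PRIX32 "\n", torn);     /* 0x0002FFFF */
            return 0;
        }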

    This value is nowhere near either value you might have expected (the old 0x0001ffff or the correct 0x00020000). It may point to memory not owned by the application, or it may (in general) be a trapping representation, in which case the CPU may raise a hardware fault as soon as the value is loaded into a register, crashing the app. Even if it doesn't cause an immediate crash, it is a profoundly wrong value for the app to be using.

    The same kind of thing can happen with other basic types, and the C language allows even ints to have trapping representations. C tries to allow efficient implementation on a wide range of hardware. Getting efficient code on a segmented machine such as the 8086 is hard. By making this undefined behaviour, a language implementor has a bit more freedom to optimise aggressively. I don't know if it has ever made a performance difference in practice, but evidently the language committee wanted to give every benefit to the optimiser.
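
    To make the unsequenced writes concrete, here is a minimal sketch; the outputs mentioned in the comments are only common outcomes, since undefined behaviour permits anything at all:

        #include <stdio.h>

        int main(void) {
            int a = 1;
            a = a++;  /* undefined behaviour: the assignment and the post-increment
                         both write to a, with no sequencing between them */
            /* A conforming compiler may do anything here; typical builds
               print 1 or 2 depending on which write lands last. */
            printf("%d\n", a);
            return 0;
        }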
