In C99, is f()+g() undefined or merely unspecified?

Submitted by 核能气质少年 on 2019-12-17 07:23:09

Question


I used to think that in C99, even if the side-effects of functions f and g interfered, and although the expression f() + g() does not contain a sequence point, f and g would contain some, so the behavior would be unspecified: either f() would be called before g(), or g() before f().

I am no longer so sure. What if the compiler inlines the functions (which the compiler may decide to do even if the functions are not declared inline) and then reorders instructions? Could one get a result different from the above two? In other words, is this undefined behavior?

This is not because I intend to write this kind of thing; it is to choose the best label for such a statement in a static analyzer.


Answer 1:


The expression f() + g() contains a minimum of 4 sequence points; one before the call to f() (after all zero of its arguments are evaluated); one before the call to g() (after all zero of its arguments are evaluated); one as the call to f() returns; and one as the call to g() returns. Further, the two sequence points associated with f() occur either both before or both after the two sequence points associated with g(). What you cannot tell is which order the sequence points will occur in - whether the f-points occur before the g-points or vice versa.

Even if the compiler inlined the code, it has to obey the 'as if' rule - the code must behave the same as if the functions were not interleaved. That limits the scope for damage (assuming a non-buggy compiler).

So the sequence in which f() and g() are evaluated is unspecified. But everything else is pretty clean.
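To make the distinction concrete, here is a minimal sketch (with hypothetical f and g sharing a global, not taken from the question): because each call is bracketed by sequence points and the two calls may not interleave, the program can only print 11 (f called first) or 1 (g called first), never anything else.

#include <stdio.h>

static int state = 0;   /* shared state that makes the call order observable */

static int f(void) { state += 1;  return state; }
static int g(void) { state *= 10; return state; }

int main(void)
{
    /* Either f() runs to completion before g(), or g() before f():
       f first: 1 + 10 = 11;  g first: 0 + 1 = 1.
       Two well-defined outcomes, chosen in an unspecified way - not undefined. */
    printf("%d\n", f() + g());
    return 0;
}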


In a comment, supercat asks:

I would expect function calls in the source code remain as sequence points even if a compiler decides on its own to inline them. Does that remain true of functions declared "inline", or does the compiler get extra latitude?

I believe the 'as if' rule applies and the compiler doesn't get extra latitude to omit sequence points because it uses an explicitly inline function. The main reason for thinking that (being too lazy to look for the exact wording in the standard) is that the compiler is allowed to inline or not inline a function according to its rules, but the behaviour of the program should not change (except for performance).
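As a small illustration of that point (again a sketch with hypothetical functions, not wording from the standard), declaring the functions inline in C99 does not remove the sequence points at the calls, so the set of possible results is unchanged:

#include <stdio.h>

static int counter = 0;

static inline int f(void) { return ++counter; }
static inline int g(void) { counter *= 2; return counter; }

int main(void)
{
    /* f first: 1 + 2 = 3;  g first: 0 + 1 = 1.  No other result is allowed,
       whether or not the compiler chooses to inline the calls. */
    printf("%d\n", f() + g());
    return 0;
}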

Also, what can be said about the sequencing of (a(),b()) + (c(),d())? Is it possible for c() and/or d() to execute between a() and b(), or for a() or b() to execute between c() and d()?

  • Clearly, a executes before b, and c executes before d. I believe it is possible for c and d to be executed between a and b, though it is fairly unlikely that the compiler would generate the code like that; similarly, a and b could be executed between c and d. And although I used 'and' in 'c and d', that could be an 'or' - that is, any of these sequences of operation meets the constraints (a small test program below makes the chosen order observable):

    • Definitely allowed:
        • abcd
        • cdab
    • Possibly allowed (preserves a ≺ b, c ≺ d ordering):
        • acbd
        • acdb
        • cadb
        • cabd

     
    I believe that covers all possible sequences. See also the chat between Jonathan Leffler and AnArrayOfFunctions — the gist is that AnArrayOfFunctions does not think the 'possibly allowed' sequences are allowed at all.
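As mentioned above, here is a small sketch (with hypothetical helpers a, b, c and d) that records the order in which the four calls happen; on a conforming implementation it always prints a permutation in which 'a' precedes 'b' and 'c' precedes 'd'.

#include <stdio.h>

static char order[5];
static int  pos = 0;

static int mark(char who) { order[pos++] = who; return 0; }

static int a(void) { return mark('a'); }
static int b(void) { return mark('b'); }
static int c(void) { return mark('c'); }
static int d(void) { return mark('d'); }

int main(void)
{
    int r = (a(), b()) + (c(), d());
    (void)r;                       /* the value is 0; only the order matters */
    order[pos] = '\0';
    /* Typically prints "abcd" or "cdab"; the other interleavings listed
       above would also satisfy the a-before-b and c-before-d constraints. */
    printf("%s\n", order);
    return 0;
}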

If such a thing were possible, that would imply a significant difference between inline functions and macros.

There are significant differences between inline functions and macros, but I don't think the ordering in the expression is one of them. That is, any of the functions a, b, c or d could be replaced with a macro, and the same sequencing of the macro bodies could occur. The primary difference, it seems to me, is that with the inline functions, there are guaranteed sequence points at the function calls - as outlined in the main answer - as well as at the comma operators. With macros, you lose the function-related sequence points. (So, maybe that is a significant difference...)

However, in so many ways the issue is rather like questions about how many angels can dance on the head of a pin - it isn't very important in practice. If someone presented me with the expression (a(),b()) + (c(),d()) in a code review, I would tell them to rewrite the code to make it clear:

a();
c();
x = b() + d();

And that assumes there is no critical sequencing requirement on b() vs d().




Answer 2:


See Annex C for a list of sequence points. Function calls (the point between all arguments being evaluated and execution passing to the function) are sequence points. As you've said, it's unspecified which function gets called first, but each of the two functions will either see all the side effects of the other, or none at all.
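A minimal sketch of that "all or none" guarantee, with hypothetical f and g: f updates two globals, and because the two calls cannot interleave, g can never observe only one of the two updates.

#include <assert.h>

static int x = 0, y = 0;

static int f(void) { x += 1; y += 1; return 0; }
static int g(void) { assert(x == y); return 0; }   /* sees both updates or neither */

int main(void)
{
    return f() + g();   /* the assertion holds whichever call happens first */
}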




Answer 3:


@dmckee

Well, that won't fit inside a comment, but here is the thing:

First, you write a correct static analyzer. "Correct", in this context, means that it won't remain silent if there is anything dubious about the analyzed code, so at this stage you merrily conflate undefined and unspecified behaviors. They are both bad and unacceptable in critical code, and you warn, rightly, for both of them.

But you only want to warn once for each possible bug, and you also know that your analyzer will be judged in benchmarks in terms of "precision" and "recall" when compared to other, possibly not correct, analyzers, so you mustn't warn twice about the same problem - be it a true or a false alarm (you don't know which; you never know which, otherwise it would be too easy).

So you want to emit a single warning for

*p = x;
y = *p;

Because if p is a valid pointer at the first statement, it is still a valid pointer at the second statement. Not inferring this will lower your score on the precision metric.

So you teach your analyzer to assume that p is a valid pointer as soon as you have warned about it the first time in the above code, so that you don't warn about it the second time. More generally, you teach it to ignore values (and execution paths) that correspond to something you have already warned about.

Then, you realize that not many people are writing critical code, so you make other, lightweight analyses for the rest of them, based on the results of the initial, correct analysis. Say, a C program slicer.

And you tell "them": you don't have to check all the (possibly, and often actually, false) alarms emitted by the first analysis. The sliced program behaves the same as the original program as long as none of them is triggered. The slicer produces programs that are equivalent, with respect to the slicing criterion, for "defined" execution paths.

And users merrily ignore the alarms and use the slicer.

And then you realize that perhaps there is a misunderstanding. For instance, most implementations of memmove (you know, the one that handles overlapping blocks) actually invoke unspecified behavior when called with pointers that do not point to the same block (they compare addresses that do not point to the same block). And your analyzer ignores both execution paths, because both are unspecified, but in reality both execution paths are equivalent and all is well.
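For reference, this is the classic pattern being alluded to - a sketch, not any particular libc's source - where the copy direction is chosen by comparing the two pointers, a comparison the standard only defines when both pointers point into the same object:

#include <stddef.h>

void *my_memmove(void *dst, const void *src, size_t n)
{
    unsigned char       *d = dst;
    const unsigned char *s = src;

    if (d < s) {                     /* the comparison in question */
        while (n--) *d++ = *s++;     /* copy forwards */
    } else {
        d += n;
        s += n;
        while (n--) *--d = *--s;     /* copy backwards */
    }
    return dst;
}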

So there shouldn't be any misunderstanding about the meaning of alarms, and if one intends to ignore them, only unmistakable undefined behaviors should be excluded.

And this is how you end up with a strong interest in distinguishing between unspecified behavior and undefined behavior. No-one can blame you for ignoring the latter. But programmers will write the former without even thinking about it, and when you say that your slicer excludes "wrong behaviors" of the program, they will not feel that it concerns them.

And this is the end of a story that definitely did not fit in a comment. Apologies to anyone who read that far.



Source: https://stackoverflow.com/questions/3951017/in-c99-is-fg-undefined-or-merely-unspecified
