Why might compile-time floating point calculations not have the same results as run-time calculations?

Submitted by 安稳与你 on 2021-02-07 06:45:06

Question


In constexpr: Introduction, the speaker mentioned "Compile-time floating point calculations might not have the same results as runtime calculations":

And the reason is related to "cross-compiling".

Honestly, I can't quite grasp the idea. IMHO, different platforms may also have different implementations of integers.

Why does it only affect floating point? Or am I missing something?


Answer 1:


You're absolutely right that, at some level, the problem of calculating floating-point values at compile time is the same as that of calculating integer values. The difference is in the complexity of the task. It's fairly easy to emulate 24-bit integer math on a system that has 16-bit registers; for serious programmers, that's a finger exercise. It's much harder to do floating-point math if you don't have a native implementation. The decision to not require floating-point constexpr is based in part on that difference: it would be really expensive to require cross-compilers to emulate floating-point math for their target platform at compile time.

Another factor in this is that some details of floating-point calculations can be set at runtime. Rounding is one; handling of overflows and underflows is another. There's simply no way that a compiler can know the full context for the runtime evaluation of a floating-point calculation, so calculating the result at compile-time can't be done reliably.




Answer 2:


Why does it only affect floating points?

Because the standard imposes no restrictions on the accuracy of floating-point operations.

As per [expr.const]:

[ Note: Since this document imposes no restrictions on the accuracy of floating-point operations, it is unspecified whether the evaluation of a floating-point expression during translation yields the same result as the evaluation of the same expression (or the same operations on the same values) during program execution. [ Example:

bool f() {
    char array[1 + int(1 + 0.2 - 0.1 - 0.1)];  // Must be evaluated during translation
    int size = 1 + int(1 + 0.2 - 0.1 - 0.1);   // May be evaluated at runtime
    return sizeof(array) == size;
}

It is unspecified whether the value of f() will be true or false. — end example ]
— end note ]




Answer 3:


Why does it only affect floating points?

Some operations on integers are invalid and undefined:

  • divide by zero: mathematical operation not defined for the operand
  • overflow: mathematical value not representable for the given type

[The compiler will detect such cases in compile-time values. At runtime, the behavior is not defined by the standard and can be anything: raising a signal, wrapping modulo 2^N, or "random" behavior if the compiler assumed the operations were valid.]

Operations on integers that are valid are completely specified mathematically.

Division of integers in C/C++ (and most programming languages) is an exact, fully specified operation. It truncates toward zero rather than trying to find a close approximation of the division of rationals: 5/3 is 1, even though the decimal expansion of 5/3 is 1.66... (approximately 1.66666667) and the closest integer is 2.

The aim of floating point is to provide the best approximation of the mathematical operation on "real numbers" (actually rational numbers for the four basic operations; floats are rational by definition). These operations are rounded according to the current rounding mode, set with std::fesetround. So floating-point operations are state-dependent: the result is not a function of the operands alone. (See std::fegetround, std::fesetround.)

There is no such "state" at compile time, so compile-time floating-point operations cannot, by definition, be consistent with run-time operations.



来源:https://stackoverflow.com/questions/50959021/why-compile-time-floating-point-calculations-might-not-have-the-same-results-as
