Question
Would anyone be able to explain why int and const int give different results when cast to float and used in floating point math? See for example this piece of code:
#include <stdio.h>
#include <tchar.h>

int _tmain(int argc, _TCHAR* argv[])
{
    int x = 1000;
    const int y = 1000;
    float fx = (float) x;
    float fy = (float) y;
    printf("(int = 1000) * 0.3f = %4.10f \n", 0.3f*x);
    printf("(const int = 1000) * 0.3f = %4.10f \n", 0.3f*y);
    printf("(float)(int = 1000) * 0.3f = %4.10f \n", 0.3f*fx);
    printf("(float)(const int = 1000) * 0.3f = %4.10f \n", 0.3f*fy);
    return 0;
}
The result is:
(int = 1000) * 0.3f = 300.0000119209
(const int = 1000) * 0.3f = 300.0000000000
(float)(int = 1000) * 0.3f = 300.0000119209
(float)(const int = 1000) * 0.3f = 300.0000119209
My guess is that in the first case 0.3f * (int) is implicitly evaluated as a float, whereas in the second case 0.3f * (const int) is implicitly evaluated as a double. Is this correct, and if so, why does it happen? Also, what is the "right" approach?
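For what it's worth, asking the compiler for the types suggests both products have the same static type, float (a small C++11 sketch, reusing x and y from the snippet above):

#include <type_traits>

int main()
{
    int x = 1000;
    const int y = 1000;
    (void)x; (void)y;

    // Both products have the same static type, float: the const
    // qualifier does not change the usual arithmetic conversions.
    static_assert(std::is_same<decltype(0.3f * x), float>::value,
                  "0.3f * int is a float");
    static_assert(std::is_same<decltype(0.3f * y), float>::value,
                  "0.3f * const int is also a float");

    return 0;
}

Both assertions compile cleanly, so the difference does not seem to come from the declared type of the expressions.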
Many thanks
Answer 1:
The multiplication of two compile-time constants can be performed by the compiler before the code is even generated: 0.3f and the const int y are both constants, so 0.3f * y is folded into a single float constant, and 300.000011920928955... rounds to exactly 300.0f. The other three products must be computed at run time, and there the compiler evidently carries the multiplication out in double (0.3f widened first) and passes that double straight to printf, so the rounding error of 0.3f (it is really about 0.3000000119) survives as 300.0000119209.
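To see the two effects separately, here is a minimal standard-C sketch (plain main instead of _tmain/_TCHAR; the exact run-time behaviour depends on the compiler's floating-point settings, but the output above matches a product carried out in double):

#include <stdio.h>

int main(void)
{
    /* 0.3f is really the nearest float to 0.3; widening it to double
       makes the stored value visible. */
    printf("0.3f widened to double        = %.10f \n", (double)0.3f);

    /* Product rounded back to float: 300.0000119209... rounds to the
       nearest float, which is exactly 300.0f. This matches what the
       compile-time fold of 0.3f * y printed. */
    float folded = 0.3f * 1000.0f;
    printf("product rounded to float      = %4.10f \n", folded);

    /* Product carried out in double (0.3f widened first): the rounding
       error of 0.3f survives, matching 300.0000119209 above. */
    double widened = (double)0.3f * 1000.0;
    printf("product carried out in double = %4.10f \n", widened);

    return 0;
}

With IEEE float and double this prints 0.3000000119, 300.0000000000 and 300.0000119209, reproducing both values from the question.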
Source: https://stackoverflow.com/questions/17066385/different-results-when-casting-int-and-const-int-to-float