Why doesn't a compiler optimize floating-point *2 into an exponent increment?

Asked by 悲&欢浪女 on 2021-02-06 20:45

I've often noticed gcc converting multiplications into shifts in the executable. Something similar might happen when multiplying an int and a float. For example, 2 * f might simply increment the exponent of f by 1, saving some cycles. So why don't compilers do this?
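
To illustrate the premise about integer multiplies, here is a minimal pair (a sketch; the function names are mine, and the exact instructions depend on the target and flags, but this is what gcc -O2 on x86-64 typically produces):

```c
/* Sketch, assuming x86-64 and gcc -O2; exact output varies. */
int   itimes2(int x)   { return x * 2; }    /* strength-reduced to lea/shl   */
float ftimes2(float f) { return f * 2.0f; } /* stays a real FP op: mulss, or
                                               addss since f + f is exact    */
```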

9 Answers
  •  没有蜡笔的小新 · 2021-02-06 21:41

    "For example, 2 * f might simply increment the exponent of f by 1, saving some cycles."

    This simply isn't true.

    First you have too many corner cases such as zero, infinity, NaN, and denormals (the sketch at the end of this answer shows each of them going wrong). Then you have the performance issue.

    The misunderstanding is the assumption that incrementing the exponent is faster than doing a multiplication. It isn't.

    If you look at the hardware instructions, there is no direct way to increment the exponent. So what you need to do instead is:

    1. Bitwise-convert the float into an integer.
    2. Increment the exponent field.
    3. Bitwise-convert back to floating-point.

    There is generally a medium to large latency for moving data between the integer and floating-point execution units. So in the end, this "optimization" becomes much worse than a simple floating-point multiply.
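
    Here is that sequence written out in portable C (a minimal sketch; naive_times2 is a hypothetical name, and the layout assumed is IEEE-754 binary32). It works for ordinary values and visibly breaks on every corner case listed above:

```c
#include <math.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical "multiply by 2 via the exponent": the three steps above,
 * assuming IEEE-754 binary32. Only correct for normal, finite, nonzero
 * inputs. */
static float naive_times2(float f) {
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);   /* 1. bitwise-convert into integer  */
    bits += UINT32_C(1) << 23;        /* 2. increment the 8-bit exponent  */
    memcpy(&f, &bits, sizeof f);      /* 3. bitwise-convert back to float */
    return f;
}

int main(void) {
    printf("%g\n", naive_times2(3.0f));     /* 6: normal values work        */
    printf("%g\n", naive_times2(0.0f));     /* ~1.17549e-38 instead of 0    */
    printf("%g\n", naive_times2(1e-41f));   /* denormal in, wrong value out */
    printf("%g\n", naive_times2(INFINITY)); /* -0: carry into the sign bit  */
    printf("%g\n", naive_times2(NAN));      /* typically a tiny negative
                                               denormal, no longer a NaN    */
    return 0;
}
```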

    So the reason the compiler doesn't do this "optimization" is that it isn't any faster.
