Why not always use compiler optimization?

Asked by 野性不改 on 2020-12-08 02:17

One of the questions that I asked some time ago had undefined behavior, so compiler optimization was actually causing the program to break.

But if there is no undefined behavior, is there any reason not to always compile with optimization enabled?

9 Answers
  • 2020-12-08 03:07

    If there is no undefined behavior, but there is definite broken behavior (either deterministic normal bugs, or indeterminate like race-conditions), it pays to turn off optimization so you can step through your code with a debugger.

    Typically, when I reach this kind of state, I like to do a combination of:

    1. debug build (no optimizations) and step through the code
    2. sprinkled diagnostic statements to stderr so I can easily trace the run path

    If the bug is more devious, I pull out valgrind and drd, and add unit-tests as needed, both to isolate the problem and to ensure that, once the problem is found, the solution works as expected.

    In some extremely rare cases, the debug code works but the release code fails. When this happens, the problem is almost always in my code; aggressive optimization in release builds can reveal bugs caused by misunderstood lifetimes of temporaries and the like. But even in this kind of situation, having a debug build helps to isolate the issue.

    In short, there are some very good reasons why professional developers build and test both debug (non-optimized) and release (optimized) binaries. IMHO, having both debug and release builds pass unit-tests at all times will save you a lot of debugging time.
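    A minimal sketch of the "diagnostic statements to stderr" idea: `TRACE` here is a hypothetical macro, not a standard facility. It logs to stderr in debug builds and compiles away entirely when `NDEBUG` is defined, which is the conventional marker for release builds (and also what disables `assert`):

    ```cpp
    #include <cassert>
    #include <iostream>

    // Hypothetical diagnostic macro: active in debug builds,
    // expands to nothing when NDEBUG is defined (release builds).
    #ifndef NDEBUG
    #define TRACE(msg) (std::cerr << __FILE__ << ":" << __LINE__ << " " << (msg) << "\n")
    #else
    #define TRACE(msg) ((void)0)
    #endif

    int accumulate_squares(int n) {
        int total = 0;
        for (int i = 1; i <= n; ++i) {
            total += i * i;
            TRACE("added square, running total updated");
        }
        // Sanity check that, like TRACE, only runs in debug builds.
        assert(total >= 0 && "unexpected overflow");
        return total;
    }

    int main() {
        std::cout << accumulate_squares(4) << "\n"; // 1 + 4 + 9 + 16 = 30
    }
    ```

    Building this with `-O0 -g` gives you the traces and a clean debugger experience; building with `-O2 -DNDEBUG` gives you the release binary, and running your unit-tests against both catches the "works in debug, fails in release" cases early.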

  • 2020-12-08 03:13

    Compiler optimisations have two disadvantages:

    1. Optimisations will almost always rearrange and/or remove code. This will reduce the effectiveness of debuggers, because there will no longer be a 1 to 1 correspondence between your source code and the generated code. Parts of the stack may be missing, and stepping through instructions may end up skipping over parts of the code in counterintuitive ways.
    2. Optimisation is usually expensive to perform, so your code will take longer to compile with optimisations turned on than otherwise. It is difficult to do anything productive while your code is compiling, so obviously shorter compile times are a good thing.

    Some of the optimisations performed by -O3 can result in larger executables. This might not be desirable in some production code.

    Another reason to not use optimisations is that the compiler that you are using may contain bugs that only exist when it is performing optimisation. Compiling without optimisation can avoid those bugs. If your compiler does contain bugs, a better option might be to report/fix those bugs, to change to a better compiler, or to write code that avoids those bugs completely.

    If you want to be able to perform debugging on the released production code, then it might also be a good idea to not optimise the code.

  • 2020-12-08 03:14

    Two big reasons that I have seen arise from floating point math, and from overly aggressive inlining. The former is caused by the fact that floating point math is very loosely specified by the C++ standard. Many processors perform calculations using 80 bits of precision, for instance, only dropping down to 64 bits when the value is put back into main memory. If one version of a routine flushes that value to memory frequently, while another only grabs the value once at the end, the results of the calculations can be slightly different. Just tweaking the optimization settings for that routine may well be a better move than refactoring the code to be more robust to the differences.
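    The 80-bit behaviour is hardware-specific (x87), but the underlying issue — that the precision and ordering of intermediate operations changes results — shows up even with plain IEEE doubles. This is exactly why optimizer transformations like reassociation (which compilers only apply under flags such as `-ffast-math`) can change your answers:

    ```cpp
    #include <iostream>

    int main() {
        double big = 1e20;
        // Floating point addition is not associative. Adding 1.0 to -1e20
        // rounds straight back to -1e20 (1.0 is far below one ulp of 1e20),
        // so the two groupings below give different results.
        double left  = (big + -big) + 1.0;  // 0.0 + 1.0      == 1.0
        double right = big + (-big + 1.0);  // 1e20 + (-1e20) == 0.0
        std::cout << left << " " << right << "\n"; // prints "1 0"
    }
    ```

    An optimizer that reorders these additions silently moves you from one answer to the other, which is why bit-exact reproducibility across optimization levels is hard to guarantee for floating point code.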

    Inlining can be problematic because, by its very nature, it generally results in larger object files. Perhaps this increase in code size is unacceptable for practical reasons: the binary needs to fit on a device with limited memory, for instance. Or perhaps the increase in code size makes the code slower. If a routine becomes big enough that it no longer fits in cache, the resultant cache misses can quickly outweigh the benefits inlining provided in the first place.
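    When this happens, you usually don't need to turn off `-O3` wholesale; GCC and Clang let you override the inliner per function with attributes. A small sketch (the function names are made up for illustration):

    ```cpp
    #include <iostream>

    // GCC/Clang-specific attributes: per-function overrides for the inliner,
    // useful when aggressive inlining under -O3 bloats a large, rarely-hit routine.
    __attribute__((noinline)) int cold_path(int x) {
        return x * 3; // kept out of line: called rarely, keeps callers small
    }

    __attribute__((always_inline)) inline int hot_path(int x) {
        return x + 1; // forced inline: tiny and called in tight loops
    }

    int main() {
        std::cout << hot_path(cold_path(2)) << "\n"; // prints 7
    }
    ```

    This way the rest of the translation unit still gets full optimization, and only the problematic routine is exempted.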

    I frequently hear of people who, when working in a multi-threaded environment, turn off debugging and immediately encounter hordes of new bugs due to newly uncovered race conditions and whatnot. The optimizer just revealed the underlying buggy code here, though, so turning it off in response is probably ill advised.
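    The canonical example of such a "newly uncovered" race is spinning on a plain `bool` flag: the code appears to work at `-O0`, but an optimizing compiler is entitled to hoist the load out of the loop and spin forever. The fix is not to disable optimization but to state the synchronization, e.g. with `std::atomic`. A sketch:

    ```cpp
    #include <atomic>
    #include <chrono>
    #include <iostream>
    #include <thread>

    // With a plain `bool done`, the compiler may load the flag once and loop
    // forever under -O2, because non-atomic cross-thread access is a data race
    // (undefined behavior). std::atomic<bool> forces a fresh load each iteration.
    std::atomic<bool> done{false};

    void worker() {
        while (!done.load(std::memory_order_acquire)) {
            std::this_thread::yield();
        }
        std::cout << "worker observed done\n";
    }

    int main() {
        std::thread t(worker);
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
        done.store(true, std::memory_order_release);
        t.join();
    }
    ```

    This version behaves the same at every optimization level, which is the real goal: code whose correctness does not depend on the optimizer being turned off.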
