Determine optimization level in preprocessor?

甜味超标 2020-12-17 09:10

-Og is a relatively new optimization option that is intended to improve the debugging experience while applying optimizations. If a user selects -Og, is there a way to detect this in the preprocessor?

2 Answers
  • 2020-12-17 09:23

I don't know if this is a clever hack, but it is a hack: diff the predefined macros with and without optimization enabled.

    $ gcc -Xpreprocessor -dM -E - < /dev/null > 1
    $ gcc -Xpreprocessor -dM -O -E - < /dev/null > 2
    $ diff 1 2
    53a54
    > #define __OPTIMIZE__ 1
    68a70
    > #define _FORTIFY_SOURCE 2
    154d155
    < #define __NO_INLINE__ 1
    

clang didn't produce the _FORTIFY_SOURCE one.

  • 2020-12-17 09:50

I believe it is not possible to directly know the optimization level used to compile the software, as it is not in the list of predefined preprocessor symbols.

You could rely on -DNDEBUG (no debug), which is conventionally used to disable assertions in release builds, and enable your "debug" code path when it is not defined.

However, I believe a better approach is to have a project-wide set of symbols local to your project and let the user choose what to use explicitly:

    • MYPROJECT_DNDEBUG
    • MYPROJECT_OPTIMIZE
    • MYPROJECT_OPTIMIZE_AGGRESSIVELY

This makes debugging, and understanding differences in behavior between release and debug builds, much easier, as you can incrementally turn the different behaviors on and off.
