Why #define TRUE (1==1) in a C boolean macro instead of simply as 1?

清歌不尽 2020-12-12 11:55

I've seen definitions in C

#define TRUE (1==1)
#define FALSE (!TRUE)

Is this necessary? What's the benefit over simply defining TRUE as 1?

8 Answers
  •  一生所求  2020-12-12 12:35

    The answer is portability. The numeric values of TRUE and FALSE aren't important. What is important is that a statement like if (1 < 2) evaluates to if (TRUE) and a statement like if (1 > 2) evaluates to if (FALSE).

    Granted, in C, (1 < 2) evaluates to 1 and (1 > 2) evaluates to 0, so as others have said, there's no practical difference as far as the compiler is concerned. But by defining TRUE and FALSE in terms of the compiler's own comparison rules, you make their meanings explicit to programmers, and you guarantee consistency within your program and any other library (assuming the other library follows C standards ... you'd be amazed).
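
    A minimal sketch of the point (the printf is just illustrative):

    #include <stdio.h>

    #define TRUE  (1==1)   /* whatever a true comparison yields */
    #define FALSE (!TRUE)  /* its logical negation */

    int main(void)
    {
        /* Standard C guarantees that a true comparison yields the
           int value 1, so this prints "TRUE = 1, FALSE = 0" -- but
           the macros state the intent without hard-coding numbers. */
        printf("TRUE = %d, FALSE = %d\n", TRUE, FALSE);
        return 0;
    }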


    Some History
    Some BASICs defined FALSE as 0 and TRUE as -1. Like many modern languages, they interpreted any non-zero value as TRUE, but they evaluated boolean expressions that were true as -1. Their NOT operation was implemented by adding 1 and flipping the sign, because it was efficient to do it that way. So 'NOT x' became -(x+1). A side effect of this is that a value like 5 evaluates to TRUE, but NOT 5 evaluates to -6, which is also TRUE! Finding this sort of bug is not fun.

    Best Practices
    Given C's rule that zero is interpreted as FALSE and any non-zero value is interpreted as TRUE, you should never compare a boolean-looking expression against TRUE or FALSE. Examples:

    if (thisValue == FALSE)  // Don't do this!
    if (thatValue == TRUE)   // Or this!
    if (otherValue != TRUE)  // Whatever you do, don't do this!
    

    Why? Because many programmers use the shortcut of treating ints as bools. They aren't the same, but compilers generally allow it. So, for example, it's perfectly legal to write

    if (strcmp(yourString, myString) == TRUE)  // Wrong!!!
    

    That looks legitimate, and the compiler will happily accept it, but it probably doesn't do what you'd want. That's because the return value of strcmp() is

         0 if yourString == myString
        <0 if yourString <  myString
        >0 if yourString >  myString

    So the comparison above is true only when strcmp() happens to return exactly 1. It may do that when yourString > myString, but the standard only promises some positive value, so even that isn't guaranteed.
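
    A quick demonstration of the trap (the strings are made up for illustration):

    #include <stdio.h>
    #include <string.h>

    #define TRUE  (1==1)
    #define FALSE (!TRUE)

    int main(void)
    {
        const char *yourString = "apple";
        const char *myString   = "apple";

        /* The strings are equal, so strcmp() returns 0 -- which is
           FALSE, not TRUE, and this branch is (wrongly) skipped. */
        if (strcmp(yourString, myString) == TRUE)
            printf("equal?\n");   /* never printed */

        /* The correct equality test: */
        if (strcmp(yourString, myString) == 0)
            printf("equal!\n");   /* printed */

        return 0;
    }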

    The right way to do this is either

    // Valid, but still treats int as bool.
    if (strcmp(yourString, myString))
    

    or

    // Better: linguistically clear, compiler will optimize.
    if (strcmp(yourString, myString) != 0)
    

    Similarly:

    if (someBoolValue == FALSE)     // Redundant.
    if (!someBoolValue)             // Better.
    return (x > 0) ? TRUE : FALSE;  // You're fired.
    return (x > 0);                 // Simpler, clearer, correct.
    if (ptr == NULL)                // Perfect: compares pointers.
    if (!ptr)                       // Sleazy, but short and valid.
    if (ptr == FALSE)               // Whatisthisidonteven.
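
    The same trap bites with any truthy value other than 1; a short sketch (the value 5 is arbitrary):

    #include <stdio.h>

    #define TRUE  (1==1)
    #define FALSE (!TRUE)

    int main(void)
    {
        int flag = 5;          /* non-zero, hence "true" in C */

        if (flag)
            printf("taken: any non-zero value is true\n");

        if (flag == TRUE)      /* but 5 != 1, so this is skipped */
            printf("never printed\n");

        return 0;
    }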
    

    You'll often find some of these "bad examples" in production code, and many experienced programmers swear by them: they work, some are shorter than their (pedantically?) correct alternatives, and the idioms are almost universally recognized. But consider: the "right" versions are no less efficient, they're guaranteed to be portable, they'll pass even the strictest linters, and even new programmers will understand them.

    Isn't that worth it?
