I have a co-worker who maintains that TRUE used to be defined as 0 and all other values were FALSE. I could swear that every language I've worked with, if you could even
Several functions in the C standard library return an integer 'error code' as their result. Since noErr is defined as 0, a quick check is 'if it's 0, it's OK'. The same convention carried over to a Unix process's 'result code'; that is, an integer that gives some indication of how a given process finished.
In Unix shell scripting, the result code of the command just executed is available, and is typically used to decide whether the command 'succeeded' or not, with 0 meaning success and anything else indicating a specific non-success condition.
From that, all test-like constructs in shell scripts use 'success' (that is, a result code of 0) to mean TRUE, and anything else to mean FALSE.
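A short shell sketch of the same idea, using the standard `true`, `false`, and `[` utilities:

```shell
#!/bin/sh
# The 'true' and 'false' utilities embody the convention directly:
# exit status 0 is "success"/TRUE, nonzero is FALSE.
true
echo "true  exited with $?"

false
echo "false exited with $?"

# 'test' (a.k.a. '[') follows the same rule: it exits 0 when the
# condition holds, which is why 'if' treats 0 as TRUE.
if [ 1 -lt 2 ]; then
    echo "exit status 0 means the condition held"
fi
```

So in shell logic the sense is inverted relative to C's in-language booleans: 0 is the "true" branch, because 0 is success.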
On a totally different plane, digital circuits frequently use 'negative logic'. That is, even though 0 volts is called 'binary 0' and some positive value (commonly +5 V or +3.3 V, though nowadays +1.8 V is not rare) is called 'binary 1', some events are 'asserted' by a given pin going to 0. I think there are some noise-resistance advantages, but I'm not sure about the exact reasons.
Note, however, that there's nothing 'ancient' about this, nor any 'switching time' when the meaning flipped. Everything above is based on old conventions, but those conventions are totally current and relevant today.