The new C++ standard still refuses to specify the binary representation of integer types. Is this because there are real-world implementations of C++ that don't use 2's complement?
I suppose it is because the Standard says, in 3.9.1 [basic.fundamental]/7:
this International Standard permits 2’s complement, 1’s complement and signed magnitude representations for integral types.
which, I am willing to bet, was carried over from the C programming language, which lists sign and magnitude, two's complement, and one's complement as the only allowed representations in 6.2.6.2/2. And there certainly were 1's complement systems around when C was widespread: UNIVACs seem to be the ones most often mentioned.
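If you are curious what the difference looks like in practice, here is a minimal sketch of my own (nothing the Standard prescribes) that probes the two low bits of -1, whose pattern differs under each of the three permitted representations; the comments spell out the assumed bit layouts:

```cpp
#include <iostream>

int main() {
    // The bit pattern of -1 differs between the three representations the
    // Standard allows (looking only at the two lowest value bits):
    //   2's complement:      ...11111111  ->  (-1 & 3) == 3
    //   1's complement:      ...11111110  ->  (-1 & 3) == 2
    //   sign and magnitude:  1...0000001  ->  (-1 & 3) == 1
    switch (-1 & 3) {
        case 3: std::cout << "2's complement\n";     break;
        case 2: std::cout << "1's complement\n";     break;
        case 1: std::cout << "sign and magnitude\n"; break;
    }
}
```

On every mainstream implementation today this prints "2's complement"; you would need one of those UNIVAC-style machines to see anything else.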