How machine code distinguishes between signed and unsigned values
Question: I was reading a textbook that says:

"It is important to note how machine code distinguishes between signed and unsigned values. Unlike in C, it does not associate a data type with each program value. Instead, it mostly uses the same (assembly) instructions for the two cases, because many arithmetic operations have the same bit-level behavior for unsigned and two's-complement arithmetic."

I don't understand what this means. Could anyone provide an example?

Answer 1: For example, this code:

int main() {