Why are integer types promoted during addition in C?

礼貌的吻别 2020-12-06 09:45

So we had a field issue, and after days of debugging, narrowed down the problem to this particular bit of code, where the processing in a while loop wasn't happening:

4 Answers
  •  离开以前
    2020-12-06 10:00

    When the C language was being developed, it was desirable to minimize the number of kinds of arithmetic compilers had to deal with. Thus, most math operators (e.g. addition) supported only int+int, long+long, and double+double. While the language could have been simplified by omitting int+int (promoting everything to long instead), arithmetic on long values generally takes 2-4 times as much code as arithmetic on int values; since most programs are dominated by arithmetic on int types, that would have been very costly.

    Promoting float to double, by contrast, will in many cases save code, because it means that only two functions are needed to support float: convert to double, and convert from double. All other floating-point arithmetic operations then need to support only one floating-point type, and since floating-point math is often done by calling library routines, the cost of calling a routine to add two double values is often the same as the cost of calling a routine to add two float values.
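
    To make those promotion rules concrete, here is a minimal sketch in modern C (assuming a typical platform where short is 16 bits and int is 32 bits). The integer promotions convert both short operands to int before the addition, and the default argument promotions still pass a float to a variadic function as a double:

        #include <stdio.h>

        int main(void)
        {
            short a = 1, b = 2;
            float f = 1.5f;

            /* Integer promotions: both short operands are converted to int,
               so a + b has type int (sizeof prints 4, not 2, on the
               assumed platform). */
            printf("sizeof(a + b) = %zu, sizeof(int) = %zu\n",
                   sizeof(a + b), sizeof(int));

            /* Default argument promotions: a float passed to a variadic
               function such as printf is converted to double, so the
               callee only has to handle one floating-point width. */
            printf("f = %f\n", f);
            return 0;
        }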

    Unfortunately, the C language became widespread on a variety of platforms before anyone really figured out what 0xFFFF + 1 should mean, and by that time there were already some compilers where the expression yielded 65536 and some where it yielded zero. Consequently, writers of standards have endeavored to write them in a fashion that would allow compilers to keep on doing whatever they were doing, but which was rather unhelpful from the standpoint of anyone hoping to write portable code. Thus, on platforms where int is 32 bits, 0xFFFF + 1 will yield 65536, and on platforms where int is 16 bits, it will yield zero. If on some platform int happened to be 17 bits, 0xFFFF would be the largest signed int value, so 0xFFFF + 1 would be a signed overflow and would authorize the compiler to negate the laws of time and causality [btw, I don't know of any 17-bit platforms, but there are some 32-bit platforms where uint16_t x=0xFFFF; uint16_t y=x*x; will cause the compiler to garble the behavior of code which precedes it].
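
    The following sketch (again assuming 16-bit uint16_t and 32-bit int) shows both effects: 0xFFFF + 1 is an int addition that yields 65536 rather than wrapping, while x * x on uint16_t operands is really a signed int multiplication, so it overflows unless one operand is widened explicitly:

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
            /* With a 32-bit int, 0xFFFF has type int, so the addition is
               done in int and yields 65536; with a 16-bit int, 0xFFFF
               would have type unsigned int and the sum would wrap to 0. */
            printf("0xFFFF + 1 = %ld\n", (long)(0xFFFF + 1));

            /* The trap from the answer: both uint16_t operands of x * x
               are promoted to (signed) int, and 65535 * 65535 does not fit
               in a 32-bit int, so `uint16_t y = x * x;` is undefined
               behavior. Widening one operand first keeps it well defined. */
            uint16_t x = 0xFFFF;
            uint16_t y = (uint16_t)((uint32_t)x * x);
            printf("x * x, computed without overflow = %u\n", (unsigned)y);
            return 0;
        }

    Because signed overflow is undefined, an optimizer is allowed to assume the unwidened x * x cannot overflow, which is how it can end up affecting code that precedes it.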
