Addition of int and uint

爱一瞬间的悲伤 2020-12-06 09:30

I'm surprised by the C# compiler's behavior in the following example:

int i = 1024;
uint x = 2048;
x = x + i;   // error CS0266: Cannot implicitly convert type 'long' to 'uint'.
             // An explicit conversion exists (are you missing a cast?)


        
6 Answers
  •  一向
     2020-12-06 10:13

    Why int + int = int and uint + uint = uint, but int + uint = long? What is the motivation for this decision?

    The way the question is phrased implies the presupposition that the design team wanted int + uint to be long, and chose type rules to attain that goal. That presupposition is false.

    Rather, the design team thought:

    • What mathematical operations are people most likely to perform?
    • What mathematical operations can be performed safely and efficiently?
    • What conversions between numeric types can be performed without loss of magnitude and precision?
    • How can the rules for operator resolution be made both simple and consistent with the rules for method overload resolution?

    As well as many other considerations such as whether the design works for or against debuggable, maintainable, versionable programs, and so on. (I note that I was not in the room for this particular design meeting, as it predated my time on the design team. But I have read their notes and know the kinds of things that would have concerned the design team during this period.)

    Investigating these questions led to the present design: that arithmetic operations are defined as int + int --> int, uint + uint --> uint, long + long --> long, int may be converted to long, uint may be converted to long, and so on.
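
    For instance, both of the widening conversions below compile implicitly, while the mixed-sign assignments in the commented-out lines do not (a small illustration; the variable names are just examples):

        int  i = 1024;
        uint u = 2048;

        long fromInt  = i;   // implicit: every int value fits in a long
        long fromUint = u;   // implicit: every uint value fits in a long
        // uint v = i;       // error CS0266: int -> uint could lose the sign
        // int  w = u;       // error CS0266: uint -> int could lose magnitude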

    A consequence of these decisions is that when adding uint + int, overload resolution chooses long + long as the closest match, and long + long is long, therefore uint + int is long.
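
    A short sketch of that consequence, reusing the variables from the question:

        int i = 1024;
        uint x = 2048;

        long sum = x + i;      // OK: both operands convert to long, and long + long --> long
        x = (uint)(x + i);     // OK: the narrowing back to uint is spelled out explicitly
        // x = x + i;          // error CS0266: cannot implicitly convert 'long' to 'uint'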

    Giving uint + int some different behavior that you might consider more sensible was not a design goal of the team at all, because mixing signed and unsigned values is, first, rare in practice and, second, almost always a bug. The design team could have added special cases for every combination of signed and unsigned one-, two-, four-, and eight-byte integers, as well as char, float, double and decimal, or any subset of those many hundreds of cases, but that works against the goal of simplicity.

    So in short, on the one hand we have a large amount of design work to make a feature that we want no one to actually use easier to use at the cost of a massively complicated specification. On the other hand we have a simple specification that produces an unusual behavior in a rare case we expect no one to encounter in practice. Given those choices, which would you choose? The C# design team chose the latter.
