I'm surprised by the C# compiler's behavior in the following example:
int i = 1024;
uint x = 2048;
x = x + i; // error CS0266: Cannot implicitly convert type 'long' to 'uint'
The numeric promotion rules for C# are loosely based upon those of Java and C, which work by identifying a type to which both operands can be converted and then making the result that type. I think such an approach was reasonable in the 1980s, but newer languages should set it aside in favor of one that looks at how values are used. For example, if I were designing a language, then given
Int32 i1, i2, i3, i4; Int64 l;
a compiler would process
i4 = i1 + i2 + i3;
using 32-bit math (throwing an exception in case of overflow) but would process
l = i1 + i2 + i3;
with 64-bit math. The C# rules, however, are what they are and don't seem likely to change.
It should be noted that the C# promotion rules by definition always select the overloads deemed "most suitable" by the language specification, but that doesn't mean they're really the most suitable for any useful purpose. For example,
double f = 1111111100 / 11111111.0f;
would seem like it should yield 100.0, and it would be computed correctly if both operands were promoted to double, but the compiler will instead convert the integer 1111111100 to float, yielding 1111111040.0f, and then perform the division, yielding 99.999992370605469.