Why does a C# System.Decimal remember trailing zeros?

backend · unresolved · 3 answers · 1650 views
Asked by 小鲜肉, 2020-12-06 04:55

Is there a reason that a C# System.Decimal remembers the number of trailing zeros it was entered with? See the following example:

public void DoSomething()
{
    decimal dec1 = 0.5M;
    decimal dec2 = 0.50M;
    Console.WriteLine(dec1);         // Output: 0.5
    Console.WriteLine(dec2);         // Output: 0.50
    Console.WriteLine(dec1 == dec2); // Output: True
}
3 Answers
  • 2020-12-06 05:34

    Decimals represent fixed-precision decimal values. The literal value 0.50M has the 2-decimal-place precision embedded, so the decimal variable created remembers that it is a 2-decimal-place value. This behaviour is entirely by design.

    Comparison with == is an exact numerical equality check, so trailing zeroes do not affect the outcome.
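
    A minimal sketch of what that "remembering" looks like at the bit level, using the real decimal.GetBits API (the scale is stored in bits 16-23 of the flags element; the class and variable names here are just for illustration):

        using System;

        class ScaleDemo
        {
            static void Main()
            {
                decimal a = 0.5M;   // coefficient 5,  scale 1
                decimal b = 0.50M;  // coefficient 50, scale 2

                // GetBits returns { lo, mid, hi, flags };
                // the scale lives in bits 16-23 of flags.
                int scaleA = (decimal.GetBits(a)[3] >> 16) & 0xFF;
                int scaleB = (decimal.GetBits(b)[3] >> 16) & 0xFF;

                Console.WriteLine(scaleA);  // 1
                Console.WriteLine(scaleB);  // 2
                Console.WriteLine(a == b);  // True: == compares numeric value only
            }
        }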

  • 2020-12-06 05:36

    It can be useful to represent a number including its accuracy - so 0.5m could be used to mean "anything between 0.45m and 0.55m" (with appropriate limits) and 0.50m could be used to mean "anything between 0.495m and 0.505m".
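
    As a sketch of how that stated precision propagates (assuming the documented System.Decimal scale rules: multiplication adds the operands' scales, while addition keeps the larger one):

        using System;

        class PrecisionDemo
        {
            static void Main()
            {
                // ToString keeps the scale each value was created with.
                Console.WriteLine(0.5M);          // 0.5
                Console.WriteLine(0.50M);         // 0.50

                // Multiplication adds the operands' scales...
                Console.WriteLine(0.50M * 2M);    // 1.00  (scale 2 + scale 0)

                // ...while addition keeps the larger of the two.
                Console.WriteLine(0.50M + 0.5M);  // 1.00
            }
        }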

    I suspect that most developers don't actually use this functionality, but I can see how it could be useful sometimes.

    I believe this ability first arrived in .NET 1.1, btw - I think decimals in 1.0 were always effectively normalized.

  • 2020-12-06 05:43

    I think it was done to provide a better internal representation for numeric values retrieved from databases. Database engines have a long history of storing numbers in a decimal format (avoiding rounding errors) with an explicit specification for the number of digits in the value.

    Compare the SQL Server decimal and numeric column types, for example.
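
    As a hypothetical sketch of what that mapping enables, a data provider could rebuild a value with the column's declared scale using the real decimal(lo, mid, hi, isNegative, scale) constructor (the NUMERIC(10, 4) column and raw value below are made up):

        using System;

        class DbScaleDemo
        {
            static void Main()
            {
                // Suppose a NUMERIC(10, 4) column stores the unscaled
                // integer 12500 with a declared scale of 4.
                decimal price = new decimal(12500, 0, 0, false, 4);

                Console.WriteLine(price);  // 1.2500 - the declared scale survives
            }
        }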
