Mathematically determine the precision and scale of a decimal value

Backend · Open · 3 answers · 1498 views

情话喂你 — asked 2021-01-18 00:10

I have been looking for a way to determine the scale and precision of a decimal in C#. This led me to several SO questions, yet none of them seem to have correct answers.

3 Answers
  •  青春惊慌失措
    2021-01-18 00:54

    First of all, solve the "physical" problem: how you're going to decide which digits are significant. The fact is, "precision" has no physical meaning unless you know or can estimate the absolute error.


    Now, there are 2 fundamental ways to determine each digit (and thus, their number):

    • get+interpret the meaningful parts
    • calculate mathematically

    The 2nd way can't detect trailing zeros in the fractional part (which may or may not be significant depending on your answer to the "physical" problem), so I won't cover it unless requested.

    For the first one, in the Decimal's interface, I see 2 basic methods to get the parts: ToString() (a few overloads) and GetBits().

    1. ToString(String, IFormatProvider) is actually a reliable way, since you can define the format exactly.

      • E.g. use the F specifier and pass a culture-neutral NumberFormatInfo in which you have manually set all the fields that affect this particular format.
        • regarding the NumberDecimalDigits field: a test shows that it acts as a minimum - so set it to 0 (the docs are unclear on this) - and any trailing zeros are still printed
    2. The semantics of GetBits() result are documented clearly in its MSDN article (so laments like "it's Greek to me" won't do ;) ). Decompiling with ILSpy shows that it's actually a tuple of the object's raw data fields:

      public static int[] GetBits(decimal d)
      {
          return new int[]
          {
              d.lo,
              d.mid,
              d.hi,
              d.flags
          };
      }
      

      And their semantics are:

      • |high|mid|low| - binary digits (96 bits), interpreted as an integer (=aligned to the right)
      • flags:
        • bits 16 to 23 - "the power of 10 to divide the integer number" (=number of fractional decimal digits)
          • (thus (flags>>16)&0xFF is the raw value of this field)
        • bit 31 - sign (doesn't concern us)

      As you can see, this is very similar to IEEE 754 floats.

      So, the number of fractional digits is the exponent value, and the total number of digits is the number of digits in the decimal representation of the 96-bit integer.
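To make the first (ToString-based) approach concrete, here is a minimal sketch. It leans on the documented behavior that Decimal's default ("G") formatting preserves the value's stored scale, including trailing zeros (1.100m prints as "1.100", not "1.1"). The class and method names are mine, not from the original answer:

```csharp
using System;
using System.Globalization;

public static class ScaleFromString
{
    // Scale = number of digits printed after the decimal point.
    // Precision = number of digits in the unscaled integer, ignoring leading zeros.
    public static (int precision, int scale) GetPrecisionAndScale(decimal d)
    {
        // Decimal's default "G" format keeps trailing zeros and never uses
        // scientific notation, so the string reflects the stored scale.
        string s = Math.Abs(d).ToString(CultureInfo.InvariantCulture);

        int dot = s.IndexOf('.');
        int scale = dot < 0 ? 0 : s.Length - dot - 1;

        string digits = s.Replace(".", "").TrimStart('0');
        int precision = Math.Max(digits.Length, 1); // treat 0 as one digit

        return (precision, scale);
    }
}
```

For example, 1.100m yields precision 4 and scale 3, while -0.05m yields precision 1 and scale 2 (the leading zero and the sign are not counted).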
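The GetBits layout described above translates into code directly. The following is only a sketch (the class and method names are mine), assuming the documented layout: the scale in bits 16 to 23 of the flags element, and the 96-bit unscaled integer in the first three elements:

```csharp
using System;
using System.Numerics;

public static class ScaleFromBits
{
    public static (int precision, int scale) GetPrecisionAndScale(decimal d)
    {
        int[] bits = decimal.GetBits(d); // { lo, mid, hi, flags }

        // Bits 16-23 of flags hold "the power of 10 to divide the integer
        // number", i.e. the scale.
        int scale = (bits[3] >> 16) & 0xFF;

        // Reassemble |high|mid|low| into the 96-bit unscaled integer.
        BigInteger n = (uint)bits[2];
        n = (n << 32) | (uint)bits[1];
        n = (n << 32) | (uint)bits[0];

        // Precision = decimal digit count of that integer (0 counts as 1 digit).
        int precision = n.IsZero ? 1 : n.ToString().Length;

        return (precision, scale);
    }
}
```

Note that the sign bit (bit 31 of flags) is simply ignored here, matching the remark above that it doesn't concern us.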
