When should I use double instead of decimal?

Happy的楠姐 2020-11-22 10:51

I can name three advantages to using double (or float) instead of decimal:

  1. Uses less memory.
  2. Faster because floating-point math operations are natively supported by processors.
12 Answers
  •  醉梦人生
    2020-11-22 11:54

    You seem spot on with the benefits of using a floating point type. I tend to design for decimals in all cases, and rely on a profiler to let me know if operations on decimals are causing bottlenecks or slow-downs. In those cases, I will "down cast" to double or float, but only do it internally, and carefully try to manage precision loss by limiting the number of significant digits in the mathematical operation being performed.
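    A minimal sketch of that "down cast internally" pattern (the NormalizedWeight helper and the 10-place rounding here are just illustrative choices, not from any particular codebase): the public contract stays decimal, the heavy math runs in double, and the retained precision is limited before converting back at the boundary.

    using System;

    public static class WeightingExample
    {
        // Hypothetical helper: callers only ever see decimal.
        public static decimal NormalizedWeight(decimal value, decimal total)
        {
            // "Down cast" internally so the division runs in hardware floating point.
            double ratio = (double)value / (double)total;

            // Keep only 10 decimal places of the double result,
            // then convert back to decimal at the boundary.
            return (decimal)Math.Round(ratio, 10);
        }
    }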

    In general, if your value is transient (not reused), you're safe to use a floating point type. The real problem with floating point types is the following three scenarios.

    1. You are aggregating floating point values (in which case the precision errors compound)
    2. You build values based on the floating point value (for example in a recursive algorithm)
    3. You are doing math that requires a very large number of significant digits (for example, 123456789.1 * .000000000000000987654321; see the sketch after this list)
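    A quick sketch of scenario 3: the exact product of those two operands needs 19 significant digits, which is more than double can hold but well within decimal's range.

    using System;

    public static class WideProductDemo
    {
        public static void Main()
        {
            // Exact product: 0.0000001219326312114007011 (19 significant digits).
            double asDouble = 123456789.1 * 0.000000000000000987654321;
            decimal asDecimal = 123456789.1m * 0.000000000000000987654321m;

            // double keeps only ~15-16 significant digits, so the tail is lost.
            Console.WriteLine("double : {0:R}", asDouble);
            // decimal stores the full 19-digit result exactly.
            Console.WriteLine("decimal: {0}", asDecimal);
        }
    }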

    EDIT

    According to the reference documentation on C# decimals:

    The decimal keyword denotes a 128-bit data type. Compared to floating-point types, the decimal type has a greater precision and a smaller range, which makes it suitable for financial and monetary calculations.

    So to clarify my above statement:

    I tend to design for decimals in all cases, and rely on a profiler to let me know if operations on decimals are causing bottlenecks or slow-downs.

    I have only ever worked in industries where decimals are favorable. If you're working on physics or graphics engines, it's probably much more beneficial to design for a floating point type (float or double).
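    As a small illustration of why money handling favors decimal: 0.1 has no exact binary representation, so an "obvious" double comparison fails while the decimal version behaves as written.

    using System;

    public static class CurrencyDemo
    {
        public static void Main()
        {
            double doubleSum = 0.1 + 0.2;
            decimal decimalSum = 0.1m + 0.2m;

            // False: doubleSum is actually 0.30000000000000004...
            Console.WriteLine(doubleSum == 0.3);
            // True: 0.1m, 0.2m and 0.3m are stored exactly in base 10.
            Console.WriteLine(decimalSum == 0.3m);
        }
    }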

    Decimal is not infinitely precise (it is impossible to represent infinite precision for a non-integral value in a primitive data type), but it is far more precise than double (see the sketch after this list):

    • decimal = 28-29 significant digits
    • double = 15-16 significant digits
    • float = 7 significant digits
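
    A quick sketch of those digit counts, dividing 1 by 3 in each type (the exact text that double and float print can vary slightly by runtime):

    using System;

    public static class PrecisionDigitsDemo
    {
        public static void Main()
        {
            // decimal: 0.3333333333333333333333333333 (28 significant digits)
            Console.WriteLine(1m / 3m);
            // double: roughly 15-16 threes survive the round-trip format
            Console.WriteLine((1.0 / 3.0).ToString("R"));
            // float: only ~7 significant digits survive
            Console.WriteLine((1f / 3f).ToString("R"));
        }
    }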

    EDIT 2

    In response to Konrad Rudolph's comment, item #1 (above) is definitely correct: aggregation of imprecision does indeed compound. See the code below for an example:

    using System;

    public static class Program
    {
        private const float THREE_FIFTHS = 3f / 5f;
        private const int ONE_MILLION = 1000000;

        public static void Main(string[] args)
        {
            // 3/5 has no exact binary representation, so the float constant is already slightly off.
            Console.WriteLine("Three Fifths: {0}", THREE_FIFTHS.ToString("F10"));
            float asSingle = 0f;
            double asDouble = 0d;
            decimal asDecimal = 0M;

            // Accumulate the same constant one million times in each type.
            for (int i = 0; i < ONE_MILLION; i++)
            {
                asSingle += THREE_FIFTHS;
                asDouble += THREE_FIFTHS;
                asDecimal += (decimal) THREE_FIFTHS;
            }

            // The expected total: 0.6 added one million times should be 600,000.
            Console.WriteLine("Six Hundred Thousand: {0:F10}", THREE_FIFTHS * ONE_MILLION);
            Console.WriteLine("Single: {0}", asSingle.ToString("F10"));
            Console.WriteLine("Double: {0}", asDouble.ToString("F10"));
            Console.WriteLine("Decimal: {0}", asDecimal.ToString("F10"));
            Console.ReadLine();
        }
    }
    

    This outputs the following:

    Three Fifths: 0.6000000000
    Six Hundred Thousand: 600000.0000000000
    Single: 599093.4000000000
    Double: 599999.9999886850
    Decimal: 600000.0000000000
    

    As you can see, even though we are adding from the same source constant, the result of the double is less precise (although it will probably round correctly), and the float is far less precise, to the point where it is accurate to only about two significant digits.
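    To back up the aside that the double "probably will round correctly", here is a small follow-up sketch using the Double value printed above: rounding away the accumulated error does recover the exact total in this case.

    using System;

    public static class RoundingFollowUp
    {
        public static void Main()
        {
            // The accumulated double total, as printed with F10 above.
            double asDouble = 599999.999988685;

            // Rounding to four decimal places removes the drift: prints 600000.
            Console.WriteLine(Math.Round(asDouble, 4));
        }
    }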
