C# performance analysis - how to count CPU cycles?

Submitted by 帅比萌擦擦* on 2019-12-20 03:09:17

Question


Is this a valid way to do performance analysis? I want to get nanosecond accuracy and determine the performance of typecasting:

class PerformanceTest
{
    static double last = 0.0;
    static List<object> numericGenericData = new List<object>();
    static List<double> numericTypedData = new List<double>();

    static void Main(string[] args)
    {
        double totalWithCasting = 0.0;
        double totalWithoutCasting = 0.0;
        for (double d = 0.0; d < 1000000.0; ++d)
        {
            numericGenericData.Add(d);
            numericTypedData.Add(d);
        }
        Stopwatch stopwatch = new Stopwatch();
        for (int i = 0; i < 10; ++i)
        {

            stopwatch.Start();
            testWithTypecasting();
            stopwatch.Stop();
            totalWithCasting += stopwatch.ElapsedTicks;

            stopwatch.Start();
            testWithoutTypeCasting();
            stopwatch.Stop();
            totalWithoutCasting += stopwatch.ElapsedTicks;
        }

        Console.WriteLine("Avg with typecasting = {0}", (totalWithCasting/10));
        Console.WriteLine("Avg without typecasting = {0}", (totalWithoutCasting/10));
        Console.ReadKey();
    }

    static void testWithTypecasting()
    {
        foreach (object o in numericGenericData)
        {
            last = ((double)o*(double)o)/200;
        }
    }

    static void testWithoutTypeCasting()
    {
        foreach (double d in numericTypedData)
        {
            last = (d * d)/200;
        }
    }
}

The output is:

Avg with typecasting = 468872.3
Avg without typecasting = 501157.9

I'm a little suspicious... it looks like there is nearly no impact on the performance. Is casting really that cheap?

Update:

class PerformanceTest
{
    static double last = 0.0;
    static object[] numericGenericData = new object[100000];
    static double[] numericTypedData = new double[100000];

    static Stopwatch stopwatch = new Stopwatch();
    static double totalWithCasting = 0.0;
    static double totalWithoutCasting = 0.0;
    static void Main(string[] args)
    {
        for (int i = 0; i < 100000; ++i)
        {
            numericGenericData[i] = (double)i;
            numericTypedData[i] = (double)i;
        }

        for (int i = 0; i < 10; ++i)
        {
            stopwatch.Start();
            testWithTypecasting();
            stopwatch.Stop();
            totalWithCasting += stopwatch.ElapsedTicks;
            stopwatch.Reset();

            stopwatch.Start();
            testWithoutTypeCasting();
            stopwatch.Stop();
            totalWithoutCasting += stopwatch.ElapsedTicks;
            stopwatch.Reset();
        }

        Console.WriteLine("Avg with typecasting = {0}", (totalWithCasting/(10.0)));
        Console.WriteLine("Avg without typecasting = {0}", (totalWithoutCasting / (10.0)));
        Console.ReadKey();
    }

    static void testWithTypecasting()
    {
        foreach (object o in numericGenericData)
        {
            last = ((double)o * (double)o) / 200;
        }
    }

    static void testWithoutTypeCasting()
    {
        foreach (double d in numericTypedData)
        {
            last = (d * d) / 200;
        }
    }
}

The output is:

Avg with typecasting = 4791
Avg without typecasting = 3303.9

Answer 1:


Note that it's not typecasting that you are measuring, it's unboxing. The values are doubles all along, there is no type casting going on.

You forgot to reset the stopwatch between tests, so you are adding the accumulated time of all previous tests over and over. If you convert the ticks to actual time, you see that it adds up to much more than the time it took to run the test.

If you add a stopwatch.Reset(); before each stopwatch.Start();, you get a much more reasonable result like:

Avg with typecasting = 41027,1
Avg without typecasting = 20594,3

Unboxing a value is not that expensive: it only has to check that the data type in the object is correct, then fetch the value. Still, it's a lot more work than when the type is already known. Remember that you are also measuring the looping, the calculation, and the assignment of the result, which is the same for both tests.

Boxing a value is more expensive than unboxing it, as that allocates an object on the heap.
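The Reset fix described above can be sketched as a self-contained program (a minimal sketch reusing the question's structure and names, not the exact original):

```csharp
using System;
using System.Diagnostics;

class ResetSketch
{
    static double last;

    static void Main()
    {
        double[] data = new double[100000];
        for (int i = 0; i < data.Length; ++i) data[i] = i;

        var stopwatch = new Stopwatch();
        double total = 0.0;
        for (int i = 0; i < 10; ++i)
        {
            stopwatch.Reset();   // without this, ticks accumulate across iterations
            stopwatch.Start();
            foreach (double d in data) last = (d * d) / 200;
            stopwatch.Stop();
            total += stopwatch.ElapsedTicks;
        }
        Console.WriteLine("Avg ticks = {0}", total / 10);
    }
}
```

Without the Reset() call, iteration 10 reports the accumulated time of all ten runs, which is exactly why the question's averages came out so large.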




Answer 2:


1) Yes, casting is usually (very) cheap.

2) You are not going to get nanosecond accuracy in a managed language. Or in an unmanaged language under most operating systems.

Consider

  • other processes
  • garbage collection
  • different JITters
  • different CPUs

Also, your measurement includes the foreach loop itself, which looks like 50% or more of the time to me. Maybe 90%.




Answer 3:


When you call Stopwatch.Start, the timer continues running from wherever it left off. You need to call Stopwatch.Reset() to set the timer back to zero before starting again. Personally, I just use stopwatch = Stopwatch.StartNew() whenever I want to start a timer, to avoid this sort of confusion.

Furthermore, you probably want to call both of your test methods once before starting the "timing loop", so that the code gets a chance to "warm up" and the JIT has had a chance to run, leveling the playing field.
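The StartNew-plus-warm-up pattern can be sketched like this (a minimal sketch with a stand-in Work() method, not the question's full benchmark):

```csharp
using System;
using System.Diagnostics;

class WarmUpSketch
{
    static double last;
    static double[] data = new double[100000];

    static void Work()
    {
        foreach (double d in data) last = (d * d) / 200;
    }

    static void Main()
    {
        for (int i = 0; i < data.Length; ++i) data[i] = i;

        Work(); // warm-up call: the JIT compiles Work() here, outside the timed region

        var stopwatch = Stopwatch.StartNew(); // fresh, zeroed timer - no Reset() needed
        Work();
        stopwatch.Stop();
        Console.WriteLine("Ticks = {0}", stopwatch.ElapsedTicks);
    }
}
```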

When I do that on my machine, I see that testWithTypecasting runs in approximately half the time of testWithoutTypeCasting.

That being said, however, the cast itself is not likely to be the most significant part of that performance penalty. The testWithTypecasting method operates on a list of boxed doubles, which means an additional level of indirection is required to retrieve each value (following a reference to the value somewhere else in memory), in addition to increasing the total amount of memory consumed. This increases the time spent on memory access and is likely a bigger effect than the CPU time spent "in the cast" itself.




Answer 4:


Look into performance counters in the System.Diagnostics namespace. When you create a new counter, you first create a category and then specify one or more counters to be placed in it.

// Create a collection of type CounterCreationDataCollection.
System.Diagnostics.CounterCreationDataCollection CounterDatas =
    new System.Diagnostics.CounterCreationDataCollection();
// Create the counters and set their properties.
System.Diagnostics.CounterCreationData cdCounter1 =
    new System.Diagnostics.CounterCreationData();
System.Diagnostics.CounterCreationData cdCounter2 =
    new System.Diagnostics.CounterCreationData();
cdCounter1.CounterName = "Counter1";
cdCounter1.CounterHelp = "help string1";
cdCounter1.CounterType = System.Diagnostics.PerformanceCounterType.NumberOfItems64;
cdCounter2.CounterName = "Counter2";
cdCounter2.CounterHelp = "help string 2";
cdCounter2.CounterType = System.Diagnostics.PerformanceCounterType.NumberOfItems64;
// Add both counters to the collection.
CounterDatas.Add(cdCounter1);
CounterDatas.Add(cdCounter2);
// Create the category and pass the collection to it.
System.Diagnostics.PerformanceCounterCategory.Create(
    "Multi Counter Category", "Category help", CounterDatas);

See the MSDN docs.




Answer 5:


Just a thought, but identical machine code can sometimes take a different number of cycles to execute depending on its alignment in memory, so you might want to add a control or controls.




Answer 6:


I don't "do" C# myself, but in C on x86-32 and later the rdtsc instruction is usually available, and it is much more accurate than OS ticks. More info on rdtsc can be found by searching Stack Overflow. Under C it is usually available as an intrinsic or built-in function, and it returns the number of clock cycles (as an 8-byte unsigned integer: long long/__int64) since the computer was powered up. So if the CPU has a clock speed of 3 GHz, the underlying counter is incremented 3 billion times per second. Save for a few early AMD processors, all multi-core CPUs have their counters synchronized.

If C# does not have it, you might consider writing a VERY short C function to access it from C#. There is a great deal of overhead if you access the instruction through a function rather than inline. The difference between two back-to-back calls to the function will be the basic measurement overhead. If you're thinking of metering your application, you'll have to determine several more complex overhead values.

You might consider shutting off the CPU's energy-saving mode (and restarting the PC), since it lowers the clock frequency fed to the CPU during periods of low activity, which can cause the time stamp counters of the different cores to fall out of sync.



Source: https://stackoverflow.com/questions/2466629/c-sharp-performance-analysis-how-to-count-cpu-cycles
