Time a function in C++

删除回忆录丶 submitted on 2019-12-08 06:20:55

Question


I'd like to time how long a function takes in C++ in milliseconds.

Here's what I have:

#include<iostream>
#include<chrono>

using timepoint = std::chrono::system_clock::time_point;

void Recursive_Foo(); // function under test, defined elsewhere

int main() {
    float elapsed_time[100];

    // Run function and count time
    for(int k=0;k<100;k++) {

        // Start timer
        const timepoint clock_start = std::chrono::system_clock::now();

        // Run Function
        Recursive_Foo();

        // Stop timer
        const timepoint clock_stop = std::chrono::system_clock::now();

        // Calculate time in milliseconds
        std::chrono::duration<double,std::milli> timetaken = clock_stop - clock_start;
        elapsed_time[k] = timetaken.count();
    }

    for(int l=0;l<100;l++) {
        std::cout<<"Array: "<<l<<" Time: "<<elapsed_time[l]<<" ms"<<std::endl;
    }
}

This compiles, but I think multithreading is preventing it from working properly. The output shows times at irregular intervals, e.g.:

Array: 0 Time: 0 ms
Array: 1 Time: 0 ms
Array: 2 Time: 15.6 ms
Array: 3 Time: 0 ms
Array: 4 Time: 0 ms
Array: 5 Time: 0 ms
Array: 6 Time: 15.6 ms
Array: 7 Time: 0 ms
Array: 8 Time: 0 ms

Do I need to use some kind of mutex lock? Or is there an easier way to time how many milliseconds a function took to execute?

EDIT

People have suggested using high_resolution_clock or steady_clock, but all three clocks produce the same irregular results.

This solution seems to produce real results: How to use QueryPerformanceCounter?, but it's not clear to me why. This one also works well: https://gamedev.stackexchange.com/questions/26759/best-way-to-get-elapsed-time-in-miliseconds-in-windows. It seems to be a Windows implementation issue.


Answer 1:


Microsoft has a nice, clean solution that measures in microseconds, via MSDN:

#include <windows.h>

LONGLONG measure_activity_high_resolution_timing()
{
    LARGE_INTEGER StartingTime, EndingTime, ElapsedMicroseconds;
    LARGE_INTEGER Frequency;

    QueryPerformanceFrequency(&Frequency); 
    QueryPerformanceCounter(&StartingTime);

    // Activity to be timed

    QueryPerformanceCounter(&EndingTime);
    ElapsedMicroseconds.QuadPart = EndingTime.QuadPart - StartingTime.QuadPart;

    //
    // We now have the elapsed number of ticks, along with the
    // number of ticks-per-second. We use these values
    // to convert to the number of elapsed microseconds.
    // To guard against loss-of-precision, we convert
    // to microseconds *before* dividing by ticks-per-second.
    //

    ElapsedMicroseconds.QuadPart *= 1000000;
    ElapsedMicroseconds.QuadPart /= Frequency.QuadPart;
    return ElapsedMicroseconds.QuadPart;
}
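
To apply this to the question's Recursive_Foo, the measurement can be wrapped around an arbitrary call. A minimal sketch follows; the time_call_us wrapper is my addition, not part of the MSDN sample:

#include <windows.h>
#include <cstdio>

void Recursive_Foo(); // the asker's function, defined elsewhere

// Sketch: times an arbitrary callable with QueryPerformanceCounter
// and returns the elapsed time in microseconds.
template <typename F>
LONGLONG time_call_us(F&& f)
{
    LARGE_INTEGER frequency, start, stop;
    QueryPerformanceFrequency(&frequency);
    QueryPerformanceCounter(&start);
    f();
    QueryPerformanceCounter(&stop);
    LONGLONG elapsed = stop.QuadPart - start.QuadPart;
    elapsed *= 1000000;            // scale to microseconds *before* dividing,
    elapsed /= frequency.QuadPart; // as in the MSDN sample, to keep precision
    return elapsed;
}

int main()
{
    std::printf("Elapsed: %lld us\n", time_call_us(Recursive_Foo));
}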



Answer 2:


Profile code using a high-resolution timer, not the system clock, which, as you're seeing, has very limited granularity.

http://www.cplusplus.com/reference/chrono/high_resolution_clock/

typedef std::chrono::high_resolution_clock::time_point tp;

const tp start = std::chrono::high_resolution_clock::now();
// do stuff
const tp end   = std::chrono::high_resolution_clock::now();
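
A complete minimal sketch of this approach, applied to the question's Recursive_Foo, might look like the following. One caveat relevant to the asker's EDIT: in Visual Studio 2013 and earlier, high_resolution_clock is an alias for system_clock, which would explain why all three clocks show the same coarse granularity there:

#include <chrono>
#include <iostream>

void Recursive_Foo(); // the asker's function, defined elsewhere

int main()
{
    using clock = std::chrono::high_resolution_clock;

    const clock::time_point start = clock::now();
    Recursive_Foo();
    const clock::time_point end = clock::now();

    const std::chrono::duration<double, std::milli> elapsed = end - start;
    std::cout << "Time: " << elapsed.count() << " ms\n";
}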



Answer 3:


If you suspect that some other process or thread in your app is consuming too much CPU time, then use:

GetThreadTimes under Windows

or

clock_gettime with CLOCK_THREAD_CPUTIME_ID under Linux

to measure the CPU time of the thread your function was executing on. This excludes from your measurements the time during which other threads or processes were running.
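
For illustration, here is a minimal sketch of the Linux variant; the thread_cpu_time_ms helper is my own naming, and the function being timed is assumed to be the question's Recursive_Foo:

#include <time.h>
#include <cstdio>

void Recursive_Foo(); // the asker's function, defined elsewhere

// Returns the CPU time consumed by the calling thread, in milliseconds.
static double thread_cpu_time_ms()
{
    timespec ts;
    clock_gettime(CLOCK_THREAD_CPUTIME_ID, &ts);
    return ts.tv_sec * 1000.0 + ts.tv_nsec / 1.0e6;
}

int main()
{
    const double before = thread_cpu_time_ms();
    Recursive_Foo();
    const double after = thread_cpu_time_ms();
    std::printf("CPU time: %f ms\n", after - before);
}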



Source: https://stackoverflow.com/questions/33336049/time-a-function-in-c
