Calculating runtime differences in time realtime

Submitted by 末鹿安然 on 2019-12-25 04:33:44

Question


I have the following problem: I have to measure the time a program needs to execute. A scalar version of the program works fine with the code below, but when using OpenMP, it works on my PC but not on the resource I am supposed to use. In fact:

  • scalar program rt 34s
  • openmp program rt 9s

That's my PC (everything working), compiled with Visual Studio. On the resource I have to use (I think Linux, compiled with gcc):

  • scalar program rt 9s
  • openmp program rt 9s (but the text pops immediately afterwards up, so it should be 0-1s)

My guess is that it adds up the ticks of all cores, which comes to about the same total amount, and divides them by the tick rate of a single core. My question is how to solve this, and whether there is a better way to measure elapsed time in the console in C.

      clock_t start, stop;
      double t = 0.0;
      assert((start = clock()) != -1);
      /* ... code running ... */
      stop = clock();
      t = (double)(stop - start) / CLOCKS_PER_SEC;
      printf("Run time: %f\n", t);

Answer 1:


To augment Mark's answer: DO NOT USE clock()

clock() is an awful misunderstanding from the old computer era, whose actual implementation differs greatly from platform to platform. Behold:

  • on Linux, *BSD, Darwin (OS X) -- and possibly other 4.3BSD descendants -- clock() returns the processor time (not the wall-clock time!) used by the calling process, i.e. the sum of each thread's processor time;
  • on IRIX, AIX, Solaris -- and possibly other SysV descendants -- clock() returns the processor time (again not the wall-clock time) used by the calling process AND all its terminated child processes for which wait, system or pclose was executed;
  • HP-UX doesn't even seem to implement clock();
  • on Windows clock() returns the wall-clock time (not the processor time).

In the descriptions above processor time usually means the sum of user and system time. This could be less than the wall-clock (real) time, e.g. if the process sleeps or waits for file IO or network transfers, or it could be more than the wall-clock time, e.g. when the process has more than one thread, actively using the CPU.

Never use clock(). Use omp_get_wtime() - it exists on all platforms, supported by OpenMP, and always returns the wall-clock time.




Answer 2:


Converting my earlier comment to an answer in the spirit of doing anything for reputation ...

Use two calls to omp_get_wtime to get the wallclock time (in seconds) between two points in your code. Note that time is measured individually on each thread, there is no synchronisation of clocks across threads.




Answer 3:


Your problem is clock. By the C standard it measures the CPU time used by your process, not the wall-clock time. So this is what Linux does (it usually sticks to the standards), and then the total CPU time of the sequential program and of the parallel program is the same, as it should be.

Windows deviates from that: there, clock is the wall-clock time.

So use other time measurement functions. For standard C this would be time, or, if you need more precision, the new C11 standard gives you timespec_get; for OpenMP there are other possibilities, as has already been mentioned.



Source: https://stackoverflow.com/questions/20094547/calculating-runtime-differences-in-time-realtime
