Sub-millisecond precision timing in C or C++

Submitted by 走远了吗 on 2019-12-17 12:36:16

Question


What techniques / methods exist for getting sub-millisecond precision timing data in C or C++, and what precision and accuracy do they provide? I'm looking for methods that don't require additional hardware. The application involves waiting for approximately 50 microseconds +/- 1 microsecond while some external hardware collects data.

EDIT: The OS is Windows, probably with VS2010. If I can get drivers and SDKs for the hardware on Linux, I can go there using the latest GCC.


Answer 1:


When dealing with off-the-shelf operating systems, accurate timing is an extremely difficult and involved task. If you really need guaranteed timing, the only real option is a full real-time operating system. However, if "almost always" is good enough, here are a few tricks you can use that will provide good accuracy under commodity Windows and Linux:

  1. Use a shielded CPU. Basically, this means turning off IRQ affinity for a selected CPU and setting the processor affinity mask for every other process on the machine to exclude your targeted CPU. Then set your app's CPU affinity to run only on the shielded CPU. Effectively, this should prevent the OS from ever suspending your app, as it will always be the only runnable process on that CPU.
  2. Never let your process willingly yield control to the OS (which is inherently non-deterministic for non-realtime OSes). No memory allocation, no sockets, no mutexes, nada. Use RDTSC to spin in a while loop waiting for your target time to arrive. It'll consume 100% of a CPU, but it's the most accurate way to go.
  3. If number 2 is a bit too draconian, you can 'sleep short' and then burn the CPU up to your target time. Here, you take advantage of the fact that the OS schedules the CPU at set intervals, usually 100 or 1000 times per second depending on your OS and configuration (on Windows you can change the default scheduling period from 100/s to 1000/s using the multimedia timer API). This can be a little hard to get right, but essentially you need to determine when the OS scheduling periods occur and calculate the one prior to your target wake time. Sleep for that duration and then, upon waking, spin on RDTSC (if you're on a single CPU; use QueryPerformanceCounter or the Linux equivalent if not) until your target time arrives. Occasionally, OS scheduling will cause you to miss, but generally speaking this mechanism works pretty well. A minimal sketch follows this list.
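
Here is a minimal sketch of technique 3 on Windows, assuming a hypothetical helper wait_until that takes a target expressed in QueryPerformanceCounter ticks (timeBeginPeriod is the multimedia API call mentioned above and requires linking winmm.lib):

#include <windows.h>
#include <mmsystem.h>   // timeBeginPeriod / timeEndPeriod; link with winmm.lib

// Hypothetical helper: coarse-sleep until ~2 ms before the target QPC tick,
// then spin until the target is reached.
void wait_until(LONGLONG targetTicks)
{
    LARGE_INTEGER freq, now;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&now);

    LONGLONG margin = 2 * freq.QuadPart / 1000;   // ~2 ms safety margin, in ticks
    if (targetTicks - now.QuadPart > margin)
    {
        timeBeginPeriod(1);                       // ask for 1 ms scheduler granularity
        Sleep((DWORD)((targetTicks - now.QuadPart - margin) * 1000 / freq.QuadPart));
        timeEndPeriod(1);
    }

    do {                                          // burn the remainder spinning
        QueryPerformanceCounter(&now);
    } while (now.QuadPart < targetTicks);
}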

It seems like a simple question, but attaining 'good' timing gets exponentially more difficult the tighter your timing constraints are. Good luck!




Answer 2:


The hardware (and therefore the resolution) varies from machine to machine. On Windows, specifically (I'm not sure about other platforms), you can use QueryPerformanceCounter and QueryPerformanceFrequency, but be aware that you should call both from the same thread and that there are no strict guarantees about resolution (QueryPerformanceFrequency is allowed to return 0, meaning no high-resolution timer is available). However, on most modern desktops there should be one accurate to microseconds.
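
A minimal sketch of the call sequence (these are all real Win32 calls; error handling is mostly omitted):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    LARGE_INTEGER freq, start, stop;
    if (!QueryPerformanceFrequency(&freq))   // fails if no high-resolution counter
        return 1;

    QueryPerformanceCounter(&start);
    /* ... code being measured ... */
    QueryPerformanceCounter(&stop);

    double elapsed_us = (double)(stop.QuadPart - start.QuadPart) * 1e6
                      / (double)freq.QuadPart;
    printf("elapsed: %.3f us\n", elapsed_us);
    return 0;
}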




Answer 3:


Boost.Date_Time has a microsecond-precision clock, but its accuracy depends on the platform.

The documentation states:

ptime microsec_clock::local_time() "Get the local time using a sub second resolution clock. On Unix systems this is implemented using GetTimeOfDay. On most Win32 platforms it is implemented using ftime. Win32 systems often do not achieve microsecond resolution via this API. If higher resolution is critical to your application test your platform to see the achieved resolution."

http://www.boost.org/doc/libs/1_43_0/doc/html/date_time/posix_time.html#date_time.posix_time.ptime_class
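
Basic usage looks something like this (ptime, microsec_clock, and time_duration are all documented Boost.Date_Time types):

#include <boost/date_time/posix_time/posix_time.hpp>
#include <iostream>

int main()
{
    using namespace boost::posix_time;

    ptime t1 = microsec_clock::local_time();
    // ... work being timed ...
    ptime t2 = microsec_clock::local_time();

    time_duration d = t2 - t1;
    std::cout << d.total_microseconds() << " us\n";   // resolution is platform-dependent
    return 0;
}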




Answer 4:


You may try the following:

#include <sys/time.h>

struct timeval t;
gettimeofday(&t, 0);   // t.tv_sec = seconds, t.tv_usec = microseconds within the second

This gives you the current timestamp in microseconds. I am not sure about the accuracy.




Answer 5:


You could try the technique described here, but it's not portable.




Answer 6:


Most modern processors have registers for timing or other instrumentation purposes. On x86, since the Pentium days, there is the RDTSC instruction, for example. Your compiler may give you access to this instruction.

See Wikipedia for more info.
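
A minimal sketch using the compiler intrinsic (assumes x86/x64; __rdtsc is provided by MSVC's <intrin.h> and GCC/Clang's <x86intrin.h>; note the TSC counts cycles, not seconds, and may not be synchronized across cores):

#ifdef _MSC_VER
#include <intrin.h>      // MSVC: __rdtsc
#else
#include <x86intrin.h>   // GCC/Clang: __rdtsc
#endif
#include <stdio.h>

int main(void)
{
    unsigned long long start = __rdtsc();
    /* ... code being measured ... */
    unsigned long long stop = __rdtsc();

    // Divide by the TSC frequency to convert cycles to time.
    printf("%llu cycles\n", stop - start);
    return 0;
}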




Answer 7:


timeval in sys/time.h has a member, tv_usec, which holds the microseconds part of the time.

This link and the code below will help illustrate:

http://www.opengroup.org/onlinepubs/000095399/basedefs/sys/time.h.html

#include <sys/time.h>
#include <cstdio>
#include <iostream>
using namespace std;

int main()
{
    timeval start;
    timeval finish;

    long int sec_diff;
    long int mic_diff;

    gettimeofday(&start, 0);
    cout << "whooo hooo" << endl;
    gettimeofday(&finish, 0);

    sec_diff = finish.tv_sec - start.tv_sec;
    mic_diff = finish.tv_usec - start.tv_usec;
    if (mic_diff < 0) { sec_diff -= 1; mic_diff += 1000000; }   // borrow if tv_usec wrapped

    cout << "cout-ing 'whooo hooo' took " << sec_diff << " seconds and " << mic_diff << " micros." << endl;

    gettimeofday(&start, 0);
    printf("whooo hooo\n");
    gettimeofday(&finish, 0);

    sec_diff = finish.tv_sec - start.tv_sec;
    mic_diff = finish.tv_usec - start.tv_usec;
    if (mic_diff < 0) { sec_diff -= 1; mic_diff += 1000000; }

    cout << "printf-ing 'whooo hooo' took " << sec_diff << " seconds and " << mic_diff << " micros." << endl;

    return 0;
}



Answer 8:


Good luck trying to do that with MS Windows. You need a realtime operating system, that is to say, one where timing is guaranteed to be repeatable. Windows can switch to another thread or even another process at an inopportune moment. You will also have no control over cache misses.

When I was doing realtime robotic control, I used a very lightweight OS called OnTime RTOS32, which has a partial Windows API emulation layer. I do not know if it would be suitable for what you are doing. However, with Windows, you will probably never be able to prove that it will never fail to give a timely response.




Answer 9:


A combination of GetSystemTimeAsFileTime and QueryPerformanceCounter can result in a reliable suite of code for obtaining microsecond-resolution time services on Windows.

See this comment in another thread here.
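
One way such a combination is commonly put together, as a rough sketch (calibrate and now_microseconds are hypothetical helper names; the idea is to anchor a wall-clock FILETIME reading to a QueryPerformanceCounter reading, then derive microsecond timestamps from QPC deltas):

#include <windows.h>

static ULONGLONG      g_base100ns;   // FILETIME at calibration, in 100 ns units
static LARGE_INTEGER  g_qpcBase, g_freq;

void calibrate(void)
{
    FILETIME ft;
    GetSystemTimeAsFileTime(&ft);          // wall clock, coarse granularity
    QueryPerformanceCounter(&g_qpcBase);   // high-resolution anchor
    QueryPerformanceFrequency(&g_freq);
    g_base100ns = ((ULONGLONG)ft.dwHighDateTime << 32) | ft.dwLowDateTime;
}

ULONGLONG now_microseconds(void)
{
    LARGE_INTEGER now;
    QueryPerformanceCounter(&now);
    return g_base100ns / 10
         + (ULONGLONG)(now.QuadPart - g_qpcBase.QuadPart) * 1000000ULL
           / (ULONGLONG)g_freq.QuadPart;
}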



Source: https://stackoverflow.com/questions/2904887/sub-millisecond-precision-timing-in-c-or-c
