Timers to measure latency

Submitted by 馋奶兔 on 2019-11-30 17:06:16

Question


When measuring network latency (time ack received - time msg sent) in any protocol over TCP, what timer would you recommend to use and why? What resolution does it have? What are other advantages/disadvantages?

Optional: how does it work?

Optional: what timer would you NOT use and why?

I'm looking mostly for Windows / C++ solutions, but if you'd like to comment on other systems, feel free to do so.

(Currently we use GetTickCount(), but it's not a very accurate timer.)


Answer 1:


This is a copy of my answer from: C++ Timer function to provide time in nano seconds

For Linux (and BSD) you want to use clock_gettime().

#include <time.h>   // clock_gettime(), timespec
#include <stdio.h>

int main()
{
   timespec ts;
   // clock_gettime(CLOCK_MONOTONIC, &ts); // Works on FreeBSD
   clock_gettime(CLOCK_REALTIME, &ts); // Works on Linux
   printf("%ld.%09ld\n", (long)ts.tv_sec, ts.tv_nsec);
   return 0;
}
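
For the latency case in the question, a minimal sketch of taking two timestamps and subtracting them, assuming CLOCK_MONOTONIC is available (it is generally preferable for intervals because it is not affected by wall-clock adjustments); elapsed_ns is just an illustrative helper, not part of any library API:

#include <time.h>
#include <stdio.h>

// Illustrative helper: difference between two timespecs in nanoseconds.
static long long elapsed_ns(const timespec& start, const timespec& end)
{
   return (long long)(end.tv_sec - start.tv_sec) * 1000000000LL
        + (end.tv_nsec - start.tv_nsec);
}

int main()
{
   timespec sent, acked;
   clock_gettime(CLOCK_MONOTONIC, &sent);
   // ... send the message and wait for the ack here ...
   clock_gettime(CLOCK_MONOTONIC, &acked);
   printf("latency: %lld ns\n", elapsed_ns(sent, acked));
   return 0;
}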

For Windows you want to use QueryPerformanceCounter. And here is more on QPC
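
A minimal sketch of how QueryPerformanceCounter/QueryPerformanceFrequency might be used to time a round trip on Windows (the Sleep call is only a placeholder for the work being measured):

#include <windows.h>
#include <stdio.h>

int main()
{
   LARGE_INTEGER freq, start, end;
   QueryPerformanceFrequency(&freq);   // counts per second
   QueryPerformanceCounter(&start);

   Sleep(10);  // placeholder: send the message and wait for the ack here

   QueryPerformanceCounter(&end);
   double elapsedMs = 1000.0 * (double)(end.QuadPart - start.QuadPart)
                    / (double)freq.QuadPart;
   printf("elapsed: %.3f ms\n", elapsedMs);
   return 0;
}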

Apparently there is a known issue with QPC on some chipsets, so you may want to make sure you do not have one of those chipsets. Additionally, some dual-core AMD CPUs may also cause a problem. See the second post by sebbbi, where he states:

QueryPerformanceCounter() and QueryPerformanceFrequency() offer a bit better resolution, but have different issues. For example in Windows XP, all AMD Athlon X2 dual core CPUs return the PC of either of the cores "randomly" (the PC sometimes jumps a bit backwards), unless you specially install AMD dual core driver package to fix the issue. We haven't noticed any other dual+ core CPUs having similar issues (p4 dual, p4 ht, core2 dual, core2 quad, phenom quad).




Answer 2:


You mentioned that you use GetTickCount(), so I'm going to recommend that you take a look at QueryPerformanceCounter().




Answer 3:


There is really no substitute for the rdtsc instruction. You cannot be sure what resolution QueryPerformanceCounter will provide. Some implementations have a very large granularity (a low increment rate/frequency), and some return nothing at all.

Instead, I recommend you use the rdtsc instruction. It does not require any OS implementation and returns the number of CPU internal clock cycles that have elapsed since the computer/processor/core was powered up. For a 3 GHz processor that's 3 billion increments per second - it doesn't get more precise than that, now does it? This instruction is available for x86-32 and -64 beginning with the Pentium or Pentium MMX. It should therefore be accessible from x86 Linuxes as well.
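
A minimal sketch of reading the TSC via the compiler intrinsic rather than inline assembly (assuming an x86/x86-64 target; __rdtsc comes from <intrin.h> on MSVC and <x86intrin.h> on GCC/Clang):

#include <stdio.h>
#include <stdint.h>
#if defined(_MSC_VER)
#include <intrin.h>      // __rdtsc on MSVC
#else
#include <x86intrin.h>   // __rdtsc on GCC/Clang
#endif

int main()
{
   uint64_t start = __rdtsc();

   // ... send the message and wait for the ack here ...

   uint64_t end = __rdtsc();
   printf("elapsed cycles: %llu\n", (unsigned long long)(end - start));
   return 0;
}

Dividing the cycle count by the processor's clock rate gives the elapsed time.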

There are plenty of posts about it here on stackoverflow.com. I've written a few myself ...



Source: https://stackoverflow.com/questions/305615/timers-to-measure-latency
