Question
I am printing microseconds continuously using gettimeofday(). As you can see in the program output, the time is not updated at microsecond intervals; instead, it repeats for a number of samples and then jumps, not by microseconds but by milliseconds.
while(1)
{
gettimeofday(&capture_time, NULL);
printf(".%ld\n", capture_time.tv_usec);
}
Program output:
.414719
.414719
.414719
.414719
.430344
.430344
.430344
.430344
etc.
I want the output to increment sequentially like,
.414719
.414720
.414721
.414722
.414723
or
.414723, .414723+x, .414723+2x, .414723+3x, ..., .414723+nx
It seems that the microsecond value is not refreshed when I read it from capture_time.tv_usec.
================================= //Full Program
#include <iostream>
#include <windows.h>
#include <conio.h>
#include <time.h>
#include <stdio.h>
#if defined(_MSC_VER) || defined(_MSC_EXTENSIONS)
#define DELTA_EPOCH_IN_MICROSECS 11644473600000000Ui64
#else
#define DELTA_EPOCH_IN_MICROSECS 11644473600000000ULL
#endif
struct timezone
{
int tz_minuteswest; /* minutes W of Greenwich */
int tz_dsttime; /* type of dst correction */
};
timeval capture_time; // global timestamp filled by gettimeofday()
int gettimeofday(struct timeval *tv, struct timezone *tz)
{
FILETIME ft;
unsigned __int64 tmpres = 0;
static int tzflag;
if (NULL != tv)
{
GetSystemTimeAsFileTime(&ft);
tmpres |= ft.dwHighDateTime;
tmpres <<= 32;
tmpres |= ft.dwLowDateTime;
tmpres /= 10; /* FILETIME counts 100-ns intervals: convert to microseconds first */
tmpres -= DELTA_EPOCH_IN_MICROSECS; /* then shift from the 1601 epoch to the Unix epoch */
tv->tv_sec = (long)(tmpres / 1000000UL);
tv->tv_usec = (long)(tmpres % 1000000UL);
}
if (NULL != tz)
{
if (!tzflag)
{
_tzset();
tzflag++;
}
tz->tz_minuteswest = _timezone / 60;
tz->tz_dsttime = _daylight;
}
return 0;
}
int main()
{
while(1)
{
gettimeofday(&capture_time, NULL);
printf(".%ld\n", capture_time.tv_usec);// JUST PRINTING MICROSECONDS
}
}
Answer 1:
The change in time you observe is from 0.414719 s to 0.430344 s. The difference is 15.625 ms. The fact that the number is represented in microseconds does not mean that it is incremented by 1 microsecond. This is exactly the value I would expect: 15.625 ms is the system time increment on standard hardware. I've taken a closer look at this here and here. This is called the granularity of the system time.
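A minimal sketch of observing this granularity directly, assuming Windows and the 100-nanosecond FILETIME units: poll the system time and print only the step between distinct readings.
#include <windows.h>
#include <stdio.h>
/* Sketch: estimate the system time granularity.
   FILETIME counts 100-nanosecond intervals since 1601. */
static unsigned __int64 now100ns(void)
{
    FILETIME ft;
    GetSystemTimeAsFileTime(&ft);
    return ((unsigned __int64)ft.dwHighDateTime << 32) | ft.dwLowDateTime;
}
int main(void)
{
    unsigned __int64 prev = now100ns();
    for (int steps = 0; steps < 10; )
    {
        unsigned __int64 cur = now100ns();
        if (cur != prev)  /* the system time actually ticked */
        {
            printf("step = %.3f ms\n", (cur - prev) / 10000.0);
            prev = cur;
            ++steps;
        }
    }
    return 0;
}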
Windows:
However, there is a way to improve this, i.e. to reduce the granularity: the Multimedia Timers. In particular, Obtaining and Setting Timer Resolution describes a way to increase the system's interrupt frequency.
The code:
#include <windows.h>
#include <mmsystem.h>           // timeGetDevCaps, timeBeginPeriod; link against winmm.lib

#define TARGET_PERIOD 1         // 1-millisecond target interrupt period

TIMECAPS tc;
UINT wTimerRes;

if (timeGetDevCaps(&tc, sizeof(TIMECAPS)) != TIMERR_NOERROR)
// this call queries the system's timer hardware capabilities;
// it returns wPeriodMin and wPeriodMax in the TIMECAPS structure
{
    // Error; application can't continue.
}
// finding the minimum possible interrupt period:
wTimerRes = min(max(tc.wPeriodMin, TARGET_PERIOD), tc.wPeriodMax);
// and requesting that period (undo later with a matching timeEndPeriod(wTimerRes)):
timeBeginPeriod(wTimerRes);
This will force the system to run at its maximum interrupt frequency. As a consequence, the system time is also updated more often, and the granularity of the system time increment will be close to 1 millisecond on most systems.
When you require resolution/granularity beyond this, you'd have to look into QueryPerformanceCounter. But it is to be used with care over longer periods of time. The frequency of this counter can be obtained by a call to QueryPerformanceFrequency. The OS treats this frequency as a constant and will report the same value all the time. However, the frequency is produced by hardware, and the true frequency differs from the reported value: it has an offset and it shows thermal drift. Thus the error should be assumed to be in the range of several to many microseconds per second. More details about this can be found in the second "here" link above.
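A minimal sketch of the counter-based approach, assuming Windows; the conversion to microseconds and the loop are illustrative only:
#include <windows.h>
#include <stdio.h>
int main(void)
{
    LARGE_INTEGER freq, start, now;
    QueryPerformanceFrequency(&freq);   /* counts per second, reported as constant */
    QueryPerformanceCounter(&start);
    for (int i = 0; i < 10; ++i)
    {
        QueryPerformanceCounter(&now);
        /* elapsed counts scaled to microseconds (fine for short intervals) */
        long long usec = (now.QuadPart - start.QuadPart) * 1000000LL / freq.QuadPart;
        printf("%lld us since start\n", usec);
    }
    return 0;
}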
Linux:
The situation looks somewhat different on Linux. See this to get an idea. Linux combines information from the CMOS clock using the function getnstimeofday (for the seconds since the epoch) with information from a high-frequency counter (for the microseconds) using the function timekeeping_get_ns. This is not trivial, and it is questionable in terms of accuracy since the two sources are backed by different hardware. They are not phase-locked, so it is possible to get more or fewer than one million microseconds per second.
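From user space this clock is typically read with clock_gettime; a minimal sketch, assuming a POSIX system (older glibc may require linking with -lrt):
#include <time.h>
#include <stdio.h>
int main(void)
{
    for (int i = 0; i < 10; ++i)
    {
        struct timespec ts;
        clock_gettime(CLOCK_REALTIME, &ts);   /* seconds and nanoseconds since the epoch */
        printf("%ld.%09ld\n", (long)ts.tv_sec, ts.tv_nsec);
    }
    return 0;
}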
Answer 2:
The Windows system clock only ticks every few milliseconds (in your case 64 times per second), so each tick increases the system time by 15.625 ms.
The solution is to use a timer with higher resolution than the system time (QueryPerformanceCounter).
You still won't see .414723, .414723+x, .414723+2x, ..., .414723+nx, though, because your code will not run exactly once every x microseconds. It will run as fast as it can, but there's no particular reason that this should always be a constant speed, or that, if it is, the speed is a whole number of microseconds.
Answer 3:
I recommend looking at the C++11 <chrono> header.
high_resolution_clock (C++11): the clock with the shortest tick period available
The tick period referred to here is the interval at which the clock is updated. If we look in more detail:
template< class Rep, class Period = std::ratio<1> > class duration;
The class template std::chrono::duration represents a time interval. It consists of a count of ticks of type Rep and a tick period, where the tick period is a compile-time rational constant representing the number of seconds from one tick to the next.
Previously, functions like gettimeofday would give you a time expressed in microseconds, but they would utterly fail to tell you the interval at which this time was refreshed.
In the C++11 Standard, this information is now in the clear, to make it obvious that there is no relation between the unit in which the time is expressed and the tick period, and that, therefore, you definitely need to take both into account.
The tick period is extremely important when you want to measure durations that are close to it. If the duration you wish to measure is shorter than the tick period, then you will measure it "discretely", as you observed: 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, ... I advise caution at this point.
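A minimal sketch of inspecting both the clock's declared tick period and successive readings with <chrono> (loop count and output format are arbitrary):
#include <chrono>
#include <cstdio>
int main()
{
    using clock = std::chrono::high_resolution_clock;
    /* period is a compile-time ratio: seconds per tick */
    std::printf("tick period: %lld/%lld s\n",
                (long long)clock::period::num, (long long)clock::period::den);
    auto start = clock::now();
    for (int i = 0; i < 10; ++i)
    {
        auto us = std::chrono::duration_cast<std::chrono::microseconds>(
                      clock::now() - start).count();
        std::printf("%lld us since start\n", (long long)us);
    }
    return 0;
}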
Answer 4:
This is because the process running your code isn't always scheduled to execute.
Whilst it does, it will bang round the loop quickly, printing multiple values for each microsecond - which is a comparatively long period of time on modern CPUs.
There are then periods where it is not scheduled to execute by the system, and therefore cannot print values.
If what you want to do is execute every microsecond, this may be possible with some real-time operating systems running on high performance hardware.
Source: https://stackoverflow.com/questions/13175573/why-is-microsecond-timestamp-is-repetitive-using-a-private-gettimeoftheday-i