I have just written this short C++ program to approximate the actual number of clock ticks per second.
#include <iostream>
#include <ctime>
using namespace std;
C99 standard
The only thing the C99 N1256 standard draft says about CLOCKS_PER_SEC is that:
CLOCKS_PER_SEC which expands to an expression with type clock_t (described below) that is the number per second of the value returned by the clock function
As others mention, POSIX sets it to 1 million, which limits the precision of this clock to 1 microsecond. I think this is just a historical value from the days when maximum CPU frequencies were measured in megahertz.
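If you want to probe the granularity you actually get (as opposed to the unit that CLOCKS_PER_SEC implies), a quick sketch is to spin until clock() first changes value; on real systems the observed step is often much coarser than one tick:

#include <ctime>
#include <iostream>
using namespace std;

int main() {
    clock_t start = clock(), next;
    while ((next = clock()) == start) {}  // spin until clock() advances
    cout << "smallest observed clock() step: " << next - start
         << " (out of " << CLOCKS_PER_SEC << " ticks per second)\n";
}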
clock() returns the amount of time spent in your program. There are 1,000,000 clock ticks per second total*. It appears that your program consumed 60% of them.
Something else used the other 40%.
*Okay, there are virtually 1,000,000 clock ticks per second. The actual number is normalized so your program perceives 1,000,000 ticks.
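One way to see the difference between "time spent in your program" and wall-clock time is the sketch below, assuming a POSIX-like system where sleeping consumes essentially no processor time:

#include <chrono>
#include <ctime>
#include <iostream>
#include <thread>
using namespace std;

int main() {
    clock_t start = clock();
    this_thread::sleep_for(chrono::seconds(1));  // asleep: not time spent in your program
    cout << "ticks after 1 s of sleeping: " << clock() - start << "\n";

    start = clock();
    while (clock() - start < CLOCKS_PER_SEC) {}  // spinning: CPU time accrues
    cout << "ticks after 1 s of spinning: " << clock() - start << "\n";
}

On an idle machine the second number should sit near 1,000,000 while the first stays near zero.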
For example, if you calculate
(second_clock - first_clock) / (double)CLOCKS_PER_SEC
you get the processor time, in seconds, that your program consumed between the first and second calls to the clock() function (the cast avoids integer truncation).
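A minimal self-contained sketch of that calculation (the busy loop is just a hypothetical stand-in for real work):

#include <ctime>
#include <iostream>
using namespace std;

int main() {
    clock_t first_clock = clock();
    for (volatile long i = 0; i < 100000000L; ++i) {}  // stand-in workload
    clock_t second_clock = clock();
    cout << "CPU time used: "
         << (second_clock - first_clock) / (double)CLOCKS_PER_SEC << " s\n";
}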
Well duh. You don't know how far into the current second you start the timing, do you? So you can get any result from 1 to CLOCKS_PER_SEC. Try this in your inner loop:
time_t first_time = time(NULL);
// Wait for time() to roll over to the next second before starting the clock!
while (time(NULL) <= first_time) {}
clock_t first_clock = clock();
// Now let one full second elapse.
first_time = time(NULL);
while (time(NULL) <= first_time) {}
time_t second_time = time(NULL);
clock_t second_clock = clock();
cout << "Actual clocks per second = "
     << (second_clock - first_clock) / (second_time - first_time) << "\n";
cout << "CLOCKS_PER_SEC = " << CLOCKS_PER_SEC << "\n";
See ideone for full source code. It reports actual clocks per second as 1000000, as you would expect. (I had to reduce the number of iterations to 2, so that ideone didn't time out.)
From the man page of clock(3):
POSIX requires that CLOCKS_PER_SEC equals 1000000 independent of the actual resolution.
Your implementation seems to follow POSIX at least in that respect.
Running your program here, I get
Actual clocks per second = 980000
CLOCKS_PER_SEC = 1000000
Actual clocks per second = 1000000
CLOCKS_PER_SEC = 1000000
Actual clocks per second = 990000
CLOCKS_PER_SEC = 1000000
Actual clocks per second = 1000000
CLOCKS_PER_SEC = 1000000
Actual clocks per second = 1000000
CLOCKS_PER_SEC = 1000000
Actual clocks per second = 1000000
CLOCKS_PER_SEC = 1000000
Actual clocks per second = 1000000
CLOCKS_PER_SEC = 1000000
Actual clocks per second = 1000000
CLOCKS_PER_SEC = 1000000
Actual clocks per second = 1000000
CLOCKS_PER_SEC = 1000000
Actual clocks per second = 1000000
CLOCKS_PER_SEC = 1000000
or similar output on an idle machine, and output like
Actual clocks per second = 50000
CLOCKS_PER_SEC = 1000000
Actual clocks per second = 600000
CLOCKS_PER_SEC = 1000000
Actual clocks per second = 530000
CLOCKS_PER_SEC = 1000000
Actual clocks per second = 580000
CLOCKS_PER_SEC = 1000000
Actual clocks per second = 730000
CLOCKS_PER_SEC = 1000000
Actual clocks per second = 730000
CLOCKS_PER_SEC = 1000000
Actual clocks per second = 600000
CLOCKS_PER_SEC = 1000000
Actual clocks per second = 560000
CLOCKS_PER_SEC = 1000000
Actual clocks per second = 600000
CLOCKS_PER_SEC = 1000000
Actual clocks per second = 620000
CLOCKS_PER_SEC = 1000000
on a busy machine. Since clock() measures the (approximate) processor time spent in your program, it seems that you tested on a busy machine, and your program got only about 60% of the CPU time.
When you set first_time = time(NULL), the current second may be nearly over: time(NULL) truncates to whole seconds, so it can be a tiny fraction of a second (even "1 nanosecond") away from ticking over to the next value. In that case, the while (time(NULL) <= first_time) {} loop exits much sooner than the full second you expected.
That's why there are fewer clock ticks in that "1 second" of yours.
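A minimal sketch of the effect, contrasting a start taken mid-second with a start aligned to a second boundary (assuming POSIX CLOCKS_PER_SEC semantics as above):

#include <ctime>
#include <iostream>
using namespace std;

int main() {
    // Naive: start wherever we happen to be inside the current second.
    time_t t0 = time(NULL);
    clock_t c0 = clock();
    while (time(NULL) <= t0) {}  // this "second" may be almost over already
    cout << "naive ticks:   " << clock() - c0 << "\n";

    // Aligned: wait for time() to tick over, then measure a full second.
    time_t edge = time(NULL);
    while (time(NULL) <= edge) {}  // now at (roughly) a second boundary
    clock_t c1 = clock();
    edge = time(NULL);
    while (time(NULL) <= edge) {}
    cout << "aligned ticks: " << clock() - c1 << "\n";
}

On an idle machine the aligned figure should land near CLOCKS_PER_SEC, while the naive one can land anywhere from roughly 0 up to CLOCKS_PER_SEC.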