I am using different kinds of clocks to get the time on Linux systems: rdtsc, gettimeofday(), and clock_gettime(), and I am unsure how the terms precision, accuracy, granularity, and resolution relate to them.
Precision is the amount of information, i.e. the number of significant digits you report. (E.g. I am 2 m, 1.8 m, 1.83 m, and 1.8322 m tall. All those measurements are accurate, but increasingly precise.)
Accuracy is the relation between the reported information and the truth. (E.g. "I'm 1.70 m tall" is more precise than "1.8 m", but not actually accurate.)
Granularity (or resolution) is the smallest time interval the timer can measure. For example, if you have 1 ms granularity, there's little point in reporting the result with nanosecond precision, since it cannot possibly be accurate to that level.
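To see what granularity a given clock actually claims, you can query it with the POSIX clock_getres() call. A minimal sketch for Linux (the two clock IDs shown are just common examples):

```c++
#include <stdio.h>
#include <time.h>

int main() {
    // Ask the kernel for the nominal granularity (resolution) of two common clocks.
    struct timespec res;

    if (clock_getres(CLOCK_REALTIME, &res) == 0)
        printf("CLOCK_REALTIME  resolution: %ld s %ld ns\n",
               (long)res.tv_sec, res.tv_nsec);

    if (clock_getres(CLOCK_MONOTONIC, &res) == 0)
        printf("CLOCK_MONOTONIC resolution: %ld s %ld ns\n",
               (long)res.tv_sec, res.tv_nsec);
    return 0;
}
```

Note that this reports the nominal resolution; the cost and jitter of an actual call can be larger.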
On Linux, the available timers, in order of increasing resolution, are (a usage sketch follows the list):
clock() from <time.h> (20 ms or 10 ms resolution?); note that it measures processor time, not wall-clock time
gettimeofday() from POSIX <sys/time.h> (microseconds)
clock_gettime() from POSIX <time.h> (nanoseconds?)
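For comparison, here is a minimal sketch (assuming Linux with glibc) that reads each of the three timers once:

```c++
#include <stdio.h>
#include <time.h>      // clock(), clock_gettime()
#include <sys/time.h>  // gettimeofday()

int main() {
    // clock(): processor time used so far, scaled by CLOCKS_PER_SEC
    // (which POSIX fixes at 1,000,000).
    clock_t c = clock();
    printf("clock():         %f s of CPU time\n", (double)c / CLOCKS_PER_SEC);

    // gettimeofday(): wall-clock time with microsecond precision.
    struct timeval tv;
    gettimeofday(&tv, NULL);
    printf("gettimeofday():  %ld.%06ld s since the Epoch\n",
           (long)tv.tv_sec, (long)tv.tv_usec);

    // clock_gettime(): nanosecond-precision time; CLOCK_MONOTONIC counts
    // from an unspecified starting point and never jumps backwards.
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    printf("clock_gettime(): %ld.%09ld s since an arbitrary start\n",
           (long)ts.tv_sec, ts.tv_nsec);
    return 0;
}
```

(On glibc versions before 2.17, clock_gettime() lives in librt, so you may need to link with -lrt.)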
In C++, the <chrono> header offers a certain amount of abstraction around this, and std::high_resolution_clock is the clock with the shortest tick period the implementation provides.
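A minimal sketch of interval timing with <chrono> (C++11 or later; the loop is just placeholder work to measure):

```c++
#include <chrono>
#include <iostream>

int main() {
    // Time a piece of work with the highest-resolution clock <chrono> offers.
    auto start = std::chrono::high_resolution_clock::now();
    volatile long sink = 0;
    for (long i = 0; i < 1000000; ++i) sink += i;  // placeholder workload
    auto stop = std::chrono::high_resolution_clock::now();

    auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(stop - start);
    std::cout << "elapsed: " << ns.count() << " ns\n";

    // The tick period (as a compile-time ratio of seconds) is the clock's
    // nominal granularity.
    std::cout << "tick period: "
              << std::chrono::high_resolution_clock::period::num << "/"
              << std::chrono::high_resolution_clock::period::den << " s\n";
}
```

In practice, high_resolution_clock is usually an alias for system_clock or steady_clock; for measuring intervals, std::chrono::steady_clock is often the safer explicit choice, since it is guaranteed never to jump backwards.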