I can give time.sleep() floating point numbers, such as
time.sleep(0.5)
but how accurate is it? If I give it
time.sleep(0.05)
will it really sleep about 50 ms?
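One quick way to check is to time a single call (a minimal sketch; using time.perf_counter is my own choice here, available since Python 3.3):

    import time

    requested = 0.05
    start = time.perf_counter()          # high-resolution timer
    time.sleep(requested)
    elapsed = time.perf_counter() - start
    print("requested %.3f ms, actually slept %.3f ms"
          % (requested * 1e3, elapsed * 1e3))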
People are quite right about the differences between operating systems and kernels, but I do not see any granularity in Ubuntu, while I see a 1 ms granularity on Windows 7. This suggests a different implementation of time.sleep, not just a different tick rate. Closer inspection suggests a 1 μs granularity in Ubuntu, by the way, but that is due to the time.time function that I use for measuring the accuracy.
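For reference, here is a minimal sketch of the kind of measurement described above, using time.time as the measuring clock; note that the clock's own resolution (roughly 1 μs on Linux) bounds what the test can detect:

    import time

    # Sweep progressively smaller sleep requests and measure the
    # actual delay with time.time.
    for requested in (0.01, 0.001, 0.0001, 0.00001):
        start = time.time()
        time.sleep(requested)
        actual = time.time() - start
        print("requested %9.3f ms -> actual %9.3f ms"
              % (requested * 1e3, actual * 1e3))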