Python's time.clock() vs. time.time() accuracy?
Previously in 2.7, according to the time module docs:
On Unix, return the current processor time as a floating point number expressed in seconds. The precision, and in fact the very definition of the meaning of “processor time”, depends on that of the C function of the same name, but in any case, this is the function to use for benchmarking Python or timing algorithms.
On Windows, this function returns wall-clock seconds elapsed since the first call to this function, as a floating point number, based on the Win32 function QueryPerformanceCounter(). The resolution is typically better than one microsecond.
Additionally, there is the timeit module for benchmarking code snippets.
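For reference, a minimal sketch of how the timeit module is typically used (the snippet being timed here is arbitrary):

```python
import timeit

# Time a small snippet: timeit runs it many times and disables
# garbage collection during the measurement to reduce noise.
elapsed = timeit.timeit("sorted(range(1000))", number=1000)
print(f"1000 runs took {elapsed:.4f}s")

# repeat() performs several independent trials; the minimum is
# usually the most stable estimate of the snippet's cost.
trials = timeit.repeat("sorted(range(1000))", number=1000, repeat=3)
print(min(trials))
```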
The short answer is: most of the time time.clock() will be better. However, if you're timing some hardware (for example, an algorithm you run on the GPU), time.clock() only counts CPU time, so the time your process spends waiting on the device is excluded, and time.time() is the only solution left.
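A sketch of this distinction, using time.sleep() as a stand-in for waiting on external hardware. Note that time.clock() was deprecated and removed in Python 3.8; the snippet uses time.process_time() (CPU time, like Unix time.clock()) and time.perf_counter() (wall clock) as the modern equivalents:

```python
import time

def wait_for_device():
    # Stand-in for time spent waiting on external hardware (e.g. a GPU):
    # the process sleeps, consuming wall-clock time but almost no CPU time.
    time.sleep(0.2)

wall_start = time.perf_counter()   # wall-clock timer
cpu_start = time.process_time()    # CPU-time timer

wait_for_device()

wall_elapsed = time.perf_counter() - wall_start
cpu_elapsed = time.process_time() - cpu_start

# The sleep shows up in wall_elapsed but barely in cpu_elapsed.
print(f"wall: {wall_elapsed:.3f}s, cpu: {cpu_elapsed:.3f}s")
```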
Note: whichever method you use, the timing will depend on factors you cannot control (when the process gets switched out, how often, ...). This is worse with time.time() but exists with time.clock() too, so you should never run a single timing test: always run a series of tests and look at the mean/variance of the times.
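A minimal sketch of running a series of timing tests and summarizing them; the benchmark helper and the workload being timed are illustrative, not from the original:

```python
import statistics
import time

def benchmark(fn, runs=10):
    # Run fn several times and collect the wall-clock duration of each run.
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        times.append(time.perf_counter() - start)
    return times

# Arbitrary workload to time.
samples = benchmark(lambda: sum(i * i for i in range(10_000)))
print(f"mean={statistics.mean(samples):.6f}s "
      f"stdev={statistics.stdev(samples):.6f}s")
```

Looking at the spread (stdev/variance) tells you how much scheduling noise is polluting the measurement; a large spread means the mean alone is not trustworthy.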