I am writing a function that returns the average number of clocks it takes to call a specific void (*)(void) function (i.e. a void -> void function) a given number of times.
I am worried that if the sample size gets too large, the sum of the observations will overflow and make the average invalid.
Is there a standard approach to removing the possibility of the sum overflowing in these kinds of problems?
Note: I understand that this example is too naive to conclude anything about performance; I am interested in eliminating the possibility of sum overflow, not in drawing any performance conclusions.
Note 2: I also understand that a 64-bit unsigned number will not realistically overflow unless the program runs for hundreds of years, but I am curious whether it is possible to eliminate this assumption too.
Here is my self-contained code:
#include <Windows.h>
#include <stdio.h>

/**
 * i want to parametrize the type which is used to store sample size
 * to see whether it impacts performance
 */
template <typename sampleunit_t>
static inline ULONGLONG AveragePerformanceClocks (void (*f)(), sampleunit_t nSamples)
{
    ULONGLONG sum;
    sampleunit_t i;

    sum = 0;
    for (i = 0; i < nSamples; ++i) {
        LARGE_INTEGER t1;
        LARGE_INTEGER t2;
        ULONGLONG dt;

        QueryPerformanceCounter(&t1);
        f();
        QueryPerformanceCounter(&t2);

        dt = t2.QuadPart - t1.QuadPart;

        // sum may possibly overflow if program runs long enough with
        // a large enough nSamples
        sum += dt;
    }

    return (ULONGLONG)(sum / nSamples);
}

/* a cdecl callback that consumes time */
static void test1()
{
    // don't optimize
    volatile int i;
    for (i = 0; i < 10000; ++i) {
    }
}

int main(int argc, char **argv)
{
    ULONGLONG avg;

    avg = AveragePerformanceClocks<BYTE>(test1, 255);
    printf("average clocks(truncated): %llu.\n", avg);

    avg = AveragePerformanceClocks<WORD>(test1, 255);
    printf("average clocks(truncated): %llu.\n", avg);

    avg = AveragePerformanceClocks<DWORD>(test1, 255);
    printf("average clocks(truncated): %llu.\n", avg);

    avg = AveragePerformanceClocks<ULONGLONG>(test1, 255);
    printf("average clocks(truncated): %llu.\n", avg);

    system("pause");
    return 0;
}
The average of the first n elements is

              SUM
    Average = ---
               n

After the next element Mi arrives, the average becomes

               (SUM + Mi)
    Average2 = ----------
                 n + 1

So given the current average, it is possible to find the next average from the new reading:

               (Average * n + Mi)
    Average2 = ------------------
                     n + 1

This can then be rearranged into an equation whose terms do not grow:

                           n        Mi
    Average2 = Average * ----- + -----
                         n + 1   n + 1
In practice, for timing, the size of an individual reading will fit within the machine's native datatype.
As pointed out, this incremental form needs a floating-point representation, and while it will not fail due to overflow, it can still fail once the difference between n/(n+1) and 1 becomes smaller than the precision of the floating-point fraction part.
Update
From incremental average there is a better rearrangement:

                         Mi - Average
    Average2 = Average + ------------
                            n + 1

It is better because it has only one division.
You can reduce the potential for overflow by adding the value dt/nSamples to sum, while making sure that you don't lose dt%nSamples.
template <typename sampleunit_t>
static inline ULONGLONG AveragePerformanceClocks (void (*f)(),
                                                  sampleunit_t nSamples)
{
    ULONGLONG delta = 0;
    ULONGLONG sum = 0;
    sampleunit_t i;

    for (i = 0; i < nSamples; ++i) {
        LARGE_INTEGER t1;
        LARGE_INTEGER t2;
        ULONGLONG dt;

        QueryPerformanceCounter(&t1);
        f();
        QueryPerformanceCounter(&t2);

        dt = t2.QuadPart - t1.QuadPart;

        // Reduce the potential for overflow.
        delta += (dt % nSamples);
        sum   += (dt / nSamples);
        sum   += (delta / nSamples);
        delta  = (delta % nSamples);
    }

    return sum;
}
To prevent an overflow of a sum value in a calculation, you can normalize the base values.
Let's say that your input data is:

20
20
20
20
20

The sum would be 100, the average 20, and the count 5.
If a new value, 30, were now added and you were using a 7-bit integer to store the sum, you would hit the overflow and have an issue.
The trick is to normalize:
- Take the new value, 30, and divide it by the average value; call it new_val_norm.
- Take the average value, divide it by the average (so 1.000), and multiply by the count; call this avg_norm.
- Add new_val_norm to avg_norm, divide by count + 1 (we just added one extra value), and multiply by the average to get the new average value.
The risk of overflow for the sum is then pushed away, since the sum is simply not used anymore.
If avg * count (avg_norm) is still too large, you can also opt to divide the new value by both avg and count, and add 1 to that.
Source: https://stackoverflow.com/questions/45765087/how-to-avoid-the-potential-of-an-overflow-when-computing-an-average-of-times