I was implementing an algorithm to calculate natural logs in C.
double taylor_ln(int z) {
    double sum = 0.0;
    double tmp = 1.0;
    int i = 1;
    /* ... (the rest of the question's function is cut off here; per the
       discussion below, it summed the series terms in a while loop until
       tmp fell below some small threshold) */
Plenty of discussion of the cause, but here's an alternative solution:
#include <math.h>    /* pow() */
#include <stdio.h>   /* printf() */

double taylor_ln(int z)
{
    double sum = 0.0;
    double tmp, old_sum;
    int i = 1;

    do
    {
        old_sum = sum;
        /* Next term of the series: (1/i) * ((z-1)/(z+1))^i, for odd i only. */
        tmp = (1.0 / i) * (pow(((z - 1.0) / (z + 1.0)), i));
        printf("(1.0 / %d) * (pow(((%d - 1.0) / (%d + 1.0)), %d)) = %f\n",
               i, z, z, i, tmp);
        sum += tmp;
        i += 2;
    } while (sum != old_sum);   /* stop once a term no longer changes sum */

    return sum * 2;
}
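If it helps, here's a minimal test harness (the main() and the loop bounds are my own illustration, not part of the original) that compares the function above against log() from math.h:

#include <math.h>
#include <stdio.h>

double taylor_ln(int z);   /* defined above */

int main(void)
{
    for (int z = 2; z <= 10; z++)
        printf("taylor_ln(%d) = %.12f   log(%d) = %.12f\n",
               z, taylor_ln(z), z, log(z));
    return 0;
}

The debug printf inside taylor_ln will interleave with this output; drop it once you're satisfied the series converges.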
This approach focuses on whether each decreasing value of tmp makes a tangible difference to sum. It's easier than working out some small threshold below which tmp becomes insignificant, and it probably terminates earlier without changing the result.
Note that when you sum a relatively big number with a relatively small one, the significant digits of the big one limit the precision of the result. By way of contrast, if you sum several small values first and then add that total to the big one, the accumulated total may be enough to bump the big one up a little. In your algorithm the small tmp values weren't being summed with each other anyway, so nothing accumulates unless each term individually affects sum; hence the approach above works without further compromising precision.
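As a concrete illustration of that last point (the values 1e16 and 1.0 are just chosen to make the effect visible), adding a small value to a big double one term at a time can be absorbed entirely, whereas summing the small values first and then adding the total can still nudge the big one:

#include <stdio.h>

int main(void)
{
    double big = 1e16;
    double small = 1.0;   /* too small to register against 1e16 on its own */

    /* Adding the small values one at a time: each addition is absorbed. */
    double a = big;
    for (int i = 0; i < 10; i++)
        a += small;

    /* Summing the small values first, then adding their total. */
    double partial = 0.0;
    for (int i = 0; i < 10; i++)
        partial += small;
    double b = big + partial;

    printf("one at a time: %.1f\n", a);   /* typically 10000000000000000.0 */
    printf("summed first : %.1f\n", b);   /* typically 10000000000000010.0 */
    return 0;
}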