Very fast approximate Logarithm (natural log) function in C++?

Submitted by 痞子三分冷 on 2020-07-05 02:52:07

Question


We find various tricks to replace std::sqrt (Timing Square Root) and some for std::exp (Using Faster Exponential Approximation), but I find nothing to replace std::log.

It's called multiple times inside loops in my program, and while exp and sqrt have been optimized, Intel VTune now suggests optimizing std::log. After that, it seems only my design choices will be the limiting factor.

For now I use a 3rd-order Taylor approximation of ln(1+x) with x between -0.5 and +0.5 (90% of cases, with a maximum error of about 4%) and fall back to std::log otherwise. This gave me a 15% speed-up.
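A minimal sketch of that scheme (the function name is illustrative; the polynomial and threshold follow the description above):

#include <cmath>

// 3rd-order Taylor approximation of ln(1+x) for |x| <= 0.5,
// falling back to std::log outside that range
double approx_log (double a)
{
    double x = a - 1.0;
    if (std::fabs (x) <= 0.5) {
        // ln(1+x) = x - x^2/2 + x^3/3 + O(x^4)
        return x * (1.0 + x * (-0.5 + x * (1.0 / 3.0)));
    }
    return std::log (a);
}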


Answer 1:


Prior to embarking on the design and deployment of a customized implementation of a transcendental function for performance, it is highly advisable to pursue optimizations at the algorithmic level, as well as through the toolchain. Unfortunately, we do not have any information about the code to be optimized here, nor do we have information on the toolchain.

At the algorithmic level, check whether all calls to transcendental functions are truly necessary. Maybe there is a mathematical transformation that requires fewer function calls, or that converts transcendental functions into algebraic operations. Are any of the transcendental function calls possibly redundant, e.g. because the computation is unnecessarily switching in and out of logarithmic space? If the accuracy requirements are modest, can the whole computation be performed in single precision, using float instead of double throughout? On most hardware platforms, avoiding double computation can lead to a significant performance boost.
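As a hypothetical illustration of such a transformation: if a loop accumulates log(x[i]) term by term, the identity log(a) + log(b) = log(a*b) allows multiplying a small block of terms first and taking a single logarithm, trading many transcendental calls for cheap multiplications (real code must keep the running product from overflowing or underflowing):

#include <cmath>
#include <cstddef>

// sum of logarithms computed as the logarithm of a product;
// blocks of 8 limit the growth of the running product, but a
// robust version would monitor its magnitude explicitly
double sum_of_logs (const double *x, size_t n)
{
    double sum = 0.0;
    size_t i = 0;
    while (i < n) {
        double prod = 1.0;
        for (size_t j = 0; (j < 8) && (i < n); j++, i++) {
            prod *= x[i];
        }
        sum += std::log (prod); // one log call per block of up to 8 terms
    }
    return sum;
}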

Compilers tend to offer a variety of switches that affect the performance of numerically intensive code. In addition to increasing the general optimization level to -O3, there is often a way to turn off denormal support, i.e. to turn on flush-to-zero (FTZ) mode, which has performance benefits on various hardware platforms. In addition, there is often a "fast math" flag (for example, -ffast-math on GCC and Clang, or /fp:fast on MSVC) whose use results in slightly reduced accuracy and eliminates the overhead of handling special cases such as NaNs and infinities, plus the handling of errno. Some compilers also support auto-vectorization of code and ship with a SIMD math library, for example the Intel compiler.

A custom implementation of a logarithm function typically involves separating a binary floating-point argument x into an exponent e and a mantissa m, such that x = m * 2^e, therefore log(x) = log(2) * e + log(m). m is chosen such that it is close to unity, because this provides for efficient approximations, for example log(m) = log(1+f) = log1p(f) by minimax polynomial approximation.
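For example, with x = 6.0 the decomposition gives 6.0 = 0.75 * 2^3, i.e. m = 0.75 and e = 3, so log(6) = 3 * log(2) + log(0.75) ≈ 2.079442 - 0.287682 ≈ 1.791759, which matches log(6) computed directly.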

C++ provides the frexp() function to separate a floating-point operand into mantissa and exponent, but in practice one typically uses faster machine-specific methods that manipulate floating-point data at the bit level by re-interpreting them as same-size integers. The code below for the single-precision logarithm, logf(), demonstrates both variants. The functions __int_as_float() and __float_as_int() reinterpret an int32_t as an IEEE-754 binary32 floating-point number and vice-versa; portable definitions based on memcpy() are included in the listing. This code relies heavily on the fused multiply-add operation FMA, which is supported directly in hardware on most current processors, CPU or GPU. On platforms where fmaf() maps to software emulation, this code will be unacceptably slow.

#include <cmath>
#include <cstdint>
#include <cstring>

/* portable bit-level reinterpretation between int32_t and IEEE-754 binary32 */
static inline float __int_as_float (int32_t a) { float r; std::memcpy (&r, &a, sizeof r); return r; }
static inline int32_t __float_as_int (float a) { int32_t r; std::memcpy (&r, &a, sizeof r); return r; }

/* compute natural logarithm, maximum error 0.85756 ulps */
float my_logf (float a)
{
    float m, r, s, t, i, f;
    int32_t e;

    if ((a > 0.0f) && (a <= 3.40282347e+38f)) { // 0x1.fffffep+127
#if PORTABLE
        m = frexpf (a, &e);
        if (m < 0.666666667f) {
            m = m + m;
            e = e - 1;
        }
        i = (float)e;
#else // PORTABLE
        i = 0.0f;
        /* fix up denormal inputs */
        if (a < 1.175494351e-38f){ // 0x1.0p-126
            a = a * 8388608.0f; // 0x1.0p+23
            i = -23.0f;
        }
        e = (__float_as_int (a) - 0x3f2aaaab) & 0xff800000;
        m = __int_as_float (__float_as_int (a) - e);
        i = fmaf ((float)e, 1.19209290e-7f, i); // 0x1.0p-23
#endif // PORTABLE
        /* m in [2/3, 4/3] */
        f = m - 1.0f;
        s = f * f;
        /* Compute log1p(f) for f in [-1/3, 1/3] */
        r = fmaf (-0.130187988f, f, 0.140889585f); // -0x1.0aa000p-3, 0x1.208ab8p-3
        t = fmaf (-0.121489584f, f, 0.139809534f); // -0x1.f19f10p-4, 0x1.1e5476p-3
        r = fmaf (r, s, t);
        r = fmaf (r, f, -0.166845024f); // -0x1.55b2d8p-3
        r = fmaf (r, f,  0.200121149f); //  0x1.99d91ep-3
        r = fmaf (r, f, -0.249996364f); // -0x1.fffe18p-3
        r = fmaf (r, f,  0.333331943f); //  0x1.5554f8p-2
        r = fmaf (r, f, -0.500000000f); // -0x1.000000p-1
        r = fmaf (r, s, f);
        r = fmaf (i, 0.693147182f, r); //   0x1.62e430p-1 // log(2) 
    } else {
        r = a + a;  // silence NaNs if necessary
        if (a  < 0.0f) r =  0.0f / 0.0f; //  NaN
        if (a == 0.0f) r = -1.0f / 0.0f; // -Inf
    }
    return r;
}

As noted in the code comment, the implementation above provides faithfully-rounded single-precision results, and it deals with exceptional cases consistent with the IEEE-754 floating-point standard. The performance can be increased further by eliminating special-case support, eliminating the support for denormal arguments, and reducing the accuracy. This leads to the following exemplary variant:

/* natural log on [0x1.f7a5ecp-127, 0x1.fffffep127]. Maximum relative error 9.4529e-5 */
float my_faster_logf (float a)
{
    float m, r, s, t, i, f;
    int32_t e;

    e = (__float_as_int (a) - 0x3f2aaaab) & 0xff800000;
    m = __int_as_float (__float_as_int (a) - e);
    i = (float)e * 1.19209290e-7f; // 0x1.0p-23
    /* m in [2/3, 4/3] */
    f = m - 1.0f;
    s = f * f;
    /* Compute log1p(f) for f in [-1/3, 1/3] */
    r = fmaf (0.230836749f, f, -0.279208571f); // 0x1.d8c0f0p-3, -0x1.1de8dap-2
    t = fmaf (0.331826031f, f, -0.498910338f); // 0x1.53ca34p-2, -0x1.fee25ap-2
    r = fmaf (r, s, t);
    r = fmaf (r, s, f);
    r = fmaf (i, 0.693147182f, r); // 0x1.62e430p-1 // log(2) 
    return r;
}
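A minimal spot check for both variants (assuming the definitions above are in scope; the error bounds quoted in the code comments come from exhaustive testing, which a few sample points cannot replace):

#include <cstdio>

int main (void)
{
    const float test[] = {0.01f, 0.5f, 1.0f, 2.7182818f, 100.0f};
    for (float x : test) {
        printf ("x=%10g  std::log=%13.8f  my_logf=%13.8f  my_faster_logf=%13.8f\n",
                x, std::log (x), my_logf (x), my_faster_logf (x));
    }
    return 0;
}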



Answer 2:


Take a look at this discussion; the accepted answer refers to an implementation of a function for computing logarithms based on the Zeckendorf decomposition.

In the comments of the implementation file there is a discussion of the complexity and of some tricks to reach O(1).

Hope this helps!




Answer 3:

A table-based alternative: the exponent field supplies the integer part of log2(x), and the top bits of the mantissa index a precomputed table of log2(1 + i/N) values.


#include <cmath>
#include <cstdint>
#include <cstring>
#include <iostream>

constexpr int LogPrecisionLevel = 14;
constexpr int LogTableSize = 1 << LogPrecisionLevel;

double log_table[LogTableSize];

void init_log_table() {
    for (int i = 0; i < LogTableSize; i++) {
        log_table[i] = std::log2(1 + (double)i / LogTableSize);
    }
}

double fast_log2(double x) { // assumes normalized x > 0
    uint64_t t;
    std::memcpy(&t, &x, sizeof t); // bit pattern; avoids the strict-aliasing violation of *(long long*)&x
    int exp = (int)(t >> 52) - 0x3ff;                                           // unbiased exponent
    int mantissa = (int)((t >> (52 - LogPrecisionLevel)) & (LogTableSize - 1)); // top mantissa bits
    return exp + log_table[mantissa];
}

int main() {
    init_log_table();

    std::cout.precision(17);
    std::cout << std::log2(100.0)  << "\n"; // 6.6438561897747244
    std::cout << fast_log2(100.0)  << "\n"; // 6.6438561897747244
    std::cout << std::log2(0.01)   << "\n"; // -6.6438561897747244
    std::cout << fast_log2(0.01)   << "\n"; // -6.6438919626096089
}



Answer 4:


It depends on how accurate you need to be. Often log is called to get an idea of the magnitude of a number, which you can do essentially for free by examining the exponent field of the floating-point representation; that is also your first approximation. I'll put in a plug for my book "Basic Algorithms", which explains how to implement the standard library math functions from first principles.
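As a sketch of that idea (assuming IEEE-754 binary64 doubles): the unbiased exponent field is essentially floor(log2(x)) for normalized x > 0, and extracting it costs only a couple of integer operations:

#include <cstdint>
#include <cstring>

// crude magnitude estimate: floor(log2(x)) for normalized x > 0,
// read directly from the exponent field of an IEEE-754 double
int ilog2_estimate (double x)
{
    uint64_t bits;
    std::memcpy (&bits, &x, sizeof bits); // bit pattern of the double
    return (int)((bits >> 52) & 0x7ff) - 1023;
}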



Source: https://stackoverflow.com/questions/39821367/very-fast-approximate-logarithm-natural-log-function-in-c
