How approximation search works

星月不相逢 · 2020-11-22 04:20

[Prologue]

This Q&A is meant to explain more clearly the inner workings of my approximation-search class, which I first published…

2 Answers
  •  野的像风
    2020-11-22 04:43

    A combination of the secant method (with bracketing, but see the correction at the bottom) and the bisection method is much better:

    We find root approximations by secants, and keep the root bracketed as in bisection.

    Always keep the two endpoints of the interval such that the function value is negative at one and positive at the other, so the root is guaranteed to lie inside; but instead of halving, use the secant method.

    Pseudocode:

    given a function f
    given two points a, b, such that a < b and sign(f(a)) /= sign(f(b))
    given tolerance tol
    find root z of f such that abs(f(z)) < tol     -- stop_condition
    
    DO:
        x = root of f by linear interpolation of f between a and b
        m = midpoint between a and b
    
        if stop_condition holds at x or m, set z and STOP
    
        [a,b] := [a,x,m,b].sort.choose_shortest_interval_with_opposite_signs_at_its_ends
    

    This obviously halves the interval [a,b], or does even better, at each iteration; so unless the function is extremely badly behaved (like, say, sin(1/x) near x=0), this will converge very quickly, taking at most two evaluations of f per iteration step.
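    This is not the answerer's original class; the pseudocode above can be sketched in Python as follows, with `hybrid_root` and its parameter names being my own choices:

```python
def hybrid_root(f, a, b, tol=1e-12, max_iter=200):
    # Bracketed secant + bisection hybrid, following the pseudocode above:
    # keep f(a) and f(b) of opposite signs; each step evaluates f at the
    # secant point x and the midpoint m, then keeps the shortest sub-interval
    # of [a, x, m, b] whose endpoints still bracket the root.
    fa, fb = f(a), f(b)
    if abs(fa) < tol:
        return a
    if abs(fb) < tol:
        return b
    if (fa > 0) == (fb > 0):
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        x = a - fa * (b - a) / (fb - fa)   # secant (linear interpolation)
        m = 0.5 * (a + b)                  # bisection midpoint
        fx, fm = f(x), f(m)
        if abs(fx) < tol:                  # stop_condition holds at x
            return x
        if abs(fm) < tol:                  # stop_condition holds at m
            return m
        # Sort the four candidate points by position, then pick the shortest
        # adjacent pair with opposite signs at its ends as the new bracket.
        pts = sorted([(a, fa), (x, fx), (m, fm), (b, fb)])
        best = None
        for (p, fp), (q, fq) in zip(pts, pts[1:]):
            if (fp > 0) != (fq > 0) and (best is None or q - p < best[1][0] - best[0][0]):
                best = ((p, fp), (q, fq))
        (a, fa), (b, fb) = best
    return 0.5 * (a + b)                   # best bracket after max_iter
```

    Since the new bracket is never longer than half of [m, a] or [m, b], the interval shrinks at least as fast as plain bisection, while the secant point usually lands much closer to the root.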

    And we can detect the badly behaved cases by checking that b-a does not become too small (especially when working with finite precision, as with doubles).

    Update: apparently this is actually the double false position method, i.e. secant with bracketing, as described by the pseudocode above. Augmenting it with the middle point as in bisection ensures convergence even in the most pathological cases.
