Parallel many-dimensional optimization

萌比男神i · 2021-02-07 20:48

I am building a script that generates input data [parameters] for another program to calculate. I would like to optimize the resulting data. Previously I have been using the num…

7 Answers
  •  情书的邮戳 · 2021-02-07 21:16

    There are two ways of estimating gradients, one easily parallelizable, one not:

    • around a single point, e.g. (f(x + h*e_i) - f(x)) / h for each coordinate direction e_i; this is easily parallelizable, up to N_dim independent evaluations (see the sketch after this list)
    • "walking" gradient: walk from x0 in direction e0 to x1, then from x1 in direction e1 to x2 ...; this is sequential.

    Minimizers that use gradients are highly developed, powerful, and converge quadratically (on smooth enough functions). The user-supplied gradient function can of course be a parallel gradient estimator.
    A few minimizers use "walking" gradients, among them Powell's method; see Numerical Recipes p. 509.
    So I'm confused: how do you parallelize its inner loop?

    I'd suggest scipy.optimize.fmin_tnc with a parallel gradient estimator, maybe using central rather than one-sided differences.
    (Fwiw, this compares some of the scipy no-derivative optimizers on two 10-d functions; ymmv.)
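
    Putting the pieces together, a sketch of how that suggestion could look, reusing the parallel_gradient helper from above and scipy's built-in rosen as a stand-in for the real (expensive) objective:

        import numpy as np
        from multiprocessing import Pool
        from scipy.optimize import fmin_tnc, rosen

        if __name__ == "__main__":
            x0 = np.full(10, 0.5)          # 10-d starting point
            with Pool() as pool:           # one pool, reused for every gradient call
                xopt, nfeval, rc = fmin_tnc(
                    rosen, x0,
                    fprime=lambda x: parallel_gradient(rosen, x, pool=pool))
            print("xopt =", xopt, "function evaluations:", nfeval)

    fmin_tnc returns the minimizer, the number of function evaluations, and a return code; whether the parallel gradient actually pays off depends on how expensive a single f evaluation is relative to the process-pool overhead.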
