Jacobian in scipy.optimize.minimize form


Question


My goal is to optimize the least squares of a 4th-degree polynomial function with some constraints, so I want to use scipy.optimize.minimize(..., method='SLSQP', ...). In optimization, it is always good to pass the Jacobian to the method. I am not sure, however, how to design my 'jac' function.

My objective function is designed like this:

def least_squares(args_pol, x, y):
    # sum of squared residuals of a 4th-degree polynomial
    a, b, c, d, e = args_pol
    return ((y - (a*x**4 + b*x**3 + c*x**2 + d*x + e))**2).sum()

where x and y are numpy arrays containing the coordinates of the points. I found in the documentation that the 'jac' argument of scipy.optimize.minimize is the gradient of the objective function, and thus an array of its first derivatives.
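For context, a minimal call without a Jacobian might look like this (the data and the initial guess x0 here are hypothetical, just to illustrate the signature):

import numpy as np
from scipy.optimize import minimize

# hypothetical sample data and starting point for (a, b, c, d, e)
x = np.linspace(-1.0, 1.0, 50)
y = x**4 - 2.0*x + 0.5
x0 = np.zeros(5)

result = minimize(least_squares, x0, args=(x, y), method='SLSQP')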

For args_pol it is easy to find the first derivatives; for example,

db = (2*(a*x**4 + b*x**3 + c*x**2 + d*x + e - y)*x**3).sum()
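Collecting all five coefficient derivatives into one function by the same pattern might look like this (a sketch; the function name is my own):

def least_squares_grad_coeffs(args_pol, x, y):
    # derivatives of the objective with respect to a, b, c, d, e only
    a, b, c, d, e = args_pol
    r = a*x**4 + b*x**3 + c*x**2 + d*x + e - y  # residuals p(x) - y
    da = (2*r*x**4).sum()
    db = (2*r*x**3).sum()
    dc = (2*r*x**2).sum()
    dd = (2*r*x).sum()
    de = (2*r).sum()
    return np.array([da, db, dc, dd, de])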

For each x[i] in my numpy array x, however, the derivative is

dx_i = (2*(a*x[i]**4 + b*x[i]**3 + c*x[i]**2 + d*x[i] + e - y[i])
        * (4*a*x[i]**3 + 3*b*x[i]**2 + 2*c*x[i] + d))

and so on for each y[i]. Thus, a reasonable approach is to compute these derivatives as numpy arrays dx and dy.
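Vectorized with numpy, those arrays could be computed like this (again a sketch with a name of my own; the dy line is my assumption, following the same chain rule applied to y[i]):

def point_derivatives(args_pol, x, y):
    # per-point derivatives with respect to each x[i] and y[i]
    a, b, c, d, e = args_pol
    r = a*x**4 + b*x**3 + c*x**2 + d*x + e - y   # residuals p(x) - y
    dx = 2*r*(4*a*x**3 + 3*b*x**2 + 2*c*x + d)   # chain rule through p(x)
    dy = -2*r                                    # d/dy[i] of (y[i] - p(x[i]))**2
    return dx, dy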

My question is: what form should the result of my gradient function take? For example, should it look like

return np.array([[da, db, dc, dd, de], [dx[1], dx[2], ..., dx[len(x)-1]],
                 [dy[1], dy[2], ..., dy[len(y)-1]]])

or should it look like

return np.array([da, db, dc, dd, de, dx, dy])

Thanks for any explanations.

Source: https://stackoverflow.com/questions/47551885/jacobian-in-scipy-optimize-minimize-form
