Python scipy.optimize.fmin_l_bfgs_b error occurs

Submitted by 六眼飞鱼酱① on 2019-12-02 00:02:13

You need to take the derivative of func with respect to each element of your concatenated array of alpha, beta, w, gamma parameters, so func_grad ought to return a single 1D array of the same length as x0 (i.e. 22). Instead it returns a jumble of two arrays and two scalar floats nested inside an np.object array:
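To see the contract fmin_l_bfgs_b expects, here is a minimal sketch on a toy quadratic (the function and names here are purely illustrative, not the code from the question): fprime must return a 1D float array with exactly x.size entries.

```python
import numpy as np
from scipy.optimize import fmin_l_bfgs_b

def func(x):
    # Toy objective: sum of squared distances from 1.0.
    return np.sum((x - 1.0) ** 2)

def func_grad(x):
    # Gradient has the same shape as x -- this is the key requirement.
    return 2.0 * (x - 1.0)

x0 = np.zeros(5)
x_opt, f_opt, info = fmin_l_bfgs_b(func, x0, fprime=func_grad)
print(x_opt)  # all components near 1.0
```

If fprime returns anything other than a flat array of length x0.size, the Fortran routine underneath raises an error when it tries to read the gradient.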

In [1]: func_grad(x0, X, Y, Z)
Out[1]: 
array([array([ 0.00681272,  0.00681272,  0.00681272,  0.00681272]),
       0.006684719133999417,
       array([-0.01351227, -0.01351227, -0.01351227, -0.01351227]),
       -0.013639910534587798], dtype=object)

Part of the problem is that np.array([d_f_a, d_f_b, d_f_w, d_f_g]) does not concatenate those objects into a single 1D array, since some are numpy arrays and some are Python floats. That part is easily solved by using np.hstack([d_f_a, d_f_b, d_f_w, d_f_g]) instead.
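The difference is easy to demonstrate with dummy gradient pieces mirroring the shapes in the question (the values here are placeholders, not the actual gradients):

```python
import numpy as np

# Hypothetical gradient pieces: two (4,) arrays and two Python floats,
# matching the shapes shown in the func_grad output above.
d_f_a = np.full(4, 0.0068)
d_f_b = 0.0067
d_f_w = np.full(4, -0.0135)
d_f_g = -0.0136

# np.array on a mix of arrays and scalars builds a ragged object array
# (older NumPy did this implicitly; recent versions require dtype=object).
jumbled = np.array([d_f_a, d_f_b, d_f_w, d_f_g], dtype=object)
print(jumbled.shape, jumbled.dtype)   # (4,) object

# np.hstack flattens everything into one contiguous 1D float array.
flat = np.hstack([d_f_a, d_f_b, d_f_w, d_f_g])
print(flat.shape, flat.dtype)         # (10,) float64
```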

However, the combined size of these objects is still only 10, whereas the output of func_grad needs to be a 22-long vector. You will need to take another look at your df_* functions. In particular, W is a (3, 4) array, but df_w only returns a (4,) vector, and gamma is a (4,) vector whereas df_gamma only returns a scalar.
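The overall pattern is to ravel each per-parameter gradient and stack them in the same order the parameters were packed into x0, so that the result always satisfies g.shape == x0.shape. A sketch, using the parameter shapes stated above and dummy zero gradients (the exact packing order and the shape of beta are assumptions for illustration):

```python
import numpy as np

# Assumed parameter shapes: alpha (4,), beta scalar, W (3, 4), gamma (4,).
alpha = np.zeros(4)
beta = 0.0
W = np.zeros((3, 4))
gamma = np.zeros(4)

# Pack the parameters into one flat vector for the optimizer.
x0 = np.hstack([alpha, beta, W.ravel(), gamma])

def func_grad_shape_demo(x):
    # Dummy gradients with the *correct* shapes: each piece matches
    # the shape of the parameter it differentiates with respect to.
    d_f_a = np.zeros(4)        # like alpha: (4,)
    d_f_b = 0.0                # like beta: scalar
    d_f_w = np.zeros((3, 4))   # like W: (3, 4), not (4,)
    d_f_g = np.zeros(4)        # like gamma: (4,), not a scalar
    # Ravel multi-dimensional pieces before stacking.
    return np.hstack([d_f_a, d_f_b, d_f_w.ravel(), d_f_g])

g = func_grad_shape_demo(x0)
print(g.shape == x0.shape)  # True -- the invariant fmin_l_bfgs_b needs
```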
