scipy.optimize.leastsq returns best guess parameters not new best fit

Submitted by 三世轮回 on 2019-12-23 12:24:37

Question


I want to fit a Lorentzian peak to a set of data x and y; the data itself is fine. Other programs like OriginLab fit it perfectly, but I wanted to automate the fitting with Python, so I have the code below, which is based on http://mesa.ac.nz/?page_id=1800

The problem I have is that scipy.optimize.leastsq returns the same initial guess parameters I passed to it as the best fit, essentially doing nothing. Here is the code.

# x, y are the arrays with the x, y axes respectively;
# wv, midIdx and minIdx are defined earlier in the script
from scipy.optimize import leastsq

# defining functions
def lorentzian(x, p):
  # p[0]: half-width, p[1]: peak centre, p[2]: peak height
  return p[2]*(p[0]**2)/(( x - (p[1]) )**2 + p[0]**2)

def residuals(p, y, x):
  err = y - lorentzian(x, p)
  return err

p = [0.055, wv[midIdx], y[midIdx-minIdx]]
pbest = leastsq(residuals, p, args=(y, x), full_output=1)
best_parameters = pbest[0]
print(p)
print(pbest)

p holds the initial guesses and best_parameters holds the 'best fit' parameters returned by leastsq, but they are always identical.

This is what is returned with full_output=1 (the long numeric arrays have been shortened but are still representative):

    [0.055, 855.50732, 1327.0]
    (array([  5.50000000e-02,   8.55507324e+02,   1.32700000e+03]), 
    None, {'qtf':array([ 62.05192947,  69.98033905,  57.90628052]), 
    'nfev': 4, 
    'fjac': array([[-0.,  0.,  0., 0.,  0.,  0.,  0.,],
    [ 0., -0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
    0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
    0.,  0.],
    [ 0.,  0., -0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
    0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
    0.,  0.]]), 
    'fvec': array([  62.05192947,   69.98033905,   
    53.41218567,   45.49879837,   49.58242035,   36.66483688,
    34.74443436,   50.82238007,   34.89669037]), 
    'ipvt': array([1, 2, 3])},  
    'The cosine of the angle between func(x) and any column of the\n  Jacobian 
    is at most 0.000000 in absolute value', 4)

Can anyone see what's wrong?


Answer 1:


A quick Google search hints at a problem with the data being single precision (your other programs almost certainly upcast to double precision, though this is explicitly a problem with SciPy as well; see also this bug report). If you look at your full_output=1 result, you see that the Jacobian is approximated as zero everywhere: leastsq estimates it by finite differences, and in float32 the default step size is too small to produce any measurable change in the residuals. So giving the Jacobian explicitly might help (though even then you might want to upcast, because the minimum relative error you can achieve with single precision is just very limited).
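For reference, here is a rough sketch of what supplying the analytic Jacobian through leastsq's Dfun argument could look like for this particular Lorentzian model (the derivatives are worked out by hand for illustration, not taken from the original post):

import numpy as np
from scipy.optimize import leastsq

def jacobian(p, y, x):
    # Derivatives of the residuals (y - lorentzian(x, p)) with respect
    # to the three parameters; d is the shared denominator.
    d = (x - p[1])**2 + p[0]**2
    dL_dp0 = 2.0 * p[0] * p[2] * (x - p[1])**2 / d**2   # d(model)/d(half-width)
    dL_dp1 = 2.0 * p[2] * p[0]**2 * (x - p[1]) / d**2   # d(model)/d(centre)
    dL_dp2 = p[0]**2 / d                                # d(model)/d(height)
    # residual = y - model, so every derivative picks up a minus sign;
    # with the default col_deriv=0, leastsq expects shape (len(x), len(p))
    return -np.column_stack((dL_dp0, dL_dp1, dL_dp2))

pbest = leastsq(residuals, p, args=(y, x), Dfun=jacobian, full_output=1)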

Solution: the easiest and numerically best fix (providing the real Jacobian is a bonus, of course) is to simply cast your x and y data to double precision (for example, x = x.astype(np.float64) will do).
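In code, the fix amounts to nothing more than this (a sketch assuming x and y arrived as float32 NumPy arrays):

import numpy as np

x = x.astype(np.float64)   # upcast the data before fitting
y = y.astype(np.float64)

pbest = leastsq(residuals, p, args=(y, x), full_output=1)
best_parameters = pbest[0]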

I would not suggest this, but you may also be able to fix it by setting the epsfcn keyword argument by hand (and probably the tolerance keyword arguments as well), something along the lines of epsfcn=np.finfo(np.float32).eps. This seems to fix the issue after a fashion, but (since most calculations are with scalars, and scalars do not force an upcast in your calculation) the computation is still done in float32 and the precision loss seems rather large, at least when not providing Dfun.
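For completeness, that workaround would look something like the following; it keeps the data in float32, which is exactly why it is not recommended:

import numpy as np

# A finite-difference step based on float32 machine epsilon is large
# enough to actually change the residuals, unlike the float64 default.
pbest = leastsq(residuals, p, args=(y, x),
                epsfcn=np.finfo(np.float32).eps, full_output=1)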



Source: https://stackoverflow.com/questions/12473406/scipy-optimize-leastsq-returns-best-guess-parameters-not-new-best-fit
