Find the root of a multivariable equation using scipy.optimize


Question


I have the following functions

import numpy as np
import scipy.optimize as optimize

def x(theta1, theta2, w, h, L1, L2):
    sint1 = np.sin(theta1)
    cost1 = np.cos(theta1)
    sint2 = np.sin(theta2)
    cost2 = np.cos(theta2)

    i1 = L1 * (cost1 + cost2) + w
    j1 = L1 * (sint1 - sint2) - h
    D = np.sqrt((L1*(cost2-cost1)+w)**2+(L1*(sint2-sint1)+h)**2)
    a = (0.25)*np.sqrt((4*L2**2-D**2)*D**2)

    return i1/2 + 2*j1*a/(D**2)

def y(theta1, theta2, w, h, L1, L2):
    sint1 = np.sin(theta1)
    cost1 = np.cos(theta1)
    sint2 = np.sin(theta2)
    cost2 = np.cos(theta2)

    i2 = L1 * (sint1 + sint2) + h
    j2 = L1 * (cost1 - cost2) - w
    D = np.sqrt((L1*(cost2-cost1)+w)**2+(L1*(sint2-sint1)+h)**2)
    a = (0.25)*np.sqrt((4*L2**2-D**2)*D**2)

    return i2/2 - 2*j2*a/(D**2)

def det_jacobiano(theta, w, h, L1, L2, eps):
    # Finite-difference approximation of the determinant of the Jacobian
    # of (x, y) with respect to (theta1, theta2)
    theta1, theta2 = theta
    dxdt1 = (-x(theta1+eps, theta2, w, h, L1, L2)+4*x(theta1, theta2, w, h, L1, L2)-3*x(theta1-eps, theta2, w, h, L1, L2))/(2*eps)
    dxdt2 = (-x(theta1, theta2+eps, w, h, L1, L2)+4*x(theta1, theta2, w, h, L1, L2)-3*x(theta1, theta2-eps, w, h, L1, L2))/(2*eps)
    dydt1 = (-y(theta1+eps, theta2, w, h, L1, L2)+4*y(theta1, theta2, w, h, L1, L2)-3*y(theta1-eps, theta2, w, h, L1, L2))/(2*eps)
    dydt2 = (-y(theta1, theta2+eps, w, h, L1, L2)+4*y(theta1, theta2, w, h, L1, L2)-3*y(theta1, theta2-eps, w, h, L1, L2))/(2*eps)
    return dxdt1*dydt2 - dxdt2*dydt1

I want to find the values of theta1 and theta2 that make det_jacobiano 0. As you can see, det_jacobiano evaluates the functions x and y.

When I try to use scipy.optimize to find the root

initial_guess = [2.693, 0.4538]
result = optimize.root(det_jacobiano, initial_guess, tol=1e-8, args=(20, 0, 100, 100, 1e-10), method='lm')

I get the error: TypeError: Improper input: N=2 must not exceed M=1


Answer 1:


Root finding is the numerical equivalent of solving a system of equations, and the same basic constraint applies: you need as many equations as you have unknowns.

All of the root-finding routines in scipy expect the first parameter to be a function of N variables that returns N values; in effect, that parameter stands for a system of N equations in N unknowns. Your problem is that det_jacobiano takes 2 variables but returns only one value, which is exactly what the error message says: N=2 unknowns must not exceed M=1 equations.
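
For comparison, here is a minimal sketch (using a made-up two-equation system, unrelated to your mechanism) of the shape optimize.root expects, i.e. a function mapping two unknowns to two residuals:

import numpy as np
import scipy.optimize as optimize

def residuals(p):
    # Hypothetical system: intersection of the unit circle with the line u = v
    u, v = p
    return [u**2 + v**2 - 1.0,   # equation 1
            u - v]               # equation 2

sol = optimize.root(residuals, [0.5, 0.5], method='lm')
print(sol.x)  # roughly [0.7071, 0.7071]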

You can't use root finding methods with your current formulation, but you can still do minimization. Change the last line of det_jacobiano to:

return np.abs(dxdt1*dydt2 - dxdt2*dydt1)

and then use optimize.minimize:

result = optimize.minimize(det_jacobiano, initial_guess, tol=1e-8, args=(20, 0, 100, 100, 1e-10), method='Nelder-Mead')

Output:

 final_simplex: (array([[ 1.47062275, -3.46178428],
       [ 1.47062275, -3.46178428],
       [ 1.47062275, -3.46178428]]), array([ 0.,  0.,  0.]))
           fun: 0.0
       message: 'Optimization terminated successfully.'
          nfev: 330
           nit: 137
        status: 0
       success: True
             x: array([ 1.47062275, -3.46178428])

result.fun holds the final minimized value (which is indeed 0.0, as you wanted), and result.x holds the values of theta1 and theta2 that produced that 0.0.
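
If you would rather keep det_jacobiano returning the signed determinant (for example, because you use it elsewhere), a sketch of an alternative is to wrap it at the call site instead of editing its last line:

# Minimize |det_jacobiano| without modifying the original function
objective = lambda theta, *args: np.abs(det_jacobiano(theta, *args))

result = optimize.minimize(objective, initial_guess, tol=1e-8,
                           args=(20, 0, 100, 100, 1e-10),
                           method='Nelder-Mead')
print(result.fun)  # should be close to 0 if a zero of the determinant was found
print(result.x)    # the corresponding theta1, theta2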



Source: https://stackoverflow.com/questions/49932428/find-the-root-of-a-multivariable-equation-using-scipy-optimize
