scipy.optimize.linprog unable to find a feasible starting point even though a feasible solution clearly exists

既然无缘 2021-01-05 13:23

linprog reports that it is unable to find a feasible starting point, yet the vector k seems to satisfy all constraints. Is there something I'm missing here? Thanks.

import numpy as np
from scipy.optimize import linprog
A_ub=[[0,          
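
The constraint data is cut off here, but the kind of check behind "k seems to satisfy all constraints" is easy to spell out. A small self-contained sketch with made-up numbers (the real A_ub, b_ub and k from the post are truncated above):

import numpy as np

# Toy stand-ins for the truncated problem data: 2 variables, 2 inequalities.
A_ub = np.array([[1.0, 1.0],
                 [2.0, 0.5]])
b_ub = np.array([4.0, 3.0])
k = np.array([1.0, 1.0])  # candidate point claimed to be feasible

# k is feasible if every inequality holds (within a small tolerance)
# and the default nonnegativity bounds are respected.
print(np.all(A_ub @ k <= b_ub + 1e-9))  # True
print(np.all(k >= 0))                   # True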


        
2 Answers
  •  长情又很酷
    2021-01-05 13:48

    It seems like a tolerance issue.

    I was able to "fix" it by making a local copy of the original linprog source and changing the tolerance (the tol parameter) from 10e-12 to 10e-8 in the "private" function _linprog_simplex.
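
    For reference, with the SciPy versions I have tried, the same loosened tolerance can usually be passed through the simplex solver's options dictionary instead of copying the source. A sketch of that approach, with c, A_ub and b_ub standing in for the problem data from the question:

    from scipy.optimize import linprog

    # Loosen the pivot/feasibility tolerance (and optionally switch to
    # Bland's rule) via options, rather than editing _linprog_simplex.
    # c, A_ub, b_ub are assumed to be the arrays from the question.
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, method='simplex',
                  options={'tol': 1e-8, 'bland': True})
    print(res.status, res.message)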

    This parameter is passed on to the helper function _pivot_col, which reads

    ma = np.ma.masked_where(T[-1, :-1] >= -tol, T[-1, :-1], copy=False)
    if ma.count() == 0:
        return False, np.nan
    if bland:
        return True, np.where(ma.mask == False)[0][0]
    return True, np.ma.where(ma == ma.min())[0][0]
    

    This is why Bland's rule passes the test while the default pivoting rule fails. I then tried to find out whether numpy.ma.masked_where applies any default tolerance of its own. That is not obvious from its implementation, but other numpy masking functions, such as numpy.ma.masked_values, use an absolute tolerance of 1e-8 by default.
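
    To make the effect of tol concrete, here is a small self-contained sketch (made-up numbers, not the real tableau): a reduced cost of -1e-10 is still treated as a pivot candidate with tol=1e-12, but gets masked away with tol=1e-8, so the column search stops.

    import numpy as np

    # Made-up last row of a simplex tableau (the reduced costs);
    # the middle entry is negative, but only by numerical noise.
    row = np.array([0.0, -1e-10, 2.0])

    for tol in (1e-12, 1e-8):
        ma = np.ma.masked_where(row >= -tol, row, copy=False)
        # ma.count() is the number of unmasked entries, i.e. pivot candidates
        print(tol, ma.count())  # 1e-12 -> 1 candidate, 1e-08 -> 0 candidates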

    I hope this helps.

    Here is the result I am getting by changing the tolerance in _linprog_simplex:

    True
    True
    True
      status: 0
       slack: array([  3610.,   6490.,  11840.,      0.,      0.,  14000.,  10100.,
                0.,  10000.,   5000.,  15450.,      0.,  13000.,      0.,
            10000.,   3000.,  11000.,      0.,  12220.,      0.,  10000.])
     success: True
         fun: -2683.6935269049141
           x: array([  1.22573363e+00,   2.00000000e+00,   1.22404780e+00,
             3.71739130e+00,   8.25688073e-02,   2.00000000e+03,
             0.00000000e+00,   0.00000000e+00,   0.00000000e+00,
             0.00000000e+00,   0.00000000e+00,   0.00000000e+00,
             5.00000000e+03,   0.00000000e+00,   0.00000000e+00,
             0.00000000e+00,   0.00000000e+00,   0.00000000e+00,
             0.00000000e+00,   0.00000000e+00,   0.00000000e+00,
             0.00000000e+00,   0.00000000e+00,   2.00000000e+03,
             6.39000000e+03,   0.00000000e+00,   0.00000000e+00,
             0.00000000e+00,   0.00000000e+00,   1.84000000e+03,
             5.00000000e+03,   0.00000000e+00,   1.00000000e+04,
             0.00000000e+00,   0.00000000e+00,   0.00000000e+00,
             0.00000000e+00,   1.00000000e+02,   0.00000000e+00,
             0.00000000e+00,   0.00000000e+00,   0.00000000e+00,
             0.00000000e+00,   0.00000000e+00,   0.00000000e+00,
             5.45000000e+03,   0.00000000e+00,   3.00000000e+03,
             0.00000000e+00,   3.00000000e+03,   0.00000000e+00,
             0.00000000e+00,   0.00000000e+00,   0.00000000e+00,
             0.00000000e+00,   0.00000000e+00,   0.00000000e+00,
             1.00000000e+03])
     message: 'Optimization terminated successfully.'
         nit: 26
    

    PS: I also had to change the line

    from .optimize import OptimizeResult, _check_unknown_options
    

    to

    from scipy.optimize import OptimizeResult
    

    and remove the call to _check_unknown_options on line 533 of the original code.
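
    Put together, a rough sketch of how the patched copy can then be used. The file name my_linprog.py is just an assumption for illustration, and c, A_ub, b_ub stand for the problem data from the question:

    # Assumes the edited copy of the linprog source was saved as
    # my_linprog.py next to this script, with the looser default tol.
    from my_linprog import linprog as patched_linprog

    res = patched_linprog(c, A_ub=A_ub, b_ub=b_ub, method='simplex')
    print(res.success, res.message)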
