I'm trying to use scipy.optimize functions to find a global minimum of a complicated function with several arguments. scipy.optimize.minimize seems to do the job best of all, namely the Nelder-Mead method. However, is there a way to restrict the values the arguments can take during the search?
The argument you are looking for is constraints, which is one of the keyword arguments accepted by scipy.optimize.minimize. Roll your own constraint function that receives the parameters to constrain, like this:
import numpy as np
import scipy.optimize as spo

# A function to define the space where scipy.optimize.minimize should
# confine its search. For an 'eq' constraint the return value must come
# back as 0 for a candidate to be accepted; any other return value
# rejects it as not a valid answer.
def apply_sum_constraint(inputs):
    # Accept only parameter vectors whose components sum to 50.
    return 50.0 - np.sum(inputs)

my_constraints = ({'type': 'eq', 'fun': apply_sum_constraint},)
# f is your objective function, guess the starting point, and a, b, c
# whatever extra arguments f takes. The bounds must be compatible with
# the constraint, so each variable is allowed to range up to 50 here.
result = spo.minimize(f,
                      guess,
                      method='SLSQP',
                      args=(a, b, c),
                      bounds=((0.0, 50.0), (0.0, 50.0)),
                      options={'disp': True},
                      constraints=my_constraints)
The above example asserts that every candidate the solver considers must have parameters adding up to 50. Adapt that function to describe your permissible search space, and scipy.optimize.minimize will waste no energy considering answers outside it.
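The same pattern works for inequality constraints: with 'type': 'ineq' the function must return a non-negative value at acceptable points. A minimal sketch (objective, sum_at_most_50 and x0 are illustrative names, not from the answer above):
import numpy as np
import scipy.optimize as spo

# 'ineq' constraints must return a value >= 0 at acceptable candidates;
# this one requires the parameters to sum to at most 50.
def sum_at_most_50(inputs):
    return 50.0 - np.sum(inputs)

def objective(x):
    # A toy objective standing in for your real function.
    return np.sum(x**2)

x0 = np.array([10.0, 10.0])
res = spo.minimize(objective, x0, method='SLSQP',
                   constraints=({'type': 'ineq', 'fun': sum_at_most_50},))
print(res.x)  # approaches [0, 0], which satisfies the inequality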
The Nelder-Mead solver doesn't support constrained optimization, but there are several others that do.
TNC and L-BFGS-B both support only bound constraints (e.g. x[0] >= 0), which should be fine for your case. COBYLA and SLSQP are more flexible: SLSQP supports any combination of bounds, equality and inequality constraints, while COBYLA handles inequality constraints only.
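For instance, here is a small sketch (the objective and constraints are made up for illustration) combining bounds with one equality and one inequality constraint under SLSQP:
import scipy.optimize as spo

def objective(x):
    return (x[0] - 1.0)**2 + (x[1] - 2.0)**2

cons = ({'type': 'eq',   'fun': lambda x: x[0] + x[1] - 2.0},  # x0 + x1 == 2
        {'type': 'ineq', 'fun': lambda x: x[1] - x[0]})        # x1 >= x0
res = spo.minimize(objective, [0.0, 0.0], method='SLSQP',
                   bounds=((0.0, 2.0), (0.0, 2.0)), constraints=cons)
print(res.x)  # approximately [0.5, 1.5]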
You can find more detailed info about the solvers by looking at the docs for the standalone functions, e.g. scipy.optimize.fmin_slsqp for method='SLSQP'.
You can see my previous answer here for an example of constrained optimization using SLSQP.
The minimize function has a bounds parameter which can be used to restrict the range of each variable when using the L-BFGS-B, TNC, COBYLA or SLSQP methods.
For example,
import scipy.optimize as optimize
fun = lambda x: (x[0] - 1)**2 + (x[1] - 2.5)**2
res = optimize.minimize(fun, (2, 0), method='TNC', tol=1e-10)
print(res.x)
# [ 1. 2.49999999]
bnds = ((0.25, 0.75), (0, 2.0))
res = optimize.minimize(fun, (2, 0), method='TNC', bounds=bnds, tol=1e-10)
print(res.x)
# [ 0.75 2. ]
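Note how the bounded solution lands on the edge of the box at (0.75, 2.0): the unconstrained minimum (1, 2.5) lies outside the allowed region, so the solver settles on the closest feasible point.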
I know this is late in the game, but maybe have a look at mystic. You can apply arbitrary Python functions as penalty functions, or apply bounds constraints, and more… on any optimizer (including the algorithm from scipy.optimize.fmin).
https://github.com/uqfoundation/mystic
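A rough sketch of the penalty-function idea, assuming mystic's quadratic_inequality decorator and the penalty keyword of mystic.solvers.fmin behave as in mystic's examples (the objective and condition below are placeholders; double-check the exact API against the mystic docs):
from mystic.penalty import quadratic_inequality
from mystic.solvers import fmin

def objective(x):
    # Placeholder objective; substitute your complicated function.
    return (x[0] - 1.0)**2 + (x[1] - 2.5)**2

def condition(x):
    # The constraint, written so that condition(x) <= 0 when satisfied.
    return x[0] + x[1] - 50.0

# Decorate a zero baseline with a quadratic cost for violating the
# condition, then hand the resulting penalty to the solver.
@quadratic_inequality(condition)
def penalty(x):
    return 0.0

result = fmin(objective, [0.0, 0.0], penalty=penalty, disp=False)
print(result)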