Product feature optimization with constraints

Submitted by 心已入冬 on 2021-01-07 04:12:56

Question


I have trained a LightGBM model on a learning-to-rank dataset. The model predicts the relevance score of a sample, so the higher the prediction, the better. Now that the model is trained, I would like to find the values of some features that give me the highest prediction score.

So, let's say I have features u, v, w, x, y, z, and the features I would like to optimize over are x, y, z.

maximize f(u,v,w,x,y,z) w.r.t. features x,y,z, where f is a LightGBM model
subject to the constraints:
y = Ax + b
z = 4 if y < thresh_a else (4 - 0.5 if y >= thresh_b else 4 - 0.3)
thresh_m < x <= thresh_n

The numbers are randomly made up, but the constraints are linear.
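As a standalone sketch (the threshold names come from the question above; all numeric values are placeholders), the constraints can be written as plain Python:

```python
def constrained_y(x, A=5.0, b=2.0):
    # y is an affine function of x: y = A*x + b (A and b are placeholders)
    return A * x + b


def constrained_z(y, thresh_a=5.0, thresh_b=10.0):
    # piecewise rule from the question, with placeholder outputs
    if y < thresh_a:
        return 4.0
    if y >= thresh_b:
        return 4.0 - 0.5
    return 4.0 - 0.3


print(constrained_z(3.0))   # -> 4.0
print(constrained_z(12.0))  # -> 3.5
```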

The objective function with respect to x is very spiky and non-smooth (the original post includes a plot of it). I also don't have gradient information, since f is a LightGBM model.

Using Nathan's answer, I wrote the following class:

import numpy as np
from scipy.optimize import minimize


class ProductOptimization:

    def __init__(self, estimator, features_to_change, row_fixed_values,
                 bnds=None):
        self.estimator = estimator
        self.features_to_change = features_to_change
        self.row_fixed_values = row_fixed_values  # one-row pandas DataFrame
        self.bounds = bnds
                    
    def get_sample(self, x):
        
        new_values = {k:v for k,v in zip(self.features_to_change, x)}
        
        return self.row_fixed_values.replace({k:{self.row_fixed_values[k].iloc[0]:v} 
                                  for k,v in new_values.items()})
    
    def _call_model(self, x):
        
        pred = self.estimator.predict(self.get_sample(x))
        return pred[0]
    
    def constraint1(self, vector):
        x = vector[0]
        y = vector[2]
        return # some float value
    
    def constraint2(self, vector):
        x = vector[0]
        y = vector[3] 
        return #some float value
    
    def optimize_slsqp(self, initial_values):
        
        con1 = {'type': 'eq', 'fun': self.constraint1}
        con2 = {'type': 'eq', 'fun': self.constraint2}
        cons = ([con1,con2])

        
        result = minimize(fun=self._call_model,
                          x0=np.array(initial_values),
                          method='SLSQP',
                          bounds=self.bounds,
                          constraints=cons)
        return result
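As a side note, the `replace` pattern inside `get_sample` can be tried in isolation (a minimal sketch with made-up column names and values): for each feature being optimized, it maps the row's current value to the candidate value proposed by the optimizer.

```python
import pandas as pd

# a single fixed sample, standing in for row_fixed_values (made-up values)
row = pd.DataFrame({'u': [1.0], 'x': [4.5], 'y': [24.5]})

# candidate values proposed by the optimizer for the optimized features
new_values = {'x': 5.0, 'y': 27.0}

# same pattern as get_sample: map each column's current value to the new one
new_row = row.replace({k: {row[k].iloc[0]: v} for k, v in new_values.items()})
print(new_row)  # u stays 1.0, x becomes 5.0, y becomes 27.0
```

One caveat of replacing by value: if the old value appeared more than once in a column, every occurrence would be replaced; that is harmless here because the row has a single entry per column.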

The results that I get are always around the initial guess, and I think that's because of the non-smoothness of the function and the absence of gradient information, which is important for the SLSQP optimizer. Any advice on how I should deal with this kind of problem?
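A tiny standalone experiment illustrates the stall (a made-up step function stands in for the piecewise-constant prediction surface of a tree model): the finite-difference gradient SLSQP estimates is zero almost everywhere, so it declares the initial guess optimal.

```python
import numpy as np
from scipy.optimize import minimize


def step_objective(x):
    # piecewise-constant surface, like a tree ensemble's predictions;
    # minimizing -floor(x) would mean maximizing floor(x)
    return -float(np.floor(x[0]))


result = minimize(step_objective, x0=np.array([2.5]), method='SLSQP',
                  bounds=[(0.0, 10.0)])
print(result.x)  # stays at the initial guess, 2.5
```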


Answer 1:


It's been a good minute since I last wrote some serious code, so I apologize if it's not entirely clear what everything does; please feel free to ask for more explanation.

The imports:

from sklearn.ensemble import GradientBoostingRegressor
import numpy as np
from scipy.optimize import minimize
from copy import copy

First I define a new class that allows me to easily redefine values. This class has 5 inputs:

  1. value: this is the 'base' value. In your equation y=Ax + b it's the b part
  2. minimum: this is the minimum value this type will evaluate as
  3. maximum: this is the maximum value this type will evaluate as
  4. multipliers: the first tricky one. It's a list of pairs; in each pair the first element is another InputType object and the second is its multiplier. In your example y = Ax + b you would have [[x, A]]; if the equation were y = Ax + Bz + Cd it would be [[x, A], [z, B], [d, C]]
  5. relations: the most tricky one. It's also a list, and each item has four elements: the first is another InputType object; the second is the builtin min (for an upper boundary) or max (for a lower boundary); the third is the threshold value of the boundary; and the fourth is the output value returned when the boundary is crossed

Watch out: if you define your input values too strangely, I'm sure there's weird behaviour.

class InputType:

    def __init__(self, value=0, minimum=-1e99, maximum=1e99, multipliers=None, relations=None):
        """

        :param float value: base value
        :param float minimum: the value can never evaluate below this
        :param float maximum: the value can never evaluate above this
        :param multipliers: [[InputType, multiplier], [InputType, multiplier]]
        :param relations: [[InputType, min, threshold, output_value], [InputType, max, threshold, output_value]]
        """
        self.val = value
        self.min = minimum
        self.max = maximum
        # use None defaults to avoid sharing mutable lists between instances
        self.multipliers = multipliers if multipliers is not None else []
        self.relations = relations if relations is not None else []

    def reset_val(self, value):
        self.val = value

    def evaluate(self):
        """
        - relations to other variables are done first if there are none then the rest is evaluated

        - at most self.max
        - at least self.min
        - self.val + i_x * w_x
        i_x is input i, w_x is multiplier (weight) of i
        """
        for term, min_max, value, output_value in self.relations:
            # check for each term if it falls outside of the expected terms
            if min_max(term.evaluate(), value) != term.evaluate():
                return self.return_value(output_value)

        output_value = self.val + sum([i[0].evaluate() * i[1] for i in self.multipliers])
        return self.return_value(output_value)

    def return_value(self, output_value):
        return min(self.max, max(self.min, output_value))
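To make the relations check concrete: evaluate passes the builtin min or max in as the comparator, and a relation fires exactly when applying the comparator moves the value, i.e. when the term is on the wrong side of the threshold. A standalone sketch of just that check (not part of the original answer):

```python
def relation_triggered(comparator, val, threshold):
    # mirrors the test in InputType.evaluate:
    # the relation fires when the comparator changes the value
    return comparator(val, threshold) != val


# `max` as comparator: fires when val is below the threshold (lower boundary)
print(relation_triggered(max, 3.0, 5.0))   # True
print(relation_triggered(max, 7.0, 5.0))   # False
# `min` as comparator: fires when val is above the threshold (upper boundary)
print(relation_triggered(min, 12.0, 10.0)) # True
```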

Using this, you can fix the input types sent from the optimizer, as shown in _call_model:

class Example:

    def __init__(self, lst_args):
        self.lst_args = lst_args

        self.X = np.random.random((10000, len(lst_args)))
        self.y = self.get_y()
        self.clf = GradientBoostingRegressor()
        self.fit()

    def get_y(self):
        # sum of squares, is minimum at x = [0, 0, 0, 0, 0 ... ]
        return np.array([self._func(i) for i in self.X])

    def _func(self, i):
        return sum(i * i)

    def fit(self):
        self.clf.fit(self.X, self.y)

    def optimize(self):
        x0 = [0.5 for i in self.lst_args]
        initial_simplex = self._get_simplex(x0, 0.1)
        result = minimize(fun=self._call_model,
                          x0=np.array(x0),
                          method='Nelder-Mead',
                          options={'xatol': 0.1,
                                   'initial_simplex': np.array(initial_simplex)})
        return result

    def _get_simplex(self, x0, step):
        simplex = []
        for i in range(len(x0)):
            point = copy(x0)
            point[i] -= step
            simplex.append(point)

        point2 = copy(x0)
        point2[-1] += step
        simplex.append(point2)
        return simplex

    def _call_model(self, x):
        print(x, type(x))  # debug output: the candidate point from the optimizer
        for i, value in enumerate(x):
            self.lst_args[i].reset_val(value)

        input_x = np.array([i.evaluate() for i in self.lst_args])
        prediction = self.clf.predict([input_x])
        return prediction[0]
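The Nelder-Mead setup in optimize can be exercised on its own with a simple stand-in objective (a sketch: a plain sum of squares replaces the fitted model, and the initial simplex mirrors what _get_simplex builds):

```python
import numpy as np
from scipy.optimize import minimize


def objective(x):
    # smooth stand-in for the model prediction; minimum at the origin
    return float(np.sum(x * x))


x0 = np.array([0.5, 0.5, 0.5])
# n + 1 simplex points around x0: one per coordinate shifted down by 0.1,
# plus one extra point shifted up in the last coordinate
simplex = [x0 - 0.1 * np.eye(3)[i] for i in range(3)]
simplex.append(x0 + np.array([0.0, 0.0, 0.1]))

result = minimize(objective, x0=x0, method='Nelder-Mead',
                  options={'xatol': 0.1, 'initial_simplex': np.array(simplex)})
print(result.x)  # close to [0, 0, 0]
```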

I can define your problem as shown below (be sure to define the inputs in the same order as the final list, otherwise not all the values will get updated correctly in the optimizer!):

A = 5
b = 2
thresh_a = 5
thresh_b = 10
thresh_c = 10.1
thresh_m = 4
thresh_n = 6

u = InputType()
v = InputType()
w = InputType()
x = InputType(minimum=thresh_m, maximum=thresh_n)
y = InputType(value=b, multipliers=[[x, A]])
z = InputType(relations=[[y, max, thresh_a, 4], [y, min, thresh_b, 3.5], [y, max, thresh_c, 3.7]])


example = Example([u, v, w, x, y, z])

Calling the results:

result = example.optimize()
for i, value in enumerate(result.x):
    example.lst_args[i].reset_val(value)
print(f"final values are at: {[i.evaluate() for i in example.lst_args]}: {result.fun}")


Source: https://stackoverflow.com/questions/65382539/product-feature-optimization-with-constraints
