How to use warm_start

栀梦 2020-12-25 15:51

I'd like to use the warm_start parameter to add training data to my random forest classifier. I expected it to be used like this:

clf = RandomForestClassifier(warm_start=True)
clf.fit(get_data())
clf.fit(get_more_data())
4 answers
  •  旧时难觅i
    2020-12-25 15:55

    The basic pattern (taken from Miriam's answer):

    clf = RandomForestClassifier(warm_start=True)
    clf.fit(get_data())
    clf.fit(get_more_data())
    

    would be the correct usage API-wise.

    But there is an issue here.

    As the docs say:

    When set to True, reuse the solution of the previous call to fit and add more estimators to the ensemble, otherwise, just fit a whole new forest.

    it means that the only thing warm_start can do for you is add new DecisionTrees. All the previous trees are left untouched!
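    A quick way to see this for yourself (a sketch, assuming a recent scikit-learn and synthetic data from make_classification): keep a reference to one of the fitted trees, raise n_estimators, refit, and check that the old tree object is still there unchanged.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, random_state=0)

clf = RandomForestClassifier(n_estimators=5, warm_start=True, random_state=0)
clf.fit(X, y)
first_tree = clf.estimators_[0]   # remember one of the original trees

clf.n_estimators = 10             # ask for 5 additional trees
clf.fit(X, y)

assert len(clf.estimators_) == 10
assert clf.estimators_[0] is first_tree   # the old tree was reused, not refit
```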

    Let's check this with some sources:

    n_more_estimators = self.n_estimators - len(self.estimators_)

    if n_more_estimators < 0:
        raise ValueError('n_estimators=%d must be larger or equal to '
                         'len(estimators_)=%d when warm_start==True'
                         % (self.n_estimators, len(self.estimators_)))

    elif n_more_estimators == 0:
        warn("Warm-start fitting without increasing n_estimators does not "
             "fit new trees.")

    This basically tells us that you would need to increase the number of estimators before calling fit again!

    I have no idea what kind of usage sklearn expects here. I'm not sure if fitting, increasing internal variables and fitting again is correct usage, but I somehow doubt it (mutating n_estimators between fit calls feels awkward, even though it is a public parameter that can be changed via set_params).
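    For illustration only, that flow would look something like the following sketch (set_params is the public way to change n_estimators; note the second fit grows the forest on whatever data it is given, so the new trees never see the first batch):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# two synthetic "batches" standing in for get_data() / get_more_data()
X1, y1 = make_classification(n_samples=100, random_state=0)
X2, y2 = make_classification(n_samples=100, random_state=1)

clf = RandomForestClassifier(n_estimators=10, warm_start=True, random_state=0)
clf.fit(X1, y1)

clf.set_params(n_estimators=20)   # request 10 more trees before refitting
clf.fit(X2, y2)                   # the 10 new trees are trained on X2/y2 only

assert len(clf.estimators_) == 20
```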

    Your basic approach (with regard to this library and this classifier) is probably not a good fit for your out-of-core learning here! I would not pursue this further.
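    If out-of-core learning is the actual goal, scikit-learn's supported route is an estimator with partial_fit rather than a warm-started forest. A minimal sketch with SGDClassifier (the chunking here is just simulated in memory):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=1000, random_state=0)
classes = np.unique(y)   # all classes must be declared on the first call

clf = SGDClassifier(random_state=0)
for start in range(0, len(X), 200):        # simulate chunks arriving over time
    X_chunk = X[start:start + 200]
    y_chunk = y[start:start + 200]
    clf.partial_fit(X_chunk, y_chunk, classes=classes)

assert len(clf.classes_) == 2              # the model has seen both classes
```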
