GradientBoostingClassifier with a BaseEstimator in scikit-learn?

心不动则不痛 Submitted on 2019-11-30 13:45:06
Santosh

An improved version of iampat's answer, with a slight modification of the scikit-learn developers' answer, should do the trick.

import numpy

class PredictProbaAdapter:
    """Wrap a classifier so predict() returns positive-class probabilities
    as an (n_samples, 1) column, which gradient boosting expects."""
    def __init__(self, est):
        self.est = est
    def fit(self, X, y):
        self.est.fit(X, y)
        return self
    def predict(self, X):
        # Positive-class probability, promoted to a 2-D column vector.
        return self.est.predict_proba(X)[:, 1][:, numpy.newaxis]
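For clarity, the `[:, 1][:, numpy.newaxis]` indexing turns the `(n_samples, 2)` array returned by `predict_proba` into an `(n_samples, 1)` column vector. A minimal sketch with a hypothetical probability array:

```python
import numpy

# Hypothetical predict_proba output for 3 samples and 2 classes.
proba = numpy.array([[0.9, 0.1],
                     [0.3, 0.7],
                     [0.5, 0.5]])

# Take the positive-class column, then promote it to a 2-D column vector.
column = proba[:, 1][:, numpy.newaxis]
print(column.shape)  # (3, 1)
```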

As suggested by the scikit-learn developers, the problem can be solved by using an adapter like this:

class Adaptor:
    def __init__(self, est):
        self.est = est
    def fit(self, X, y):
        self.est.fit(X, y)
        return self
    def predict(self, X):
        # Return a 1-D array of positive-class probabilities.
        return self.est.predict_proba(X)[:, 1]

Here is a complete and, in my opinion, simpler version of iampat's code snippet.

    import numpy
    from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

    class RandomForestClassifier_compatibility(RandomForestClassifier):
        def predict(self, X):
            # Return positive-class probabilities as an (n_samples, 1) column.
            return self.predict_proba(X)[:, 1][:, numpy.newaxis]

    base_estimator = RandomForestClassifier_compatibility()
    classifier = GradientBoostingClassifier(init=base_estimator)

Gradient Boosting generally requires the base learner to be an algorithm that performs numeric prediction, not classification. I assume that is your issue.
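As a sanity check, here is a sketch that exercises this setup on toy data. It assumes a recent scikit-learn release, where the `init` estimator of `GradientBoostingClassifier` only needs to provide `fit` and `predict_proba`, so a plain classifier can be passed as `init` without a `predict` override:

```python
import numpy
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier

# Toy binary-classification data.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# In recent scikit-learn versions the init estimator's predict_proba is used
# for classification, so a classifier can serve as init directly.
clf = GradientBoostingClassifier(
    init=RandomForestClassifier(n_estimators=10, random_state=0),
    n_estimators=20, random_state=0)
clf.fit(X, y)
preds = clf.predict(X)
print(preds.shape)  # (200,)
```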
