Making Random Forest outputs like Logistic Regression

Submitted by 拥有回忆 on 2019-12-11 10:13:26

Question


This is mainly a question about dimensions. I am trying to reproduce this excellent notebook, but with a random forest: https://www.kaggle.com/allunia/how-to-attack-a-machine-learning-model/notebook

Both the logistic regression and the random forest come from sklearn, but the weights I get from the random forest model have shape (784,), while logistic regression returns (10, 784).

My problems are mainly dimension mismatches and "NaN, infinity or a value too large for dtype" errors in the attack methods. The weights from logistic regression have shape (10, 784), but with the random forest they are (784,); maybe this is what causes the problem? Or can you suggest some modifications to the attack methods? I tried an Imputer for the NaN errors, but it asked me to reshape, which is how I ended up with the code below. I also tried wrapping things in np.mat for the dimension errors, but that didn't work either.
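For context, the shape difference is easy to reproduce: sklearn's `LogisticRegression` exposes one coefficient row per class via `coef_`, while `RandomForestClassifier` only exposes a single unsigned importance per feature via `feature_importances_`. A minimal sketch (random stand-in data instead of MNIST):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

# Tiny stand-in for the 28x28 MNIST digits used in the notebook:
# 100 samples, 784 features, labels covering all 10 classes.
rng = np.random.default_rng(0)
X = rng.random((100, 784))
y = np.arange(100) % 10

log_reg = LogisticRegression(max_iter=50).fit(X, y)
forest = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)

print(log_reg.coef_.shape)                # (10, 784): one weight row per class
print(forest.feature_importances_.shape)  # (784,): one importance per feature
```

So any attack code that indexes `w[k]` to get the weight row for class `k` works with the logistic-regression coefficients but not with the forest's flat importance vector.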

# imp is a fitted sklearn Imputer/SimpleImputer instance;
# calc_output_weighted_weights comes from the linked notebook.
def non_targeted_gradient(target, output, w):
    target = target.reshape(1, -1)   # reshape demanded by the imputer
    output = output.reshape(1, -1)
    w = w.reshape(1, -1)
    target = imp.fit_transform(target)
    output = imp.fit_transform(output)
    w = imp.fit_transform(w)
    ww = calc_output_weighted_weights(output, w)
    for k in range(len(target)):     # note: after the reshape, len(target) is 1
        if k == 0:
            gradient = np.mat(1 - target[k]) * np.mat(w[k] - ww)
        else:
            gradient += np.mat(1 - target[k]) * np.mat(w[k] - ww)
    return gradient
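For comparison, here is a shape-consistent numpy version of the same loop, with no `np.mat` and no imputer. The `calc_output_weighted_weights` body is my assumption of what the notebook computes (an output-weighted sum of the per-class weight rows), so treat it as a sketch rather than the notebook's exact code:

```python
import numpy as np

def calc_output_weighted_weights(output, w):
    # Assumed implementation, following the notebook's idea:
    # sum_k output[k] * w[k], giving one (n_features,) vector.
    return np.sum(output[:, None] * w, axis=0)

def non_targeted_gradient(target, output, w):
    # target, output: (n_classes,); w: (n_classes, n_features)
    ww = calc_output_weighted_weights(output, w)
    # Same accumulation as the original loop, vectorized:
    # sum_k (1 - target[k]) * (w[k] - ww)
    return np.sum((1 - target)[:, None] * (w - ww), axis=0)

w = np.random.random((10, 784))
target = np.eye(10)[3]          # one-hot label for class 3
output = np.full(10, 0.1)       # dummy predicted probabilities
grad = non_targeted_gradient(target, output, w)
print(grad.shape)  # (784,)
```

This only works if `w` really is (n_classes, n_features), which is exactly what the random forest does not give you.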

I'm probably doing lots of things wrong, but the TL;DR is that I'm trying to use a random forest instead of logistic regression in the notebook linked above.

Edit:

I've added a wrapper for RandomForestClassifier:

class RandomForestWrapper(RandomForestClassifier):
    def fit(self, *args, **kwargs):
        # the super() call must name this class, not RandomForestClassifierWithCoef
        super(RandomForestWrapper, self).fit(*args, **kwargs)
        self.coef_ = self.feature_importances_
        return self
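Even with the wrapper, `coef_` is still (784,). One possible workaround (my own suggestion, not from the notebook) is to tile the importance vector once per class so code expecting a (10, 784) array keeps running, with the caveat that every row is identical and unsigned, so the "gradient" loses all class direction:

```python
import numpy as np

def importances_to_per_class_weights(importances, n_classes=10):
    # Repeat the single (784,) importance vector once per class so that
    # downstream code expecting (n_classes, n_features) keeps working.
    # Caveat: unlike logistic-regression coefficients, these rows are
    # identical and non-negative, so they carry no per-class sign.
    return np.tile(importances, (n_classes, 1))

w = importances_to_per_class_weights(np.random.random(784))
print(w.shape)  # (10, 784)
```

A more faithful alternative would be to train one forest per class (one-vs-rest) and stack their importance vectors, but feature importances are still not gradients, which is likely the deeper reason the attack methods misbehave.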

Source: https://stackoverflow.com/questions/53697980/making-random-forest-outputs-like-logistic-regression
