LinearSVC Feature Selection returns different coef_ in Python

Submitted by 自闭症网瘾萝莉.ら on 2021-01-29 10:19:01

Question


I'm using SelectFromModel with a LinearSVC on a training data set. The training and testing sets were already split and saved in separate files. When I fit the LinearSVC on the training set, I get coefficients in coef_[0] from which I try to find the most important features. When I rerun the script, I get different coef_[0] values, even though it runs on the same training data. Why is this the case?

See below for a snippet of the code (maybe there's a bug I don't see):

import numpy as np
import matplotlib.pyplot as plt
from sklearn.svm import LinearSVC
from sklearn.feature_selection import SelectFromModel

fig = plt.figure()

# Fit an L1-penalized LinearSVC and take the coefficients of the first class
lsvc = LinearSVC(C=.01, penalty="l1", dual=False).fit(X_train, Y_train.values.ravel())
X_trainPro = SelectFromModel(lsvc, prefit=True)  # note: a selector object, never applied below
sscores = lsvc.coef_[0]
print(sscores)
ax = fig.add_subplot(1, 1, 1)

# Normalize the coefficient magnitudes so they sum to 1
sscores = np.abs(sscores)
sscores = sscores / sscores.sum()

# Count how many top-weighted features are needed to reach 90% of the total weight
stemp = sscores.copy()
total_weight = 0
feature_numbers = 0
while total_weight <= .9:
    total_weight += stemp.max()
    stemp[np.argmax(stemp)] = 0
    feature_numbers += 1

print(total_weight, feature_numbers)

# Rank every feature name by descending score
stemp = sscores.copy()
sfeaturenames = np.array([])
orderScore = np.array([])
for i in range(len(sscores)):
    top = np.argmax(stemp)
    sfeaturenames = np.append(sfeaturenames, X_train.columns[top])
    orderScore = np.append(orderScore, stemp[top])
    stemp[top] = -1

# Colour the features above the 90% cutoff green, the rest blue
lowscore = orderScore[feature_numbers]
smask1 = orderScore <= lowscore
smask2 = orderScore > lowscore
ax.bar(sfeaturenames[smask2], orderScore[smask2], align="center", color="green")
ax.bar(sfeaturenames[smask1], orderScore[smask1], align="center", color="blue")
ax.set_title("SelectFromModel")
ax.tick_params(labelrotation=90)

plt.subplots_adjust(hspace=2, bottom=.2, top= .85)
plt.show()

# Selection of the top-ranked features to keep in the train and test sets
Top_Rank = np.array([])
scores = sscores.copy()  # copy, otherwise sscores would be zeroed in place below

for i in range(feature_numbers):
    top = np.argmax(scores)
    Top_Rank = np.append(Top_Rank, X_train.columns[top])
    scores[top] = 0
print(Top_Rank)
X_train = X_train[Top_Rank]
X_test = X_test[Top_Rank]
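
As an aside, the SelectFromModel object built above is never actually applied. Here is a minimal sketch of how it would normally be used to pick the features directly, assuming the lsvc, X_train, and X_test from the snippet above:

from sklearn.feature_selection import SelectFromModel

# SelectFromModel thresholds |coef_| itself; get_support() returns a boolean mask
selector = SelectFromModel(lsvc, prefit=True)
selected = X_train.columns[selector.get_support()]  # names of the kept features
print(selected)

X_train_sel = X_train[selected]  # indexing by name keeps the pandas column labels
X_test_sel = X_test[selected]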

Answer 1:


Since you set dual=False, the solver is deterministic, so you should be getting the same coefficients on every run. What is your scikit-learn version?

Run this and check if you get the same output:

from sklearn.svm import LinearSVC
from sklearn.datasets import make_classification

X, y = make_classification(n_features=4, random_state=0)
for i in range(10):
    lsvc = LinearSVC(C=.01, penalty="l1", dual=False).fit(X, y)
    sscores = lsvc.coef_[0]
    print(sscores)

The output should be exactly the same.

[0.         0.         0.27073732 0.        ]
[0.         0.         0.27073732 0.        ]
[0.         0.         0.27073732 0.        ]
[0.         0.         0.27073732 0.        ]
[0.         0.         0.27073732 0.        ]
[0.         0.         0.27073732 0.        ]
[0.         0.         0.27073732 0.        ]
[0.         0.         0.27073732 0.        ]
[0.         0.         0.27073732 0.        ]
[0.         0.         0.27073732 0.        ]
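
If you ever use a configuration where the solver is stochastic (for example dual=True, which with liblinear requires the default l2 penalty, since l1 is not supported in the dual), a minimal sketch for reproducible runs is to pin random_state, reusing the X, y from the snippet above:

from sklearn.svm import LinearSVC

# With dual=True liblinear shuffles the data, so fix the seed for repeatable coef_
lsvc = LinearSVC(C=.01, penalty="l2", dual=True, random_state=0).fit(X, y)
print(lsvc.coef_[0])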


Source: https://stackoverflow.com/questions/58404743/linearsvc-feature-selection-returns-different-coef-in-python
