Sklearn SGDClassifier partial_fit

清歌不尽 2020-12-22 19:21

I'm trying to use SGD to classify a large dataset. As the data is too large to fit into memory, I'd like to use the partial_fit method to train the classifier. …

1 Answer
  • 2020-12-22 19:56

    I have finally found the answer: you need to shuffle the training data yourself between iterations, because setting shuffle=True when instantiating the model does NOT shuffle the data when you use partial_fit (it applies only to fit). Note: it would have been helpful to find this information on the sklearn.linear_model.SGDClassifier documentation page.

    The amended code reads as follows:

    from sklearn.linear_model import SGDClassifier
    import numpy
    import random

    def batches(l, n):
        # Yield successive chunks of n indices from l.
        for i in range(0, len(l), n):
            yield l[i:i + n]

    clf2 = SGDClassifier(loss='log')  # shuffle=True is useless here: it only affects fit
    shuffledRange = list(range(len(X)))  # list() so random.shuffle can modify it in place
    n_iter = 5
    for n in range(n_iter):
        random.shuffle(shuffledRange)  # reshuffle the sample order each pass
        shuffledX = [X[i] for i in shuffledRange]
        shuffledY = [Y[i] for i in shuffledRange]
        for batch in batches(list(range(len(shuffledX))), 10000):
            clf2.partial_fit(shuffledX[batch[0]:batch[-1] + 1],
                             shuffledY[batch[0]:batch[-1] + 1],
                             classes=numpy.unique(Y))
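
    A more compact sketch of the same idea, assuming X and Y are NumPy arrays (the names sgd, batch_size, and n_epochs are illustrative, not part of the original answer): numpy.random.permutation draws a fresh shuffled order each epoch, and array slicing replaces the index-list bookkeeping.

    from sklearn.linear_model import SGDClassifier
    import numpy as np

    # Assumes X is an (n_samples, n_features) array and Y its label vector.
    sgd = SGDClassifier(loss='log')  # 'log' was renamed to 'log_loss' in newer scikit-learn
    all_classes = np.unique(Y)       # partial_fit needs the full label set on the first call
    batch_size = 10000
    n_epochs = 5

    for epoch in range(n_epochs):
        perm = np.random.permutation(len(X))  # fresh shuffled order each epoch
        X_shuf, Y_shuf = X[perm], Y[perm]
        for start in range(0, len(X_shuf), batch_size):
            sgd.partial_fit(X_shuf[start:start + batch_size],
                            Y_shuf[start:start + batch_size],
                            classes=all_classes)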
    