Scikit-Learn's Pipeline: A sparse matrix was passed, but dense data is required

Asked by 傲寒 · 2020-12-07 19:04

I'm finding it difficult to understand how to fix a Pipeline I created (read: largely pasted from a tutorial). It's Python 3.4.2:

df = pd.DataFrame
df = Da
5 Answers
  • 2020-12-07 19:26

    Unfortunately those two are incompatible. A CountVectorizer produces a sparse matrix and the RandomForestClassifier requires a dense matrix. It is possible to convert using X.todense(). Doing this will substantially increase your memory footprint.

    Below is sample code to do this based on http://zacstewart.com/2014/08/05/pipelines-of-featureunions-of-pipelines.html which allows you to call .todense() in a pipeline stage.

    from sklearn.base import TransformerMixin

    class DenseTransformer(TransformerMixin):

        def fit(self, X, y=None, **fit_params):
            return self

        def transform(self, X, y=None, **fit_params):
            return X.todense()
    

    Once you have your DenseTransformer, you are able to add it as a pipeline step.

    pipeline = Pipeline([
         ('vectorizer', CountVectorizer()), 
         ('to_dense', DenseTransformer()), 
         ('classifier', RandomForestClassifier())
    ])
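    Putting the pieces together, a runnable sketch of this pipeline on toy data (the texts and labels here are illustrative only; the transformer also inherits from BaseEstimator, which is the idiomatic pattern, and uses .toarray() rather than .todense(), since recent scikit-learn versions reject the np.matrix that .todense() returns):

```python
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import Pipeline

class DenseTransformer(BaseEstimator, TransformerMixin):
    """Pipeline step that converts a sparse matrix to a dense array."""
    def fit(self, X, y=None, **fit_params):
        return self
    def transform(self, X, y=None, **fit_params):
        return X.toarray()  # dense ndarray the classifier can consume

# Toy corpus, purely for illustration
texts = ["good movie", "bad movie", "great film", "awful film"]
labels = [1, 0, 1, 0]

pipeline = Pipeline([
    ('vectorizer', CountVectorizer()),
    ('to_dense', DenseTransformer()),
    ('classifier', RandomForestClassifier(n_estimators=10, random_state=0)),
])
pipeline.fit(texts, labels)
preds = pipeline.predict(["good film", "awful movie"])
```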
    

    Another option would be to use a classifier meant for sparse data like LinearSVC.

    from sklearn.svm import LinearSVC
    pipeline = Pipeline([('vectorizer', CountVectorizer()), ('classifier', LinearSVC())])
    
  • 2020-12-07 19:35

    You can convert a pandas Series to an array using its .values attribute.

    pipeline.fit(df[0].values, df[1].values)
    

    However, I think the issue here is that CountVectorizer() returns a sparse matrix by default, which cannot be piped into the RF classifier. (CountVectorizer() does have a dtype parameter, but it controls the element type of the returned matrix, not its sparsity.) That said, you usually need some sort of dimensionality reduction to use random forests for text classification, because bag-of-words feature vectors are very long.
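    As a sketch of that dimensionality-reduction idea (toy data; n_components=2 is an arbitrary choice for illustration), TruncatedSVD consumes the sparse bag-of-words matrix directly and emits a dense, low-dimensional array the forest can use:

```python
from sklearn.decomposition import TruncatedSVD
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import Pipeline

texts = ["good movie", "bad movie", "great film", "awful film"]
labels = [1, 0, 1, 0]

pipeline = Pipeline([
    ('vectorizer', CountVectorizer()),                       # sparse bag-of-words
    ('svd', TruncatedSVD(n_components=2, random_state=0)),   # dense, low-dimensional
    ('classifier', RandomForestClassifier(n_estimators=10, random_state=0)),
])
pipeline.fit(texts, labels)
preds = pipeline.predict(["good film"])
```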

  • 2020-12-07 19:40

    The most terse solution would be to use a FunctionTransformer to convert to dense: this automatically implements the fit, transform and fit_transform methods, as in David's answer. Additionally, if I don't need special names for my pipeline steps, I like to use the sklearn.pipeline.make_pipeline convenience function, which gives a more minimalist way to describe the model:

    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import FunctionTransformer
    
    pipeline = make_pipeline(
         CountVectorizer(), 
         FunctionTransformer(lambda x: x.todense(), accept_sparse=True), 
         RandomForestClassifier()
    )
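    A runnable version of this approach on toy data (texts and labels are illustrative only; .toarray() is used instead of .todense() because recent scikit-learn versions reject np.matrix input):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer

texts = ["good movie", "bad movie", "great film", "awful film"]
labels = [1, 0, 1, 0]

pipeline = make_pipeline(
    CountVectorizer(),
    # Densify between the vectorizer and the forest
    FunctionTransformer(lambda x: x.toarray(), accept_sparse=True),
    RandomForestClassifier(n_estimators=10, random_state=0),
)
pipeline.fit(texts, labels)
preds = pipeline.predict(["good film"])
```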
    
  • To add TF-IDF weighting to this pipeline, insert a TfidfTransformer step:

            pipelinex = Pipeline([('bow', vectorizer),
                                  ('tfidf', TfidfTransformer()),
                                  ('to_dense', DenseTransformer()),
                                  ('classifier', classifier)])
    
  • 2020-12-07 19:52

    Random forests in scikit-learn 0.16-dev now accept sparse data, so no dense conversion is needed on recent versions.
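    A minimal sketch, assuming scikit-learn >= 0.16: the classifier can be fit on a scipy CSR matrix directly, with no densification step (the data here is a toy example):

```python
from scipy.sparse import csr_matrix
from sklearn.ensemble import RandomForestClassifier

# Tiny sparse feature matrix, illustrative only
X = csr_matrix([[1, 0, 0],
                [0, 1, 0],
                [0, 0, 1],
                [1, 1, 0]])
y = [0, 1, 0, 1]

clf = RandomForestClassifier(n_estimators=10, random_state=0)
clf.fit(X, y)          # sparse input accepted; no .todense() required
preds = clf.predict(X)
```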
