efficient Term Document Matrix with NLTK


I am trying to create a term document matrix with NLTK and pandas. I wrote the following function:

def fnDTM_Corpus(xCorpus):
    import pandas as pd
    ...
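
The function body is truncated above. For context, a minimal sketch of an NLTK + pandas term-document matrix builder along these lines (illustrative only, not the original code; it tokenizes naively on whitespace) might look like this:

import pandas as pd
from nltk import FreqDist

def fnDTM_Corpus(xCorpus):
    '''build a term-document matrix from a list of document strings'''
    # naive tokenization: lowercase and split on whitespace
    terms = [[w.lower() for w in doc.split()] for doc in xCorpus]
    # per-document term frequencies (FreqDist is a dict-like counter)
    freqs = [FreqDist(t) for t in terms]
    # one row per term, one column per document; missing terms become 0
    return pd.DataFrame(freqs).fillna(0).T
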
3 Answers

    Thanks to Radim and Larsmans. My objective was to have a DTM like the one you get in R's tm package. I decided to use scikit-learn, partly inspired by this blog entry. This is the code I came up with.

    I post it here in the hope that someone else will find it useful.

    import pandas as pd
    from sklearn.feature_extraction.text import CountVectorizer 
    
    def fn_tdm_df(docs, xColNames = None, **kwargs):
        ''' create a term-document matrix as a pandas DataFrame;
        any **kwargs are passed on to CountVectorizer;
        if xColNames is given, it is used as the column names'''

        # initialize the vectorizer and count the terms
        vectorizer = CountVectorizer(**kwargs)
        x1 = vectorizer.fit_transform(docs)
        # build the DataFrame: one row per term, one column per document
        df = pd.DataFrame(x1.toarray().transpose(), index = vectorizer.get_feature_names())
        if xColNames is not None:
            df.columns = xColNames

        return df
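
    Note that newer scikit-learn releases remove CountVectorizer.get_feature_names() in favour of get_feature_names_out(); on such a version, the df = ... line above would become the drop-in replacement below (same behaviour otherwise):

    df = pd.DataFrame(x1.toarray().transpose(),
                      index = vectorizer.get_feature_names_out())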
    

    To use it on a list of texts in a directory:

    DIR = 'C:/Data/'
    
    def fn_CorpusFromDIR(xDIR):
        ''' function to create a corpus from a directory
        Input: a directory path
        Output: a dictionary with
                 the names of the files ['ColNames']
                 the text of the corpus ['docs']'''
        import os
        files = os.listdir(xDIR)
        Res = dict(docs = [open(os.path.join(xDIR, f)).read() for f in files],
                   ColNames = ['P_' + f[0:6] for f in files])
        return Res
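
    If the files are not plain ASCII, the open() calls may need an explicit encoding; a variant of the reader (hypothetical name, assuming UTF-8 input files) could look like this:

    def fn_CorpusFromDIR_utf8(xDIR):
        '''like fn_CorpusFromDIR, but decodes files as UTF-8 and
        replaces undecodable bytes instead of raising an error'''
        import os
        files = os.listdir(xDIR)
        docs = []
        for f in files:
            with open(os.path.join(xDIR, f), encoding='utf-8', errors='replace') as fh:
                docs.append(fh.read())
        return dict(docs = docs, ColNames = ['P_' + f[0:6] for f in files])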
    

    To create the DataFrame:

    corpus = fn_CorpusFromDIR(DIR)
    d1 = fn_tdm_df(docs = corpus['docs'],
                   xColNames = corpus['ColNames'],
                   stop_words = None, decode_error = 'replace')
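
    To take a quick look at the result (the exact output depends on the files in DIR):

    print(d1.shape)    # (number of terms, number of documents)
    print(d1.head())   # counts of the first few terms across all documents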
    
