Pandas NLTK tokenizing “unhashable type: 'list'”

Posted by 我们两清 on 2021-02-05 07:55:10

Question


Following this example: Twitter data mining with Python and Gephi: Case synthetic biology

The CSV is read into a DataFrame df with two columns, 'Country' and 'Responses':

'Country'
Italy
Italy
France
Germany

'Responses' 
"Loren ipsum..."
"Loren ipsum..."
"Loren ipsum..."
"Loren ipsum..."
I would like to:

  1. tokenize the text in 'Responses'
  2. remove the 100 most common words (based on the Brown corpus)
  3. identify the remaining 100 most frequent words

I can get through steps 1 and 2, but get an error on step 3:

TypeError: unhashable type: 'list'

I believe it's because I'm working in a DataFrame and have made this (likely erroneous) modification:

Original example:

#divide to words
tokenizer = RegexpTokenizer(r'\w+')
words = tokenizer.tokenize(tweets)

My code:

#divide to words
tokenizer = RegexpTokenizer(r'\w+')
df['tokenized_sents'] = df['Responses'].apply(nltk.word_tokenize)

My full code:

df = pd.read_csv('CountryResponses.csv', encoding='utf-8', skiprows=0, error_bad_lines=False)

tokenizer = RegexpTokenizer(r'\w+')
df['tokenized_sents'] = df['Responses'].apply(nltk.word_tokenize)

words =  df['tokenized_sents']

#remove 100 most common words based on Brown corpus
fdist = FreqDist(brown.words())
mostcommon = fdist.most_common(100)
mclist = []
for i in range(len(mostcommon)):
    mclist.append(mostcommon[i][0])
words = [w for w in words if w not in mclist]

Out: ['the',
 ',',
 '.',
 'of',
 'and',
...]

#keep only most common words
fdist = FreqDist(words)
mostcommon = fdist.most_common(100)
mclist = []
for i in range(len(mostcommon)):
    mclist.append(mostcommon[i][0])
words = [w for w in words if w not in mclist]

TypeError: unhashable type: 'list'

There are many questions on unhashable lists, but none that I understand to be quite the same. Any suggestions? Thanks.


TRACEBACK

TypeError                                 Traceback (most recent call last)
<ipython-input-164-a0d17b850b10> in <module>()
  1 #keep only most common words
----> 2 fdist = FreqDist(words)
  3 mostcommon = fdist.most_common(100)
  4 mclist = []
  5 for i in range(len(mostcommon)):

/home/*******/anaconda3/envs/*******/lib/python3.5/site-packages/nltk/probability.py in __init__(self, samples)
    104         :type samples: Sequence
    105         """
--> 106         Counter.__init__(self, samples)
    107 
    108     def N(self):

/home/******/anaconda3/envs/******/lib/python3.5/collections/__init__.py in __init__(*args, **kwds)
    521             raise TypeError('expected at most 1 arguments, got %d' % len(args))
    522         super(Counter, self).__init__()
--> 523         self.update(*args, **kwds)
    524 
    525     def __missing__(self, key):

/home/******/anaconda3/envs/******/lib/python3.5/collections/__init__.py in update(*args, **kwds)
    608                     super(Counter, self).update(iterable) # fast path when counter is empty
    609             else:
--> 610                 _count_elements(self, iterable)
    611         if kwds:
    612             self.update(kwds)

TypeError: unhashable type: 'list'

Answer 1:


The FreqDist function takes an iterable of hashable objects (intended to be strings, but it works with any hashable type). The error you're getting is because you're passing it an iterable of lists, and lists are not hashable. As you suspected, this is because of the change you made:

df['tokenized_sents'] = df['Responses'].apply(nltk.word_tokenize)

If I understand the pandas apply documentation correctly, that line applies nltk.word_tokenize to every element of the 'Responses' Series. word_tokenize returns a list of words, so 'tokenized_sents' ends up as a Series of lists.
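To see the failure in isolation, here is a minimal sketch (toy data, same column names as in the question; it assumes NLTK's 'punkt' tokenizer models are installed) that reproduces the error:

import pandas as pd
import nltk
from nltk import FreqDist

# toy stand-in for the CSV in the question
df = pd.DataFrame({'Responses': ["Lorem ipsum dolor", "Lorem ipsum sit amet"]})

# same transformation as in the question: each row becomes a LIST of tokens
df['tokenized_sents'] = df['Responses'].apply(nltk.word_tokenize)
print(df['tokenized_sents'][0])  # ['Lorem', 'ipsum', 'dolor']

# iterating over the Series yields lists; Counter tries to hash each element
FreqDist(df['tokenized_sents'])  # TypeError: unhashable type: 'list'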

As a solution, flatten the lists into one list of words before applying FreqDist, like so:

allWords = []
for wordList in words:
    allWords += wordList
FreqDist(allWords)
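(The same flattening can also be written with itertools.chain; a purely stylistic alternative, not part of the fix:)

from itertools import chain
allWords = list(chain.from_iterable(words))
FreqDist(allWords)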

Here is a more complete revision that does what you want. And if all you need is to identify the second set of 100 words, note that mclist will already contain them the second time around.

df = pd.read_csv('CountryResponses.csv', encoding='utf-8', skiprows=0, error_bad_lines=False)

tokenizer = RegexpTokenizer(r'\w+')
df['tokenized_sents'] = df['Responses'].apply(nltk.word_tokenize)

lists =  df['tokenized_sents']
words = []
for wordList in lists:
    words += wordList

#remove 100 most common words based on Brown corpus
fdist = FreqDist(brown.words())
mostcommon = fdist.most_common(100)
mclist = []
for i in range(len(mostcommon)):
    mclist.append(mostcommon[i][0])
words = [w for w in words if w not in mclist]

Out: ['the',
 ',',
 '.',
 'of',
 'and',
...]

#keep only most common words
fdist = FreqDist(words)
mostcommon = fdist.most_common(100)
mclist = []
for i in range(len(mostcommon)):
    mclist.append(mostcommon[i][0])
# mclist contains second-most common set of 100 words
words = [w for w in words if w in mclist]
# this will keep ALL occurrences of the words in mclist
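As a side note, each loop that builds mclist above can be collapsed into a list comprehension; a purely stylistic sketch with the same behavior:

mclist = [word for word, count in fdist.most_common(100)]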


Source: https://stackoverflow.com/questions/38666973/pandas-nltk-tokenizing-unhashable-type-list
