Count number of words in each sentence in Spark DataFrames


Question


I have a Spark Dataframe where each row has a review.

+--------------------+
|          reviewText| 
+--------------------+
|Spiritually and m...|
|This is one my mu...|
|This book provide...|
|I first read THE ...|
+--------------------+

I have tried:

SplitSentences = df.withColumn("split_sent", sentencesplit_udf(col('reviewText')))
SplitSentences = SplitSentences.select(SplitSentences.split_sent)

Then I created the function:

def word_count(text):
    return len(text.split())

wordcount_udf = udf(lambda x: word_count(x))

df2 = SplitSentences.withColumn("word_count",
  wordcount_udf(col('split_sent')).cast(IntegerType()))

I want to count the words of each sentence in each review (row), but it doesn't work.
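For context, sentencesplit_udf is not defined in the question; a minimal sketch of what such a UDF might look like (the naive split on '. ' is an assumption, not part of the original question) is:

from pyspark.sql.functions import udf, col
from pyspark.sql.types import ArrayType, StringType

# Hypothetical sentence splitter, assumed for illustration:
# naively splits a review into sentences on ". ".
def sentence_split(text):
    return text.split('. ') if text else []

sentencesplit_udf = udf(sentence_split, ArrayType(StringType()))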


Answer 1:


You can use the inbuilt split function to split the text on spaces and the inbuilt size function to count the length of the resulting array:

df.withColumn("word_count", F.size(F.split(df['reviewText'], ' '))).show(truncate=False)

This way you won't need an expensive udf function.

As an example, let's say you have the following single-sentence dataframe:

+-----------------------------+
|reviewText                   |
+-----------------------------+
|this is text testing spliting|
+-----------------------------+

After applying the size and split functions above, you should get:

+-----------------------------+----------+
|reviewText                   |word_count|
+-----------------------------+----------+
|this is text testing spliting|5         |
+-----------------------------+----------+
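For reference, a minimal end-to-end sketch of this approach (the local SparkSession and the sample data are assumptions added for illustration, not part of the original answer):

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Assumed local session and sample row, for illustration only.
spark = SparkSession.builder.master("local[*]").appName("word-count-demo").getOrCreate()
df = spark.createDataFrame([("this is text testing spliting",)], ["reviewText"])

# Split each review on spaces and count the resulting tokens.
df.withColumn("word_count", F.size(F.split(df["reviewText"], " "))).show(truncate=False)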

If you have multiple sentences in one row, as below:

+----------------------------------------------------------------------------------+
|reviewText                                                                        |
+----------------------------------------------------------------------------------+
|this is text testing spliting. this is second sentence. And this is the third one.|
+----------------------------------------------------------------------------------+

Then you will have to write a udf function, as below:

from pyspark.sql import functions as F

# Count the words in each sentence of an already-split review.
def countWordsInEachSentences(array):
    return [len(x.split()) for x in array]

# Split the review into sentences on ". ", then count words per sentence.
# Without an explicit return type, the udf returns the list as a string.
countWordsSentences = F.udf(lambda x: countWordsInEachSentences(x.split('. ')))

df.withColumn("word_count", countWordsSentences(df['reviewText'])).show(truncate=False)

which should give you

+----------------------------------------------------------------------------------+----------+
|reviewText                                                                        |word_count|
+----------------------------------------------------------------------------------+----------+
|this is text testing spliting. this is second sentence. And this is the third one.|[5, 4, 6] |
+----------------------------------------------------------------------------------+----------+
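As a side note, here is a minimal sketch of an alternative that avoids the Python udf entirely; it assumes Spark 3.1+ (where pyspark.sql.functions.transform is available) and the same naive split on '. ':

from pyspark.sql import functions as F

# Split the review into sentences on ". ", then count the words in each
# sentence with a higher-order transform, keeping everything in the JVM.
df.withColumn(
    "word_count",
    F.transform(
        F.split(F.col("reviewText"), r"\. "),
        lambda s: F.size(F.split(s, " ")),
    ),
).show(truncate=False)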

I hope the answer is helpful.



Source: https://stackoverflow.com/questions/49267331/count-number-of-words-in-each-sentence-spark-dataframes
