Question:
I have a Spark Dataframe where each row has a review.
+--------------------+
| reviewText|
+--------------------+
|Spiritually and m...|
|This is one my mu...|
|This book provide...|
|I first read THE ...|
+--------------------+
I have tried:
SplitSentences = df.withColumn("split_sent", sentencesplit_udf(col('reviewText')))
SplitSentences = SplitSentences.select(SplitSentences.split_sent)
Then I created the function:
def word_count(text):
    return len(text.split())

wordcount_udf = udf(lambda x: word_count(x))

df2 = SplitSentences.withColumn("word_count",
    wordcount_udf(col('split_sent')).cast(IntegerType()))
I want to count the words of each sentence in each review (row), but it doesn't work.
Answer 1:
You can use the inbuilt split function to split the sentence into words, and the inbuilt size function to count the length of the resulting array:
from pyspark.sql import functions as F
df.withColumn("word_count", F.size(F.split(df['reviewText'], ' '))).show(truncate=False)
This way you won't need an expensive udf function.
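The size-of-split logic maps directly onto plain-Python string operations; as a minimal sketch outside Spark (the function name word_count here is just illustrative, not part of the answer):

```python
# Plain-Python equivalent of F.size(F.split(col, ' ')):
# split the review on single spaces and count the pieces.
def word_count(review):
    return len(review.split(' '))

print(word_count("this is text testing spliting"))  # 5
```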
As an example, let's say you have the following one-sentence dataframe:
+-----------------------------+
|reviewText |
+-----------------------------+
|this is text testing spliting|
+-----------------------------+
After applying the size and split functions above, you should get:
+-----------------------------+----------+
|reviewText |word_count|
+-----------------------------+----------+
|this is text testing spliting|5 |
+-----------------------------+----------+
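One caveat worth noting (not part of the original answer): splitting on a single literal space counts empty tokens when words are separated by runs of spaces, which inflates the count. Python's no-argument split() collapses whitespace instead, and since the second argument to F.split is a regular expression, a pattern like ' +' achieves the same in Spark:

```python
# Splitting on a literal space keeps the empty strings between
# consecutive spaces, so the token count is inflated.
print(len("this  has  extra  spaces".split(' ')))  # 7, not 4

# split() with no argument collapses runs of whitespace.
print(len("this  has  extra  spaces".split()))  # 4
```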
If you have multiple sentences in one row, as below,
+----------------------------------------------------------------------------------+
|reviewText |
+----------------------------------------------------------------------------------+
|this is text testing spliting. this is second sentence. And this is the third one.|
+----------------------------------------------------------------------------------+
then you will have to write a udf function, as below:
from pyspark.sql import functions as F

def countWordsInEachSentences(array):
    return [len(x.split()) for x in array]

countWordsSentences = F.udf(lambda x: countWordsInEachSentences(x.split('. ')))

df.withColumn("word_count", countWordsSentences(df['reviewText'])).show(truncate=False)
which should give you
+----------------------------------------------------------------------------------+----------+
|reviewText |word_count|
+----------------------------------------------------------------------------------+----------+
|this is text testing spliting. this is second sentence. And this is the third one.|[5, 4, 6] |
+----------------------------------------------------------------------------------+----------+
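The udf's body can be checked outside Spark with the same plain-Python operations, assuming sentences are separated by '. ' as in the example (the helper name below is just illustrative):

```python
# Plain-Python equivalent of the udf body: split the review into
# sentences on '. ', then count the words in each sentence.
def count_words_in_each_sentence(review):
    return [len(sentence.split()) for sentence in review.split('. ')]

text = "this is text testing spliting. this is second sentence. And this is the third one."
print(count_words_in_each_sentence(text))  # [5, 4, 6]
```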
I hope the answer is helpful.
Source: https://stackoverflow.com/questions/49267331/count-number-of-words-in-each-sentence-spark-dataframes