Converting series from pandas to pyspark: need to use “groupby” and “size”, but pyspark yields error

Submitted by 不打扰是莪最后的温柔 on 2021-02-17 07:03:31

Question


I am converting some code from pandas to PySpark. In pandas, let's imagine I have the following mock dataframe, df:

And in pandas, I define a certain variable the following way:

value = df.groupby(["Age", "Siblings"]).size()

And the output is a series as follows:

However, when trying to convert this to PySpark, an error comes up: AttributeError: 'GroupedData' object has no attribute 'size'. Can anyone help me solve this?
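
For reference, a minimal pandas sketch with hypothetical Age/Siblings values (not the original poster's data) shows the kind of Series that groupby(...).size() produces:

import pandas as pd

# Hypothetical stand-in data; the original mock dataframe is not reproduced here
df = pd.DataFrame({
    "Age":      [20, 20, 25, 25, 25],
    "Siblings": [1, 1, 0, 0, 2],
})

# size() returns a Series indexed by the group keys, holding the row count per group
value = df.groupby(["Age", "Siblings"]).size()
print(value)
# Age  Siblings
# 20   1           2
# 25   0           2
#      2           1
# dtype: int64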


Answer 1:


The equivalent of size in PySpark is count:

df.groupby(["Age", "Siblings"]).count()
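
A self-contained sketch of that call, using the same hypothetical data and assuming a local SparkSession, might look like this:

from pyspark.sql import SparkSession

# Assumes a local Spark installation; the data below is hypothetical
spark = SparkSession.builder.master("local[*]").getOrCreate()

df = spark.createDataFrame(
    [(20, 1), (20, 1), (25, 0), (25, 0), (25, 2)],
    ["Age", "Siblings"],
)

# count() appends a "count" column holding the number of rows in each group,
# which corresponds to the pandas size() result
df.groupby(["Age", "Siblings"]).count().show()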



Answer 2:


You can also use the agg method, which is more flexible, since it lets you set a column alias or add other types of aggregations:

import pyspark.sql.functions as F

df.groupby('Age', 'Siblings').agg(F.count('*').alias('count'))
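
If the goal is to mirror the ordered pandas Series, the aggregated frame can be sorted by the grouping keys and, when it is small enough, pulled back to the driver; a sketch under those assumptions:

import pyspark.sql.functions as F

# Sort by the grouping keys so the output order matches the pandas Series
result = (
    df.groupby("Age", "Siblings")
      .agg(F.count("*").alias("count"))
      .orderBy("Age", "Siblings")
)
result.show()

# Optional: collect the (assumed small) result back into pandas on the driver
pandas_result = result.toPandas()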


Source: https://stackoverflow.com/questions/65707148/converting-series-from-pandas-to-pyspark-need-to-use-groupby-and-size-but
