Converting a Series from pandas to PySpark: need to use “groupby” and “size”, but PySpark raises an error

Asked by 轮回少年 on 2021-01-14 01:18

I am converting some code from pandas to PySpark. In pandas, let's imagine I have the following mock DataFrame, df:
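The mock DataFrame did not survive in this copy of the post. Below is a minimal hypothetical stand-in, assuming one grouping column and one value column; the column names group and value are illustrative, not the original poster's:

    import pandas as pd

    # Hypothetical mock data; the original df was not preserved.
    df = pd.DataFrame({
        "group": ["a", "a", "b", "b", "b"],
        "value": [1, 2, 3, 4, 5],
    })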

In pandas, I then define a variable using groupby and size:
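The exact definition is also missing, but given the title it presumably used groupby(...).size(). A sketch of that pattern and its PySpark translation, assuming the hypothetical df above: PySpark's GroupedData has no size() method, so sdf.groupby("group").size() raises AttributeError; the equivalent aggregation is count().

    # pandas: size() returns a Series of row counts per group
    sizes = df.groupby("group").size()

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()
    sdf = spark.createDataFrame(df)

    # PySpark's GroupedData has no .size(); count() is the equivalent.
    # It returns a DataFrame with columns (group, count).
    counts = sdf.groupBy("group").count()
    counts.show()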
