How to find the distribution of a column in PySpark dataframe for all the unique values present in that column?


I have a PySpark DataFrame:

df = spark.createDataFrame([
    ("u1", 0),
    ("u2", 0),
    ("u3", 1),
    ("u4", 2),
], ["user_id", "label"])  # column names assumed

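One way to get the per-value distribution (a minimal sketch, assuming the column names `user_id` and `label` used above): group by the column, count each unique value, and divide by the total row count to obtain fractions.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame(
    [("u1", 0), ("u2", 0), ("u3", 1), ("u4", 2)],
    ["user_id", "label"],  # column names assumed for illustration
)

total = df.count()

# Count each unique value in "label" and compute its share of all rows.
dist = (
    df.groupBy("label")
      .count()
      .withColumn("fraction", F.col("count") / total)
      .orderBy("label")
)

dist.show()
# Expected: label 0 -> count 2, fraction 0.5; labels 1 and 2 -> count 1, fraction 0.25 each.

If only raw counts are needed, `df.groupBy("label").count()` alone is enough; the `fraction` column simply normalizes each count by the total number of rows.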