PySpark groupByKey returning pyspark.resultiterable.ResultIterable

不思量自难忘° 2021-01-30 16:24

I am trying to figure out why my groupByKey is returning the following:

    [(0, <pyspark.resultiterable.ResultIterable object at 0x...>), (1, <pyspark.resultiterable.ResultIterable object at 0x...>)]

        
6 Answers
  •  自闭症患者
    2021-01-30 16:47

    Say your code is:

    ex2 = ex1.groupByKey()
    

    And then you run:

    ex2.take(5)
    

    You're going to see an iterable. That's fine if you're going to do something with this data; just move on. But if all you want is to print or inspect the values before moving on, here is a bit of a hack:

    ex2.toDF().show(20, False)
    

    or just

    ex2.toDF().show()
    

    This will show the values of the data. You shouldn't use collect() here, because it returns all the data to the driver, and if you're working with a lot of data, that's going to blow up on you. If ex2 = ex1.groupByKey() is your final step and you actually want those results returned, then yes, use collect(), but make sure you know the data being returned is low volume:

    print(ex2.collect())
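    To see why the reprs show up instead of the values, here is a minimal plain-Python sketch (no Spark needed; the names pairs, grouped, and materialized are illustrative): grouping produces lazy iterables, much like groupByKey yields ResultIterable objects rather than lists, so printing them shows object reprs until you materialize them.

    ```python
    # Plain-Python analogy: grouping yields lazy iterables,
    # similar to how groupByKey yields ResultIterable objects.
    from collections import defaultdict

    pairs = [(0, "a"), (1, "b"), (0, "c")]

    # Group values by key
    groups = defaultdict(list)
    for k, v in pairs:
        groups[k].append(v)

    # Expose each group as a generator, mimicking a lazy iterable
    grouped = [(k, (v for v in vs)) for k, vs in groups.items()]
    print(grouped)  # prints generator object reprs, not the values

    # Materialize the iterables to actually see the values
    materialized = [(k, list(it)) for k, it in grouped]
    print(materialized)  # [(0, ['a', 'c']), (1, ['b'])]
    ```

    In PySpark itself, the common way to materialize the groups is ex1.groupByKey().mapValues(list), which turns each ResultIterable into a plain list before you collect or show it.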
    

    Here is another nice post on using collect() on RDDs:

    View RDD contents in Python Spark?
