How to convert a pyspark.rdd.PipelinedRDD to a DataFrame without using the collect() method in PySpark?

慢半拍i · 2020-12-18 02:22

I have a pyspark.rdd.PipelinedRDD (Rdd1). When I call Rdd1.collect(), it returns a result like the one below.

 [(10, {3: 3.616726727464709, 4: 2.9996439803387602, 5: 1.6767412921625855}),
  (1, {3: 2.016527311459324, 4: -1.5271512313750577, 5: 1.9665475696370045}),
  (2, {3: 6.230272144805092, 4: 4.033642544526678, 5: 3.1517805604906313}),
  (3, {3: -0.3924680103722977, 4: 2.9757316477407443, 5: -1.5689126834176417})]


        
4 Answers
  •  夕颜 · 2020-12-18 03:03

    This is how you can do it in Scala:

      // toDF on an RDD needs the SparkSession's implicits in scope
      import spark.implicits._

      val Rdd1 = spark.sparkContext.parallelize(Seq(
        (10, Map(3 -> 3.616726727464709, 4 -> 2.9996439803387602, 5 -> 1.6767412921625855)),
        (1, Map(3 -> 2.016527311459324, 4 -> -1.5271512313750577, 5 -> 1.9665475696370045)),
        (2, Map(3 -> 6.230272144805092, 4 -> 4.033642544526678, 5 -> 3.1517805604906313)),
        (3, Map(3 -> -0.3924680103722977, 4 -> 2.9757316477407443, 5 -> -1.5689126834176417))
      ))

      // Flatten each (id, map) pair into one row per map entry and
      // convert straight to a DataFrame -- no collect() involved
      val x = Rdd1.flatMap(x => x._2.map(y => (x._1, y._1, y._2)))
        .toDF("CId", "IId", "score")

      x.show(false)
    

    Output:

    +---+---+-------------------+
    |CId|IId|score              |
    +---+---+-------------------+
    |10 |3  |3.616726727464709  |
    |10 |4  |2.9996439803387602 |
    |10 |5  |1.6767412921625855 |
    |1  |3  |2.016527311459324  |
    |1  |4  |-1.5271512313750577|
    |1  |5  |1.9665475696370045 |
    |2  |3  |6.230272144805092  |
    |2  |4  |4.033642544526678  |
    |2  |5  |3.1517805604906313 |
    |3  |3  |-0.3924680103722977|
    |3  |4  |2.9757316477407443 |
    |3  |5  |-1.5689126834176417|
    +---+---+-------------------+ 
    

    Hopefully you can convert this to PySpark.
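
    For completeness, here is a sketch of the same approach in PySpark (the
    rdd1 and df names are my own, not from the question; toDF on an RDD of
    tuples works once a SparkSession exists):

      from pyspark.sql import SparkSession

      spark = SparkSession.builder.getOrCreate()

      # Same data as above: an RDD of (id, {key: score}) pairs
      # (rdd1/df are illustrative names, not from the question)
      rdd1 = spark.sparkContext.parallelize([
          (10, {3: 3.616726727464709, 4: 2.9996439803387602, 5: 1.6767412921625855}),
          (1, {3: 2.016527311459324, 4: -1.5271512313750577, 5: 1.9665475696370045}),
          (2, {3: 6.230272144805092, 4: 4.033642544526678, 5: 3.1517805604906313}),
          (3, {3: -0.3924680103722977, 4: 2.9757316477407443, 5: -1.5689126834176417}),
      ])

      # Flatten each (id, dict) pair into one row per dict entry, then
      # convert the RDD of tuples directly to a DataFrame -- no collect()
      df = rdd1.flatMap(lambda x: [(x[0], k, v) for k, v in x[1].items()]) \
               .toDF(["CId", "IId", "score"])

      df.show(truncate=False)

    The flatMap runs on the executors, so the data is never pulled back to
    the driver the way collect() would be.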
