Converting RDD[org.apache.spark.sql.Row] to RDD[org.apache.spark.mllib.linalg.Vector]


Question


I am relatively new to Spark and Scala.

I am starting with the following DataFrame (a single column holding a dense Vector of Doubles per row):

scala> val scaledDataOnly_pruned = scaledDataOnly.select("features")
scaledDataOnly_pruned: org.apache.spark.sql.DataFrame = [features: vector]

scala> scaledDataOnly_pruned.show(5)
+--------------------+
|            features|
+--------------------+
|[-0.0948337274182...|
|[-0.0948337274182...|
|[-0.0948337274182...|
|[-0.0948337274182...|
|[-0.0948337274182...|
+--------------------+
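(For reference, one hypothetical way to build a comparable DataFrame in the Spark 1.x shell; the column name features matches the schema above, but the values and construction are assumptions, not from the original post:)

import org.apache.spark.mllib.linalg.Vectors

// Hypothetical reconstruction of a single-vector-column DataFrame;
// sqlContext is the SQLContext the Spark shell provides.
val scaledDataOnly = sqlContext.createDataFrame(Seq(
  Tuple1(Vectors.dense(0.1, 0.5, 1.2)),
  Tuple1(Vectors.dense(0.2, 0.3, 0.8))
)).toDF("features")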

A straight conversion to an RDD yields an instance of org.apache.spark.rdd.RDD[org.apache.spark.sql.Row]:

scala> val scaledDataOnly_rdd = scaledDataOnly_pruned.rdd
scaledDataOnly_rdd: org.apache.spark.rdd.RDD[org.apache.spark.sql.Row] = MapPartitionsRDD[32] at rdd at <console>:66

Does anyone know how to convert this DF to an instance of org.apache.spark.rdd.RDD[org.apache.spark.mllib.linalg.Vector] instead? My various attempts have been unsuccessful so far.

Thank you in advance for any pointers!


Answer 1:


Just found it out:

import org.apache.spark.sql.Row
import org.apache.spark.mllib.linalg.Vector

// Go through .rdd so the lambda receives Rows, then pull the Vector out of column 0.
val scaledDataOnly_rdd = scaledDataOnly_pruned.rdd.map { x: Row => x.getAs[Vector](0) }
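As a quick sanity check (an illustrative addition, not part of the original answer), the element type should now be the mllib Vector rather than Row:

scaledDataOnly_rdd
// org.apache.spark.rdd.RDD[org.apache.spark.mllib.linalg.Vector]
scaledDataOnly_rdd.first() // yields a single dense Vector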



Answer 2:


EDIT: use a more sophisticated way to interpret the fields in a Row.

This worked for me:

import org.apache.spark.mllib.linalg.Vectors

val featureVectors = features.map(row => {
  // Coerce each field of the Row to a Double, then pack them into a dense vector.
  Vectors.dense(row.toSeq.toArray.map({
    case d: Double => d
    case s: String => s.toDouble
    case l: Long   => l.toDouble
    case _         => 0.0 // fallback for any other field type
  }))
})

Here, features is a Spark SQL DataFrame.
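For example, this pattern fits a DataFrame of plain scalar columns rather than a pre-assembled vector column. A minimal sketch, assuming made-up column names and toy data:

val features = sqlContext.createDataFrame(Seq(
  ("1.5", 10L),
  ("2.5", 20L)
)).toDF("height", "count")

// Applying the mapping above turns each Row into a dense vector:
// [1.5,10.0] and [2.5,20.0].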




Answer 3:


import org.apache.spark.mllib.linalg.Vectors

scaledDataOnly
  .rdd
  .map { row => Vectors.dense(row.getAs[Seq[Double]]("features").toArray) }
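Note that getAs[Seq[Double]] assumes the features column is physically stored as an array of doubles. With the schema shown in the question ([features: vector]), each Row already holds an mllib Vector, so reading it directly (as in Answer 1) may be needed instead; a sketch under that assumption:

import org.apache.spark.mllib.linalg.Vector

scaledDataOnly
  .rdd
  .map { row => row.getAs[Vector]("features") }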


Source: https://stackoverflow.com/questions/33048177/converting-rddorg-apache-spark-sql-row-to-rddorg-apache-spark-mllib-linalg-ve
