How to convert a map to Spark's RDD

I guess you want something like this:

import org.apache.spark.rdd.RDD
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.LabeledPoint

val trainMap = Map(
  "x001" -> (-1, Map(0 -> 1.0, 3 -> 5.0)),
  "x002" -> (1, Map(2 -> 5.0, 3 -> 6.0)))

// If you know this upfront; otherwise it can be computed
// using flatMap:
// trainMap.values.flatMap(_._2.keys).max + 1
val nFeatures: Int = ???

val trainRdd: RDD[(String, LabeledPoint)] = sc
  // Convert the Map to a Seq so it can be passed to parallelize
  .parallelize(trainMap.toSeq)
  .map { case (id, (labelInt, values)) =>
    // Convert the nested map to a Seq so it can be passed to Vectors.sparse
    val features = Vectors.sparse(nFeatures, values.toSeq)

    // Convert the label to a Double as required by LabeledPoint
    val label = labelInt.toDouble

    (id, LabeledPoint(label, features))
  }
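If nFeatures is not known upfront, it can be computed from trainMap exactly as the comment suggests; a minimal sketch, assuming feature indices are 0-based:

// Highest feature index across all inner maps, plus one (0-based indices)
val nFeatures: Int = trainMap.values.flatMap(_._2.keys).max + 1

// Sanity check on the small example: bring the RDD back to the driver and print it
trainRdd.collect().foreach(println)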

It can be done in two ways:

  1. Read the file and parse each line into an object: sc.textFile("libsvm_data.txt").map(s => createObject())
  2. Convert the map into a collection of objects and use sc.parallelize()

The first one is preferable.
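For the first approach, note that if the file is already in LIBSVM format, MLlib ships a loader that replaces the hand-written parsing step (createObject() above is only a placeholder); a minimal sketch, assuming the file name "libsvm_data.txt" from the answer:

import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.mllib.util.MLUtils
import org.apache.spark.rdd.RDD

// Parses each "label index:value index:value ..." line into a LabeledPoint.
// Note: unlike the Map-based approach above, this loses the string ids ("x001", ...).
val fromFile: RDD[LabeledPoint] = MLUtils.loadLibSVMFile(sc, "libsvm_data.txt")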
