Question
I have textRDD: org.apache.spark.rdd.RDD[(String, String)]
I would like to convert it to a DataFrame. The columns should correspond to the title and content of each page (row).
Answer 1:
Use toDF(), providing the column names if you have them:
val textDF = textRDD.toDF("title": String, "content": String)
textDF: org.apache.spark.sql.DataFrame = [title: string, content: string]
or
val textDF = textRDD.toDF()
textDF: org.apache.spark.sql.DataFrame = [_1: string, _2: string]
The Spark shell imports the implicits automatically (I am using version 1.5), but in a standalone application you may need import sqlContext.implicits._ yourself.
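For completeness, here is a minimal, self-contained sketch of this approach in a standalone application. The SparkConf/SQLContext setup and the sample (title, content) pairs are illustrative assumptions, not part of the original question:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object RddToDfExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("rdd-to-df").setMaster("local[*]")
    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)
    // Needed in an application; the shell does this automatically
    import sqlContext.implicits._

    // Sample (title, content) pairs standing in for the real textRDD
    val textRDD = sc.parallelize(Seq(
      ("Page A", "content of page A"),
      ("Page B", "content of page B")
    ))

    val textDF = textRDD.toDF("title", "content")
    textDF.printSchema()   // title: string, content: string
    textDF.show()

    sc.stop()
  }
}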
Answer 2:
I usually do this like the following:
Create a case class like this:
case class DataFrameRecord(property1: String, property2: String)
Then you can use map to convert into the new structure using the case class:
rdd.map(p => DataFrameRecord(p._1, p._2)).toDF()
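Applied to the original textRDD of (title, content) pairs, the same pattern looks roughly like this; the field names and the import are assumptions chosen to match the question:

// Field names match the question's title/content columns
case class PageRecord(title: String, content: String)

// Requires import sqlContext.implicits._ (or spark.implicits._ on Spark 2+)
val pagesDF = textRDD.map(p => PageRecord(p._1, p._2)).toDF()
// pagesDF: org.apache.spark.sql.DataFrame = [title: string, content: string]

The advantage over toDF() with no arguments is that the case class gives the columns meaningful names instead of _1 and _2.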
Source: https://stackoverflow.com/questions/33023330/convert-an-rdd-to-a-dataframe-in-spark-using-scala