How to convert Row of a Scala DataFrame into case class most efficiently?

情书的邮戳 2020-12-23 09:49

Once I have got some Row in Spark, from either a DataFrame or Catalyst, I want to convert it to a case class in my code. This can be done by matching:

    someRow match { case Row(a: Long, b: String, c: Double) => myCaseClass(a, b, c) }

Is there a more efficient way to do this?

4 Answers
  •  独厮守ぢ
    2020-12-23 10:11

    DataFrame is simply a type alias of Dataset[Row]. Operations on it are also referred to as "untyped transformations", in contrast to the "typed transformations" that come with strongly typed Scala/Java Datasets.
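
    As a minimal sketch of that difference (the Person case class and the sample data here are assumptions for illustration):

    import org.apache.spark.sql.{DataFrame, Dataset, SparkSession}

    case class Person(name: String, age: Long)

    val spark = SparkSession.builder().master("local[*]").getOrCreate()
    import spark.implicits._

    // Untyped: a DataFrame is a Dataset[Row]; columns are looked up by name at runtime
    val df: DataFrame = Seq(Person("test", 30)).toDF()
    val namesUntyped = df.select($"name")

    // Typed: once in Dataset[Person], fields are checked at compile time
    val ds: Dataset[Person] = df.as[Person]
    val namesTyped = ds.map(_.name)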

    The conversion from Dataset[Row] to Dataset[Person] is quite simple in Spark:

    val DFtoProcess = spark.sql("SELECT * FROM peoples WHERE name = 'test'")  // spark: an active SparkSession

    At this point, Spark gives you your data as a DataFrame = Dataset[Row], a collection of generic Row objects, since it does not know the exact type.
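
    (For that query to return anything, a peoples table or view has to exist. A hypothetical setup, reusing the spark session and implicits from the sketch above:)

    // Made-up sample data registered as the "peoples" temp view
    Seq(("test", 30L), ("other", 40L)).toDF("name", "age")
      .createOrReplaceTempView("peoples")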

    import org.apache.spark.sql.Encoders

    // Create an Encoder for a Java bean class (in this example, Person is a Java bean).
    // Note: Person.class is Java syntax; in Scala this is written classOf[Person].
    // For a Scala case class, use Encoders.product[Person] instead, or simply
    // import spark.implicits._ and call .as[Person] with no explicit encoder.
    val personEncoder = Encoders.bean(classOf[Person])

    val DStoProcess = DFtoProcess.as[Person](personEncoder)
    

    Now, Spark converts the Dataset[Row] into a Dataset[Person] of type-specific Scala/Java JVM objects, as dictated by the class Person.
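
    For the common Scala case-class scenario, here is a minimal sketch (assuming an active SparkSession named spark, and column names in DFtoProcess that line up with the case class fields):

    import org.apache.spark.sql.Dataset

    case class Person(name: String, age: Long)

    import spark.implicits._  // provides Encoders for case classes, so no explicit encoder is needed

    // Dataset[Row] -> Dataset[Person]; column names and types must match the fields
    val dsToProcess: Dataset[Person] = DFtoProcess.as[Person]

    // From here on, transformations are typed
    dsToProcess.filter(_.age >= 18).map(_.name).show()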

    Please refer to the link below, provided by Databricks, for further details:

    https://databricks.com/blog/2016/07/14/a-tale-of-three-apache-spark-apis-rdds-dataframes-and-datasets.html
