spark - scala: not a member of org.apache.spark.sql.Row


Question


I am trying to convert a DataFrame to an RDD and then perform the operation below to return tuples:

df.rdd.map { t=>
 (t._2 + "_" + t._3 , t)
}.take(5)

Then I got the error below. Does anyone have any ideas? Thanks!

<console>:37: error: value _2 is not a member of org.apache.spark.sql.Row
               (t._2 + "_" + t._3 , t)
                  ^

Answer 1:


When you convert a DataFrame to an RDD, you get an RDD[Row], so when you use map, your function receives a Row as its parameter. Therefore, you must use the Row methods to access its members (note that indexing starts from 0):

import org.apache.spark.sql.Row

df.rdd.map { row: Row =>
  (row.getString(1) + "_" + row.getString(2), row)
}.take(5)

You can find more examples, and the full list of methods available on Row objects, in the Spark Scaladoc.
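For instance, Row also supports name-based access through getAs; here is a minimal sketch (the column names col2 and col3 are assumptions, since the question does not show its schema):

import org.apache.spark.sql.Row

df.rdd.map { row: Row =>
  // Name-based access; equivalent to the positional getString calls above
  (row.getAs[String]("col2") + "_" + row.getAs[String]("col3"), row)
}.take(5)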

Edit: I don't know why you are doing this operation, but if you simply want to concatenate String columns of a DataFrame, you may consider the following option:

import org.apache.spark.sql.functions._
val newDF = df.withColumn("concat", concat(df("col2"), lit("_"), df("col3")))
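As a quick sanity check, the snippet below applies the same transform to a small made-up DataFrame (the schema and values are illustrative, not taken from the question):

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

val spark = SparkSession.builder().master("local[*]").appName("concat-demo").getOrCreate()
import spark.implicits._

// Illustrative data only; the original question does not show its schema
val df = Seq((1, "a", "x"), (2, "b", "y")).toDF("col1", "col2", "col3")
val newDF = df.withColumn("concat", concat(df("col2"), lit("_"), df("col3")))
newDF.show()  // the "concat" column holds "a_x" and "b_y"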



Answer 2:


You can access each element of a Row as if it were a List or an Array, that is, by using (index); you can also use the get method.

For example:

df.rdd.map { t =>
  // apply(i) returns Any, so convert to String explicitly (0-based indices)
  (t(1).toString + "_" + t(2).toString, t)
}.take(5)
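Another idiom worth noting (not in the original answers) is destructuring the Row with pattern matching; a minimal sketch assuming the second and third columns are Strings:

import org.apache.spark.sql.Row

df.rdd.map {
  // Row.unapplySeq enables pattern matching; _* absorbs any remaining columns.
  // Rows whose second/third columns are not Strings would throw a MatchError.
  case row @ Row(_, col2: String, col3: String, _*) =>
    (col2 + "_" + col3, row)
}.take(5)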


Source: https://stackoverflow.com/questions/37335416/spark-scala-not-a-member-of-org-apache-spark-sql-row
