Get a range of columns of Spark RDD


If it is a DataFrame, then something like this should work:

import org.apache.spark.sql.functions.col

val df = rdd.toDF
df.select(df.columns.slice(101, 211).map(col): _*)
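To make that concrete, here is a minimal, self-contained sketch (run in spark-shell with the toDF implicits in scope and the same col import as above; the column names and indices are just illustrative):

// Illustrative DataFrame with four columns
val sample = sc.parallelize(Seq((1, 2, 3, 4), (5, 6, 7, 8))).toDF("a", "b", "c", "d")

// Keep the columns at positions 1 and 2 (slice's end index is exclusive)
val subset = sample.select(sample.columns.slice(1, 3).map(col): _*)

subset.show()
// +---+---+
// |  b|  c|
// +---+---+
// |  2|  3|
// |  6|  7|
// +---+---+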

Assuming you have an RDD of Array (or any other Scala collection, e.g. List), you can do something like this:

val data: RDD[Array[Int]] = sc.parallelize(Array(Array(1,2,3), Array(4,5,6)))
val sliced: RDD[Array[Int]] = data.map(_.slice(0,2))

sliced.collect()
> Array[Array[Int]] = Array(Array(1, 2), Array(4, 5))
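The same map/slice pattern applies to a middle range of columns or to other collections such as List; a quick sketch with illustrative indices:

val listData: RDD[List[Int]] = sc.parallelize(Seq(List(1, 2, 3, 4), List(5, 6, 7, 8)))
val middle: RDD[List[Int]] = listData.map(_.slice(1, 3))  // end index is exclusive

middle.collect()
> Array(List(2, 3), List(6, 7))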

Kind of an old thread, but I recently had to do something similar and searched around. I needed to select all but the last column, where I had 200+ columns.

Spark 1.4.1
Scala 2.10.4

val df = hiveContext.sql("SELECT * FROM foobar")
val cols = df.columns.slice(0, df.columns.length - 1)
val new_df = df.select(cols.head, cols.tail:_*)
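Since Spark 1.4, DataFrame also has a drop method, so the "all but the last column" case can be written more directly; a small alternative sketch, assuming the same df as above:

// Drop the last column by name instead of selecting everything else
val new_df = df.drop(df.columns.last)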