Spark DataFrame equivalent to Pandas Dataframe `.iloc()` method?

广开言路 2020-12-19 02:12

Is there a way to reference Spark DataFrame columns by position using an integer?

Analogous Pandas DataFrame operation:

df.iloc[:, 0]  # Give me all the rows at column position 0

3 Answers
    慢半拍i
    2020-12-19 02:25

    Not really, but you can try something like this:

    Python:

    df = sc.parallelize([(1, "foo", 2.0)]).toDF()
    df.select(*df.columns[:1])  # I assume [:1] is what you really want
    ## DataFrame[_1: bigint]
    

    or

    df.select(df.columns[1:3])
    ## DataFrame[_2: string, _3: double]
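
    If you need this often, the pattern generalizes to a small helper (a sketch; the name select_by_position is hypothetical, not part of the Spark API):

    def select_by_position(df, *positions):
        # pandas-iloc-style column selection by integer position
        return df.select(*[df.columns[i] for i in positions])

    select_by_position(df, 0, 2)
    ## DataFrame[_1: bigint, _3: double]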
    

    Scala:

    import org.apache.spark.sql.functions.col

    val df = sc.parallelize(Seq((1, "foo", 2.0))).toDF()
    df.select(df.columns.slice(0, 1).map(col(_)): _*)
    // DataFrame[_1: bigint]
    

    Note:

    Spark SQL doesn't support row indexing, and it is unlikely to ever support it, so it is not possible to index across the row dimension.
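
    That said, a common workaround (not part of this answer, just a minimal sketch) is to materialize an explicit index column from the underlying RDD with zipWithIndex and filter on it. Note this costs a full scan per lookup, not true random access:

    from pyspark.sql import Row

    # Attach a positional index to every row; the ordering is only as
    # deterministic as the underlying RDD's ordering.
    indexed = df.rdd.zipWithIndex().map(
        lambda pair: Row(idx=pair[1], **pair[0].asDict())
    ).toDF()

    indexed.filter(indexed.idx == 0).show()  # roughly "give me row 0"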
