PySpark: Add a new column with a tuple created from columns


Question


Here I have a dataframe created as follows:

df = spark.createDataFrame([('a',5,'R','X'),('b',7,'G','S'),('c',8,'G','S')], 
                       ["Id","V1","V2","V3"])

It looks like:

+---+---+---+---+
| Id| V1| V2| V3|
+---+---+---+---+
|  a|  5|  R|  X|
|  b|  7|  G|  S|
|  c|  8|  G|  S|
+---+---+---+---+

I'm looking to add a column that is a tuple consisting of V1,V2,V3.

The result should look like:

+---+---+---+---+-------+
| Id| V1| V2| V3|V_tuple|
+---+---+---+---+-------+
|  a|  5|  R|  X|(5,R,X)|
|  b|  7|  G|  S|(7,G,S)|
|  c|  8|  G|  S|(8,G,S)|
+---+---+---+---+-------+

I've tried to use syntax similar to plain Python, but it didn't work:

df.withColumn("V_tuple",list(zip(df.V1,df.V2,df.V3)))

TypeError: zip argument #1 must support iteration.

Any help would be appreciated!


Answer 1:


I'm coming from Scala, but I believe there's a similar way in Python.

Using the sql.functions package methods:

If you want a StructType containing these three columns, use the struct(cols: Column*): Column method like this:

from pyspark.sql.functions import struct
df.withColumn("V_tuple",struct(df.V1,df.V2,df.V3))

But if you want it as a String, you can use the concat(exprs: Column*): Column method like this:

from pyspark.sql.functions import concat
df.withColumn("V_tuple",concat(df.V1,df.V2,df.V3))

With this second method you may have to cast the columns to strings first.

I'm not sure about the Python syntax; just edit the answer if there's a syntax error.

Hope this helps. Best regards.




Answer 2:


Use struct:

from pyspark.sql.functions import struct

df.withColumn("V_tuple", struct(df.V1,df.V2,df.V3))


Source: https://stackoverflow.com/questions/44067861/pyspark-add-a-new-column-with-a-tuple-created-from-columns
