Question
I'm trying to concatenate two DataFrames, which look like this:
df1:
+---+---+
| a| b|
+---+---+
| a| b|
| 1| 2|
+---+---+
only showing top 2 rows
df2:
+---+---+
| c| d|
+---+---+
| c| d|
| 7| 8|
+---+---+
only showing top 2 rows
They both have the same number of rows, and I would like to do something like:
+---+---+---+---+
| a| b| c| d|
+---+---+---+---+
| a| b| c| d|
| 1| 2| 7| 8|
+---+---+---+---+
I tried:
df1=df1.withColumn('c', df2.c).collect()
df1=df1.withColumn('d', df2.d).collect()
But without success; it gives me this error:
Traceback (most recent call last):
  File "/usr/hdp/current/spark-client/python/pyspark/sql/utils.py", line 45, in deco
    return f(*a, **kw)
  File "/usr/hdp/current/spark-client/python/lib/py4j-0.9-src.zip/py4j/protocol.py", line 308, in get_return_value
    format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling o2804.withColumn.
Is there a way to do that?
Thanks
Answer 1:
Here is an example of @Suresh's proposal: add a row_number column to each DataFrame and join on it. (withColumn cannot reference a column from a different DataFrame, which is why the attempt above fails.)
from pyspark.sql import functions as F
from pyspark.sql.window import Window

# Tag each row with its position, then join the two DataFrames on that position.
df1 = sqlctx.createDataFrame([('a', 'b'), ('1', '2')], ['a', 'b']) \
    .withColumn("row_number", F.row_number().over(Window.partitionBy().orderBy("a")))
df2 = sqlctx.createDataFrame([('c', 'd'), ('7', '8')], ['c', 'd']) \
    .withColumn("row_number", F.row_number().over(Window.partitionBy().orderBy("c")))

df3 = df1.join(df2, df1.row_number == df2.row_number, 'inner') \
    .select(df1.a, df1.b, df2.c, df2.d)
df3.show()
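Note that Window.partitionBy() with no partition column moves all rows into a single partition (Spark logs a warning about this), so it is fine for small data but will not scale. As an alternative, here is a minimal sketch that pairs rows by position with RDD zipWithIndex instead. This is a suggestion rather than part of the original answer: it assumes both DataFrames have the same row count, and the helper name add_index is hypothetical.

# Recreate the original two-column DataFrames from the question.
df1 = sqlctx.createDataFrame([('a', 'b'), ('1', '2')], ['a', 'b'])
df2 = sqlctx.createDataFrame([('c', 'd'), ('7', '8')], ['c', 'd'])

def add_index(df):
    # zipWithIndex pairs each Row with its position; flatten Row + index into one tuple.
    return df.rdd.zipWithIndex() \
        .map(lambda pair: tuple(pair[0]) + (pair[1],)) \
        .toDF(df.columns + ["idx"])

df3 = add_index(df1).join(add_index(df2), "idx", "inner").drop("idx")
df3.show()

Either way, each row of df1 is placed next to the row of df2 in the same position, giving the four-column layout shown in the question.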
Source: https://stackoverflow.com/questions/44305012/concatenate-two-dataframes-pyspark