I have a Spark dataframe with the following data (I use spark-csv to load the data in):
key,value
1,10
2,12
3,0
1,20
How about this? I agree it still round-trips through an RDD before coming back to a DataFrame.
df.select('key', 'value').rdd.map(lambda x: (x['key'], x['value'])).reduceByKey(lambda a, b: a + b).toDF(['key', 'value'])
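For intuition, the reduceByKey step just folds together all values that share a key. A minimal sketch of that semantics in plain Python (no Spark needed; `rows` and `reduce_by_key` are illustrative names, not Spark APIs):

```python
def reduce_by_key(pairs, func):
    """Mimic Spark's reduceByKey: combine values sharing a key with func."""
    acc = {}
    for k, v in pairs:
        # First value for a key is kept as-is; later ones are folded in.
        acc[k] = func(acc[k], v) if k in acc else v
    return sorted(acc.items())

# Sample rows mirroring the CSV in the question.
rows = [(1, 10), (2, 12), (3, 0), (1, 20)]

print(reduce_by_key(rows, lambda a, b: a + b))  # [(1, 30), (2, 12), (3, 0)]
```

Note the two rows with key 1 (10 and 20) collapse into a single (1, 30) pair, which is exactly what the Spark expression above produces before `toDF` turns it back into a DataFrame.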