Question
Here is my input data, which has four space-delimited columns. I want to add the second and third columns and print the result:
sachin 200 10 2
sachin 900 20 2
sachin 500 30 3
Raju 400 40 4
Mike 100 50 5
Raju 50 60 6
My code is partway there:
from pyspark import SparkContext

sc = SparkContext()

def getLineInfo(lines):
    spLine = lines.split(' ')
    name = str(spLine[0])
    cash = int(spLine[1])
    cash2 = int(spLine[2])
    cash3 = int(spLine[3])
    return (name, cash, cash2)

myFile = sc.textFile("D:\PYSK\cash.txt")
rdd = myFile.map(getLineInfo)
print rdd.collect()
From this I got the result:
[('sachin', 200, 10), ('sachin', 900, 20), ('sachin', 500, 30), ('Raju', 400, 40), ('Mike', 100, 50), ('Raju', 50, 60)]
The final result I need is shown below: add the 2nd and 3rd columns and keep the remaining fields:
sachin 210 2
sachin 920 2
sachin 530 3
Raju 440 4
Mike 150 5
Raju 110 6
Answer 1:
Use this:
def getLineInfo(lines):
    spLine = lines.split(' ')
    name = str(spLine[0])
    cash = int(spLine[1])
    cash2 = int(spLine[2])
    cash3 = int(spLine[3])
    return (name, cash + cash2, cash3)
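
Plugging this into the pipeline from the question and printing each row as space-separated fields might look like the sketch below. The file path is the one from the question; the final formatting loop is an assumption about how the output should be displayed (you could also just collect the tuples as before):

from pyspark import SparkContext

sc = SparkContext()

def getLineInfo(lines):
    spLine = lines.split(' ')
    name = str(spLine[0])
    cash = int(spLine[1])
    cash2 = int(spLine[2])
    cash3 = int(spLine[3])
    # sum the 2nd and 3rd columns, keep the name and the 4th column
    return (name, cash + cash2, cash3)

myFile = sc.textFile("D:\PYSK\cash.txt")  # path from the question
rdd = myFile.map(getLineInfo)

# print each row as space-separated fields to match the desired output
for name, total, last in rdd.collect():
    print('%s %d %d' % (name, total, last))
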
Source: https://stackoverflow.com/questions/39392237/how-to-add-multiple-columns-in-apache-spark