Question:
I have a MapReduce job written in Java. It depends on multiple classes. I want to run the MapReduce job on Spark.
What steps should I follow to do the same?
Do I only need to make changes to the MapReduce class?
Thanks!
Answer 1:
This is a very broad question, but the short of it is:
- Create an RDD of the input data.
- Call `map` with your mapper code. Output key-value pairs.
- Call `reduceByKey` with your reducer code.
- Write the resulting RDD to disk.
Spark is more flexible than MapReduce: there is a great variety of methods that you could use between steps 1 and 4 to transform the data.
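For illustration, here is a minimal sketch of those four steps using Spark's Java RDD API, with word count standing in for your actual mapper and reducer logic. The class name `WordCountOnSpark` and the input/output paths are placeholders; you would substitute your own transformation code inside the lambdas.

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

import java.util.Arrays;

public class WordCountOnSpark {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("WordCountOnSpark");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // 1. Create an RDD of the input data (path is a placeholder).
        JavaRDD<String> lines = sc.textFile("hdfs:///input");

        // 2. "Map" phase: emit key-value pairs, as your Mapper would.
        JavaPairRDD<String, Integer> pairs = lines
                .flatMap(line -> Arrays.asList(line.split("\\s+")).iterator())
                .mapToPair(word -> new Tuple2<>(word, 1));

        // 3. "Reduce" phase: combine values per key, as your Reducer would.
        JavaPairRDD<String, Integer> counts = pairs.reduceByKey(Integer::sum);

        // 4. Write the resulting RDD to disk (path is a placeholder).
        counts.saveAsTextFile("hdfs:///output");

        sc.stop();
    }
}
```

Note that unlike Hadoop MapReduce, nothing executes until the action (`saveAsTextFile`) is called; the `flatMap`, `mapToPair`, and `reduceByKey` calls only build up the lineage of transformations, which is what gives Spark the extra flexibility mentioned above.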
Source: https://stackoverflow.com/questions/28889797/mapreduce-to-spark