MapReduce to Spark

Submitted by 南楼画角 on 2019-12-14 00:12:56

Question


I have a MapReduce job written in Java. It depends on multiple classes. I want to run the MapReduce job on Spark.

What steps should I follow to do this?

Do I need to make changes only to the MapReduce class?

Thanks!


Answer 1:


This is a very broad question, but the short of it is:

  1. Create an RDD of the input data.
  2. Call map with your mapper code. Output key-value pairs.
  3. Call reduceByKey with your reducer code.
  4. Write the resulting RDD to disk.

Spark is more flexible than MapReduce: between steps 1 and 4 you can apply a wide variety of other transformations (filter, join, groupByKey, and so on) to reshape the data.
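To make those four steps concrete, here is a minimal sketch in Java using word count as a stand-in for your job, since your actual mapper and reducer logic isn't shown. The input/output paths and the app name are placeholders you would replace with your own.

```java
import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

import scala.Tuple2;

public class WordCountOnSpark {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("MapReduceToSpark")
                .setMaster("local[*]"); // local mode for testing; drop this when using spark-submit
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            // Step 1: create an RDD of the input data (placeholder path).
            JavaRDD<String> lines = sc.textFile("hdfs:///input/data.txt");

            // Step 2: the "map" phase — emit key-value pairs, as a Hadoop Mapper would.
            JavaPairRDD<String, Integer> pairs = lines
                    .flatMap(line -> Arrays.asList(line.split("\\s+")).iterator())
                    .mapToPair(word -> new Tuple2<>(word, 1));

            // Step 3: the "reduce" phase — combine values per key, as a Hadoop Reducer would.
            JavaPairRDD<String, Integer> counts = pairs.reduceByKey(Integer::sum);

            // Step 4: write the resulting RDD to disk (placeholder path).
            counts.saveAsTextFile("hdfs:///output/wordcount");
        }
    }
}
```

One thing you get for free: reduceByKey combines values on the map side before shuffling, much like a Hadoop Combiner, so you typically don't need to port that part of your job separately.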



Source: https://stackoverflow.com/questions/28889797/mapreduce-to-spark
