Data sharing in Hadoop Map Reduce chaining

天命终不由人 2021-01-06 20:28

Is it possible to share a value between a reducer and the mapper of the next job in a chain?

Or is it possible to store the output of the first reducer in memory so that the second mapper can access it?

2 answers
  • 2021-01-06 20:42

    If the number of distinct rows produced by Reducer1 is small (say you have 10000 (id, price) tuples), two-stage processing is preferred. You can load the results of the first map/reduce job into memory in each Map2 mapper and use them to filter the input data. That way no unneeded data is transferred over the network and all data is processed locally. With combiners the amount of data can be reduced even further.

    If the number of distinct rows is huge, it looks like you need to read the data twice.
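    The in-memory filtering described above can be sketched in plain Java, leaving out the Hadoop API. The class and method names (`MapSideFilter`, `loadSmallSide`, `filterLargeSide`) are hypothetical; in a real job, the loading would happen in the second mapper's `setup()` method, typically reading the first job's output from the distributed cache.

    ```java
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class MapSideFilter {
        // Simulates loading Reducer1's small (id, price) output into memory,
        // as the second-stage mapper would do once in setup().
        static Map<String, Double> loadSmallSide(String[] lines) {
            Map<String, Double> prices = new HashMap<>();
            for (String line : lines) {
                String[] parts = line.split("\t");
                prices.put(parts[0], Double.parseDouble(parts[1]));
            }
            return prices;
        }

        // Simulates Map2: keep only records whose id appears in the in-memory
        // table, so no unneeded data leaves the mapper over the network.
        static List<String> filterLargeSide(List<String> records,
                                            Map<String, Double> prices) {
            List<String> kept = new ArrayList<>();
            for (String rec : records) {
                String id = rec.split(",")[0];
                if (prices.containsKey(id)) {
                    kept.add(rec);
                }
            }
            return kept;
        }
    }
    ```

    The same pattern is often called a map-side (replicated) join: it only works when the small side fits in each mapper's heap.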

  • 2021-01-06 21:04

    Each job is independent of the others, so without storing the output in an intermediate location it's not possible to share data across jobs.
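    That hand-off through an intermediate location can be sketched in plain Java (the `ChainedJobs` class and method names are hypothetical): "job 1" persists its output, and "job 2" sees that data only by reading it back. In real Hadoop the intermediate location would be an HDFS directory passed to `FileOutputFormat.setOutputPath()` for the first job and used as the input path of the second.

    ```java
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.Arrays;
    import java.util.List;

    public class ChainedJobs {
        // "Job 1": persist the reducer's output to an intermediate location.
        static Path runJob1(Path dir) throws IOException {
            Path out = dir.resolve("job1-output.txt");
            Files.write(out, Arrays.asList("a\t10", "b\t20"));
            return out;
        }

        // "Job 2": read the intermediate output as input. The two jobs share
        // data only through this stored file, never through shared memory.
        static List<String> runJob2(Path intermediate) throws IOException {
            return Files.readAllLines(intermediate);
        }
    }
    ```

    A driver program would run the jobs in sequence, waiting for the first to complete before submitting the second.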

    FYI, in the MapReduce model the map tasks don't talk to each other, and the same goes for the reduce tasks. Iterative algorithms in plain MapReduce require running the same job again and again, with no communication between the mappers; Apache Giraph, which runs on Hadoop, instead uses communication between the tasks of a single job for such iterative algorithms.

    I'm not sure which algorithm is being implemented or why MapReduce was chosen, but every MR algorithm can also be implemented in BSP. There is a paper comparing BSP with MR; some algorithms perform better in BSP than in MR. Apache Hama is an implementation of the BSP model, in the same way that Apache Hadoop is an implementation of MR.
