Working of RecordReader in Hadoop

Submitted by 放肆的年华 on 2019-12-09 05:31:11

Question


Can anyone explain how the RecordReader actually works? How do the methods nextKeyValue(), getCurrentKey() and getProgress() work after the program starts executing?


Answer 1:


(new API): The default Mapper class has a run method which looks like this:

public void run(Context context) throws IOException, InterruptedException {
    setup(context);
    while (context.nextKeyValue()) {
        map(context.getCurrentKey(), context.getCurrentValue(), context);
    }
    cleanup(context);
}

The Context.nextKeyValue(), Context.getCurrentKey() and Context.getCurrentValue() methods are wrappers around the corresponding RecordReader methods. See the source file src/mapred/org/apache/hadoop/mapreduce/MapContext.java.

So this loop executes and calls your Mapper implementation's map(K, V, Context) method.
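The contract described above can be illustrated with a small, Hadoop-independent sketch. The class and variable names here (ToyLineRecordReader, ReaderLoopDemo, drive) are hypothetical stand-ins, not part of the Hadoop API; the point is that the reader holds a cursor, nextKeyValue() advances it and reports whether another record exists, and getCurrentKey()/getCurrentValue() return the record under the cursor, exactly as the run-loop assumes:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical, Hadoop-independent sketch of the RecordReader contract.
class ToyLineRecordReader {
    private final String[] lines;   // stands in for the bytes of an input split
    private int pos = -1;           // cursor; -1 means "before the first record"
    private long offset = 0;        // running byte offset, used as the key

    ToyLineRecordReader(String[] lines) { this.lines = lines; }

    // Advance to the next record; return false when the split is exhausted.
    boolean nextKeyValue() {
        if (pos >= 0) offset += lines[pos].length() + 1; // +1 for the newline
        pos++;
        return pos < lines.length;
    }

    long getCurrentKey()     { return offset; }      // byte offset of the line
    String getCurrentValue() { return lines[pos]; }  // the line itself

    // Fraction of the split consumed so far, as getProgress() would report.
    float getProgress() {
        return lines.length == 0 ? 1f : (float) (pos + 1) / lines.length;
    }
}

public class ReaderLoopDemo {
    // Mirrors the Mapper.run() loop: pull records until nextKeyValue() is false.
    static List<String> drive(ToyLineRecordReader reader) {
        List<String> out = new ArrayList<>();
        while (reader.nextKeyValue()) {
            out.add(reader.getCurrentKey() + "\t" + reader.getCurrentValue());
        }
        return out;
    }

    public static void main(String[] args) {
        ToyLineRecordReader r = new ToyLineRecordReader(new String[]{"foo", "bar"});
        for (String kv : drive(r)) System.out.println(kv);
    }
}
```

This mirrors how Hadoop's built-in line reader keys each record by its byte offset in the split, which is why a TextInputFormat mapper receives LongWritable keys.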

Specifically, what else would you like to know?




Answer 2:


org.apache.hadoop.mapred.MapTask - runNewMapper()

Important steps in runNewMapper():

  1. Creates a new mapper

  2. Gets the input split for the mapper

  3. Gets a RecordReader for the split

  4. Initializes the RecordReader

  5. Uses the RecordReader to iterate via nextKeyValue(), passing each key/value pair to the mapper's map() method

  6. Cleans up
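The six steps above can be sketched as a tiny harness. Everything here (MiniReader, MiniMapper, IndexReader, runTask) is an illustrative assumption, not the Hadoop API; the comments map each line back to the numbered steps:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

// Hypothetical mini-harness mirroring the six steps of runNewMapper().
public class MiniMapTask {

    interface MiniReader<K, V> {
        void initialize(List<V> split);          // step 4: initialize record reader
        boolean nextKeyValue();                  // step 5: advance to next record
        K getCurrentKey();
        V getCurrentValue();
        void close();                            // step 6: clean up
    }

    // A reader that keys each record by its index in the split.
    static class IndexReader implements MiniReader<Integer, String> {
        private Iterator<String> it;
        private int key = -1;
        private String val;
        public void initialize(List<String> split) { it = split.iterator(); }
        public boolean nextKeyValue() {
            if (!it.hasNext()) return false;
            key++;
            val = it.next();
            return true;
        }
        public Integer getCurrentKey()   { return key; }
        public String getCurrentValue()  { return val; }
        public void close()              { it = null; }
    }

    interface MiniMapper<K, V> { void map(K key, V value, List<String> out); }

    static List<String> runTask(List<String> split) {
        MiniMapper<Integer, String> mapper =             // step 1: create mapper
            (k, v, out) -> out.add(k + ":" + v.toUpperCase());
        // step 2: the input split is handed to us (the `split` argument)
        IndexReader reader = new IndexReader();          // step 3: reader for the split
        reader.initialize(split);                        // step 4: initialize it
        List<String> out = new ArrayList<>();
        while (reader.nextKeyValue()) {                  // step 5: iterate and map
            mapper.map(reader.getCurrentKey(), reader.getCurrentValue(), out);
        }
        reader.close();                                  // step 6: clean up
        return out;
    }

    public static void main(String[] args) {
        System.out.println(runTask(Arrays.asList("a", "b")));
    }
}
```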



Source: https://stackoverflow.com/questions/10943472/working-of-recordreader-in-hadoop
