How to get Filename/File Contents as key/value input for MAP when running a Hadoop MapReduce Job?

Niels Basjes

The solution is to create your own FileInputFormat class. You have access to the name of the input file from the FileSplit that this FileInputFormat receives (getPath). Be sure to override isSplitable in your FileInputFormat so that it always returns false.

You will also need a custom RecordReader that returns the entire file as a single "Record" value.
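For reference, here is a minimal sketch of such an input format and record reader using the org.apache.hadoop.mapreduce API; the class names WholeFileInputFormat and WholeFileRecordReader are only illustrative, and the reader assumes each file fits in a single byte array:

import java.io.IOException;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

public class WholeFileInputFormat extends FileInputFormat<NullWritable, BytesWritable> {

    @Override
    protected boolean isSplitable(JobContext context, Path file) {
        // Never split: each input file becomes exactly one record
        return false;
    }

    @Override
    public RecordReader<NullWritable, BytesWritable> createRecordReader(
            InputSplit split, TaskAttemptContext context) {
        return new WholeFileRecordReader();
    }
}

class WholeFileRecordReader extends RecordReader<NullWritable, BytesWritable> {

    private FileSplit split;
    private TaskAttemptContext context;
    private final BytesWritable value = new BytesWritable();
    private boolean processed = false;

    @Override
    public void initialize(InputSplit split, TaskAttemptContext context) {
        this.split = (FileSplit) split;
        this.context = context;
    }

    @Override
    public boolean nextKeyValue() throws IOException {
        if (processed) {
            return false;
        }
        // Read the whole file into one BytesWritable value (files > 2 GB will not fit)
        byte[] contents = new byte[(int) split.getLength()];
        Path file = split.getPath();
        FileSystem fs = file.getFileSystem(context.getConfiguration());
        FSDataInputStream in = null;
        try {
            in = fs.open(file);
            IOUtils.readFully(in, contents, 0, contents.length);
            value.set(contents, 0, contents.length);
        } finally {
            IOUtils.closeStream(in);
        }
        processed = true;
        return true;
    }

    @Override
    public NullWritable getCurrentKey() { return NullWritable.get(); }

    @Override
    public BytesWritable getCurrentValue() { return value; }

    @Override
    public float getProgress() { return processed ? 1.0f : 0.0f; }

    @Override
    public void close() {
        // Nothing to clean up; the input stream is closed in nextKeyValue()
    }
}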

Be careful with files that are too big: you will effectively load the entire file into RAM, and by default a task tracker only has 200 MB of RAM available.

As an alternative to your approach, you could add the binary files to HDFS directly. Then create an input file that contains the HDFS paths of all the binary files; this can be done dynamically using Hadoop's FileSystem class. Finally, write a mapper that processes that input by opening input streams, again using FileSystem.
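A rough sketch of that alternative, assuming the job input is a plain text file with one HDFS path per line; the class name BinaryPathMapper and the byte-counting logic are just placeholders for whatever processing you actually need:

import java.io.IOException;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class BinaryPathMapper extends Mapper<LongWritable, Text, Text, LongWritable> {

    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        // Each input record is one HDFS path pointing at a binary file
        Path file = new Path(line.toString().trim());
        FileSystem fs = file.getFileSystem(context.getConfiguration());
        FSDataInputStream in = null;
        try {
            in = fs.open(file);
            // Stream through the file; here we just count its bytes as a placeholder
            byte[] buffer = new byte[4096];
            long totalBytes = 0;
            int read;
            while ((read = in.read(buffer)) != -1) {
                totalBytes += read;
            }
            context.write(new Text(file.getName()), new LongWritable(totalBytes));
        } finally {
            IOUtils.closeStream(in);
        }
    }
}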

You can use WholeFileInputFormat (https://code.google.com/p/hadoop-course/source/browse/HadoopSamples/src/main/java/mr/wholeFile/?r=3)

In the mapper, you can get the name of the file like this:

public void map(NullWritable key, BytesWritable value, Context context)
        throws IOException, InterruptedException {

    // The input split tells you which file this record came from
    Path filePath = ((FileSplit) context.getInputSplit()).getPath();
    String fileNameString = filePath.getName();

    // getBytes() returns the backing buffer, which may be padded past the
    // actual content; use getLength() to know how many bytes are valid
    byte[] fileContent = value.getBytes();
}
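For completeness, a driver along these lines would wire a whole-file job together; it assumes the WholeFileInputFormat sketched above and bundles a minimal mapper that just emits (file name, file contents) pairs:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WholeFileDriver {

    // Minimal mapper wrapping the map method above: emits (file name, file contents)
    public static class WholeFileMapper
            extends Mapper<NullWritable, BytesWritable, Text, BytesWritable> {
        @Override
        protected void map(NullWritable key, BytesWritable value, Context context)
                throws IOException, InterruptedException {
            Path filePath = ((FileSplit) context.getInputSplit()).getPath();
            context.write(new Text(filePath.getName()), value);
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "whole-file-example");
        job.setJarByClass(WholeFileDriver.class);
        job.setInputFormatClass(WholeFileInputFormat.class); // the sketch above
        job.setMapperClass(WholeFileMapper.class);
        job.setNumReduceTasks(0);                            // map-only job
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(BytesWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}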