I am creating a program to analyze PDF, DOC and DOCX files. These files are stored in HDFS.
When I start my MapReduce job, I want the map function to have the filename as the key and the file contents as the value.
You can use WholeFileInputFormat (https://code.google.com/p/hadoop-course/source/browse/HadoopSamples/src/main/java/mr/wholeFile/?r=3).
In the mapper you can get the name of the file like this:
public void map(NullWritable key, BytesWritable value, Context context)
        throws IOException, InterruptedException {
    // The split for a whole-file input format is a FileSplit, which carries the file's path
    Path filePath = ((FileSplit) context.getInputSplit()).getPath();
    String fileNameString = filePath.getName();
    // copyBytes() returns exactly the file contents; getBytes() would return the
    // padded backing buffer, which can be longer than the file itself
    byte[] fileContent = value.copyBytes();
}
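For completeness, wiring that input format into a job driver would look roughly like this. This is a minimal sketch: WholeFileInputFormat refers to the class from the linked sample, and the driver/mapper class names and paths are placeholders of my own, not part of that sample:

Configuration conf = new Configuration();
Job job = Job.getInstance(conf, "document-analysis");  // Hadoop 2.x API; use new Job(conf, ...) on 1.x
job.setJarByClass(DocumentDriver.class);               // hypothetical driver class
job.setInputFormatClass(WholeFileInputFormat.class);
job.setMapperClass(DocumentMapper.class);              // hypothetical mapper containing the map() above
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(Text.class);
FileInputFormat.addInputPath(job, new Path("/user/me/docs"));   // example input directory
FileOutputFormat.setOutputPath(job, new Path("/user/me/out"));  // example output directory
System.exit(job.waitForCompletion(true) ? 0 : 1);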
The solution to this is to create your own FileInputFormat class. You have access to the name of the input file from the FileSplit that this FileInputFormat receives (via getPath()). Be sure to override isSplitable() in your FileInputFormat to always return false, so each file goes to a single mapper in one piece.
You will also need a custom RecordReader that returns the entire file as a single "record" value.
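A minimal sketch of such a pair might look like the following. The class names are my own; the pattern is the standard whole-file-reading recipe, where the RecordReader reads the split's entire file into one BytesWritable:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

public class WholeFileInputFormat extends FileInputFormat<NullWritable, BytesWritable> {

    @Override
    protected boolean isSplitable(JobContext context, Path file) {
        return false;   // one file = one split = one map() call
    }

    @Override
    public RecordReader<NullWritable, BytesWritable> createRecordReader(
            InputSplit split, TaskAttemptContext context) {
        return new WholeFileRecordReader();
    }
}

class WholeFileRecordReader extends RecordReader<NullWritable, BytesWritable> {

    private FileSplit fileSplit;
    private Configuration conf;
    private final BytesWritable value = new BytesWritable();
    private boolean processed = false;

    @Override
    public void initialize(InputSplit split, TaskAttemptContext context) {
        fileSplit = (FileSplit) split;
        conf = context.getConfiguration();
    }

    @Override
    public boolean nextKeyValue() throws IOException {
        if (processed) {
            return false;   // the single record has already been delivered
        }
        // Read the whole file into memory as one record
        byte[] contents = new byte[(int) fileSplit.getLength()];
        Path file = fileSplit.getPath();
        FileSystem fs = file.getFileSystem(conf);
        FSDataInputStream in = null;
        try {
            in = fs.open(file);
            IOUtils.readFully(in, contents, 0, contents.length);
            value.set(contents, 0, contents.length);
        } finally {
            IOUtils.closeStream(in);
        }
        processed = true;
        return true;
    }

    @Override
    public NullWritable getCurrentKey() {
        return NullWritable.get();
    }

    @Override
    public BytesWritable getCurrentValue() {
        return value;
    }

    @Override
    public float getProgress() {
        return processed ? 1.0f : 0.0f;
    }

    @Override
    public void close() {
        // nothing to do; the stream is closed in nextKeyValue()
    }
}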
Be careful with files that are too big: this approach loads the entire file into RAM, and the default heap for a child task JVM (mapred.child.java.opts) is only 200 MB.
As an alternative to your approach, maybe add the binary files to HDFS directly. Then create an input file that contains the HDFS paths of all the binary files; this listing could be generated dynamically using Hadoop's FileSystem class. Lastly, create a mapper that processes each path by opening an input stream, again using FileSystem.
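A sketch of that mapper, assuming the job input is a plain text file with one HDFS path per line, read with the default TextInputFormat (the class name and output types are placeholders of my own):

import java.io.IOException;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class BinaryFileMapper extends Mapper<LongWritable, Text, Text, Text> {

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Each input record is one line of the listing: an HDFS path
        Path file = new Path(value.toString().trim());
        FileSystem fs = file.getFileSystem(context.getConfiguration());
        FSDataInputStream in = fs.open(file);
        try {
            // Parse the PDF/DOC/DOCX from 'in' here and emit results with context.write(...)
        } finally {
            in.close();
        }
    }
}

The advantage over the whole-file input format is that the mapper can stream each document from the input stream instead of buffering the whole file in memory, which sidesteps the heap limit mentioned above.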