How to get the input file name in the mapper in a Hadoop program?

Asked 2020-11-29 18:48 by 粉色の甜心 · Backend · 10 answers

How can I get the name of the input file within a mapper? I have multiple input files stored in the input directory, and each mapper may read a different file, so I need to know which file the mapper has read.

10 Answers
  • 2020-11-29 19:09

    The answers which advocate casting to FileSplit will no longer work, as FileSplit instances are no longer returned for multiple inputs (so you will get a ClassCastException). Instead, org.apache.hadoop.mapreduce.lib.input.TaggedInputSplit instances are returned. Unfortunately, the TaggedInputSplit class is not accessible without using reflection. So here's a utility class I wrote for this. Just do:

    Path path = MapperUtils.getPath(context.getInputSplit());
    

    in your Mapper.setup(Context context) method.

    Here is the source code for my MapperUtils class:

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.InputSplit;
    import org.apache.hadoop.mapreduce.lib.input.FileSplit;
    
    import java.lang.invoke.MethodHandle;
    import java.lang.invoke.MethodHandles;
    import java.lang.invoke.MethodType;
    import java.lang.reflect.Method;
    import java.util.Optional;
    
    public class MapperUtils {
    
        public static Path getPath(InputSplit split) {
            return getFileSplit(split).map(FileSplit::getPath).orElseThrow(() -> 
                new AssertionError("cannot find path from split " + split.getClass()));
        }
    
        public static Optional<FileSplit> getFileSplit(InputSplit split) {
            if (split instanceof FileSplit) {
                return Optional.of((FileSplit)split);
            } else if (TaggedInputSplit.clazz.isInstance(split)) {
                return getFileSplit(TaggedInputSplit.getInputSplit(split));
            } else {
                return Optional.empty();
            }
        }
    
        private static final class TaggedInputSplit {
            private static final Class<?> clazz;
            private static final MethodHandle method;
    
            static {
                try {
                    clazz = Class.forName("org.apache.hadoop.mapreduce.lib.input.TaggedInputSplit");
                    Method m = clazz.getDeclaredMethod("getInputSplit");
                    m.setAccessible(true);
                    method = MethodHandles.lookup().unreflect(m).asType(
                        MethodType.methodType(InputSplit.class, InputSplit.class));
                } catch (ReflectiveOperationException e) {
                    throw new AssertionError(e);
                }
            }
    
            static InputSplit getInputSplit(InputSplit o) {
                try {
                    return (InputSplit) method.invokeExact(o);
                } catch (Throwable e) {
                    throw new AssertionError(e);
                }
            }
        }
    
        private MapperUtils() { }
    
    }
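
    For illustration, here is a minimal sketch of a mapper that uses this utility in setup(); the FileNameMapper class, its input/output types, and its output format are just hypothetical placeholders:

    import java.io.IOException;

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // Hypothetical mapper: prefixes every output record with the name of the file it came from.
    public class FileNameMapper extends Mapper<LongWritable, Text, Text, NullWritable> {

        private String fileName;

        @Override
        protected void setup(Context context) {
            // Works for a plain FileSplit as well as a TaggedInputSplit (MultipleInputs).
            fileName = MapperUtils.getPath(context.getInputSplit()).getName();
        }

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            context.write(new Text(fileName + "\t" + value.toString()), NullWritable.get());
        }
    }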
    
  • 2020-11-29 19:09
    package com.foo.bar;
    
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.InputSplit;
    import org.apache.hadoop.mapreduce.lib.input.FileSplit;
    
    import java.lang.invoke.MethodHandle;
    import java.lang.invoke.MethodHandles;
    import java.lang.invoke.MethodType;
    import java.lang.reflect.Method;
    
    public class MapperUtils {
    
        public static Path getPath(InputSplit split) {
            FileSplit fileSplit = getFileSplit(split);
            if (fileSplit == null) {
                throw new AssertionError("cannot find path from split " + split.getClass());
            } else {
                return fileSplit.getPath();
            }
        }
    
        public static FileSplit getFileSplit(InputSplit split) {
            if (split instanceof FileSplit) {
                return (FileSplit)split;
            } else if (TaggedInputSplit.clazz.isInstance(split)) {
                return getFileSplit(TaggedInputSplit.getInputSplit(split));
            } else {
                return null;
            }
        }
    
        private static final class TaggedInputSplit {
            private static final Class<?> clazz;
            private static final MethodHandle method;
    
            static {
                try {
                    clazz = Class.forName("org.apache.hadoop.mapreduce.lib.input.TaggedInputSplit");
                    Method m = clazz.getDeclaredMethod("getInputSplit");
                    m.setAccessible(true);
                    method = MethodHandles.lookup().unreflect(m).asType(
                        MethodType.methodType(InputSplit.class, InputSplit.class));
                } catch (ReflectiveOperationException e) {
                    throw new AssertionError(e);
                }
            }
    
            static InputSplit getInputSplit(InputSplit o) {
                try {
                    return (InputSplit) method.invokeExact(o);
                } catch (Throwable e) {
                    throw new AssertionError(e);
                }
            }
        }
    
        private MapperUtils() { }
    
    }
    

    I rewrote the code hans-brende provided above in Java 7, and it works. But there is one problem: the File Input Format counter reports Bytes Read=0, i.e. Bytes Read is zero when MultipleInputs is used.

  • 2020-11-29 19:13

    If you are using Hadoop Streaming, you can use the JobConf variables in a streaming job's mapper/reducer.

    As for the mapper's input file name, see the Configured Parameters section: the map.input.file variable (the filename that the map is reading from) is the one that gets the job done. But note that:

    Note: During the execution of a streaming job, the names of the "mapred" parameters are transformed. The dots ( . ) become underscores ( _ ). For example, mapred.job.id becomes mapred_job_id and mapred.jar becomes mapred_jar. To get the values in a streaming job's mapper/reducer use the parameter names with the underscores.


    For example, if you are using Python, you can put these lines in your mapper file:

    import os

    # map.input.file is exposed to the streaming task as the environment
    # variable map_input_file (the dots are replaced by underscores)
    file_name = os.getenv('map_input_file')
    print(file_name)
    
  • 2020-11-29 19:16

    If you're using the regular InputFormat, use this in your Mapper:

    InputSplit is = context.getInputSplit();
    // If the split is a TaggedInputSplit (e.g. with MultipleInputs), unwrap the underlying
    // FileSplit via reflection; handle or declare the reflection exceptions in your method.
    Method method = is.getClass().getMethod("getInputSplit");
    method.setAccessible(true);
    FileSplit fileSplit = (FileSplit) method.invoke(is);
    String currentFileName = fileSplit.getPath().getName();
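
    If the splits in your job are plain FileSplit instances (MultipleInputs not involved), a simpler sketch, assuming the new mapreduce API and its Context object, is just a direct cast:

    // Assumes context.getInputSplit() returns a plain FileSplit; when MultipleInputs
    // wraps the split in a TaggedInputSplit, this cast throws ClassCastException.
    FileSplit fileSplit = (FileSplit) context.getInputSplit();
    String currentFileName = fileSplit.getPath().getName();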
    

    If you're using CombineFileInputFormat, it's a different approach, because it combines several small files into one relatively big split (depending on your configuration). Both the Mapper and the RecordReader run in the same JVM, so you can pass data between them at runtime. You need to implement your own CombineFileRecordReaderWrapper and do as follows:

    public class MyCombineFileRecordReaderWrapper<K, V> extends RecordReader<K, V> {
        ...
        private static String mCurrentFilePath;
        ...
        public void initialize(InputSplit combineSplit, TaskAttemptContext context)
                throws IOException, InterruptedException {
            assert this.fileSplitIsValid(context);
            mCurrentFilePath = mFileSplit.getPath().toString();
            this.mDelegate.initialize(this.mFileSplit, context);
        }
        ...
        public static String getCurrentFilePath() {
            return mCurrentFilePath;
        }
        ...
    

    Then, in your Mapper, use this:

    String currentFileName = MyCombineFileRecordReaderWrapper.getCurrentFilePath();
    

    Hope I helped :-)

  • 2020-11-29 19:19

    Note that on Hadoop 2.4 and greater, using the old API, this method produces a null value:

    private String fileName;

    public void configure(JobConf job)
    {
        fileName = job.get("map.input.file");
    }
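
    A possible explanation (an assumption, not something I have verified): Hadoop 2.x renamed map.input.file to mapreduce.map.input.file, so reading the new property name, with the old one as a fallback, may still work:

    private String fileName;

    public void configure(JobConf job)
    {
        // Assumption: newer Hadoop versions expose the path as mapreduce.map.input.file;
        // fall back to the deprecated map.input.file name for older releases.
        fileName = job.get("mapreduce.map.input.file", job.get("map.input.file"));
    }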
    

    Alternatively, you can use the Reporter object passed to your map function to get the InputSplit and cast it to a FileSplit to retrieve the filename:

    public void map(LongWritable offset, Text record,
            OutputCollector<NullWritable, Text> out, Reporter rptr)
            throws IOException {
    
        FileSplit fsplit = (FileSplit) rptr.getInputSplit();
        String inputFileName = fsplit.getPath().getName();
        ....
    }
    
  • 2020-11-29 19:19

    For the org.apache.hadoop.mapred package, the map function signature should be:

    map(Object, Object, OutputCollector, Reporter) 
    

    So, to get the file name inside the map function, you could use the Reporter object like this:

    String fileName = ((FileSplit) reporter.getInputSplit()).getPath().getName();
    