How to implement string matching algorithm with Hadoop?

Submitted by 跟風遠走 on 2019-12-13 09:10:19

Question


I want to implement a string-matching (Boyer-Moore) algorithm using Hadoop. I have just started using Hadoop, so I have no idea how to write a Hadoop program in Java.

All the sample programs I have seen so far are word-counting examples, and I couldn't find any sample programs for string matching.

I tried searching for tutorials that teach how to write Hadoop applications in Java, but couldn't find any. Can you suggest some tutorials where I can learn how to write Hadoop applications in Java?

Thanks in advance.


Answer 1:


I haven't tested the code below, but it should get you started. I have used the BoyerMoore implementation available here.

What the code below does:

The goal is to search for a pattern in an input document. The BoyerMoore class is initialized in the setup method with the pattern set in the configuration.

The mapper receives one line at a time and uses the BoyerMoore instance to search it for the pattern. If a match is found, we write the offset out via the context.

There is no need for a reducer here. If the pattern is found multiple times across different mappers, the output will contain multiple offsets (one per mapper).

package hadoop.boyermoore;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class BoyerMooreImpl {

      public static class TokenizerMapper
           extends Mapper<Object, Text, Text, IntWritable> {
        private BoyerMoore boyerMoore;
        private final Text offsetFound = new Text("offset");

        @Override
        public void map(Object key, Text value, Context context
                        ) throws IOException, InterruptedException {
          // Tokenize the input line and search each token for the pattern.
          StringTokenizer itr = new StringTokenizer(value.toString());
          while (itr.hasMoreTokens()) {
              String token = itr.nextToken();
              int offset = boyerMoore.search(token);
              // search returns the text length when there is no match,
              // so any smaller value is a genuine match offset.
              if (token.length() != offset) {
                  context.write(offsetFound, new IntWritable(offset));
              }
          }
        }

        @Override
        public final void setup(Context context) {
            // Build the matcher once per task from the pattern in the job configuration.
            if (boyerMoore == null)
                boyerMoore = new BoyerMoore(context.getConfiguration().get("pattern"));
        }
      }

      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("pattern", "your_pattern_here");
        Job job = Job.getInstance(conf, "BoyerMoore");
        job.setJarByClass(BoyerMooreImpl.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setNumReduceTasks(0);            // map-only job: no reducer needed
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
}
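
The code above relies on a BoyerMoore class that the answer links to rather than shows. As a rough stand-in, here is a minimal sketch of a Boyer-Moore matcher using only the bad-character rule, written to match the contract the mapper assumes: the constructor takes the pattern, and search(text) returns the offset of the first match or text.length() when there is no match. This sketch is an assumption for illustration; the implementation linked in the answer may differ in its details.

package hadoop.boyermoore;

// Minimal Boyer-Moore matcher (bad-character rule only), used here as a
// hypothetical stand-in for the class linked in the answer.
public class BoyerMoore {
    private static final int R = 256;        // alphabet size (extended ASCII assumed)
    private final int[] right = new int[R];  // rightmost index of each character in the pattern
    private final String pattern;

    public BoyerMoore(String pattern) {
        this.pattern = pattern;
        for (int c = 0; c < R; c++) right[c] = -1;
        for (int j = 0; j < pattern.length(); j++) right[pattern.charAt(j)] = j;
    }

    // Returns the offset of the first occurrence of the pattern in text,
    // or text.length() if the pattern does not occur.
    public int search(String text) {
        int m = pattern.length();
        int n = text.length();
        int skip;
        for (int i = 0; i <= n - m; i += skip) {
            skip = 0;
            // Compare pattern against the text right to left.
            for (int j = m - 1; j >= 0; j--) {
                if (pattern.charAt(j) != text.charAt(i + j)) {
                    skip = Math.max(1, j - right[text.charAt(i + j)]);
                    break;
                }
            }
            if (skip == 0) return i;  // full match at offset i
        }
        return n;                     // no match
    }
}

With such a class on the classpath, the job can be packaged into a jar and launched in the usual way, for example: hadoop jar boyermoore.jar hadoop.boyermoore.BoyerMooreImpl /input /output (the jar name and paths here are only placeholders).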



Answer 2:


I don't know if this is the correct way to run the algorithm in parallel, but this is what I figured out:

import java.io.IOException;
import java.util.*;

import org.apache.hadoop.conf.*;
import org.apache.hadoop.fs.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.*;
import org.apache.hadoop.mapreduce.lib.input.*;
import org.apache.hadoop.mapreduce.lib.output.*;
import org.apache.hadoop.util.*;

public class StringMatching extends Configured implements Tool {

  public static void main(String args[]) throws Exception {
      long start = System.currentTimeMillis();
      int res = ToolRunner.run(new StringMatching(), args);
      long end = System.currentTimeMillis();
      System.out.println("Elapsed time (ms): " + (end - start));
      System.exit(res);
  }

  public int run(String[] args) throws Exception {
    Path inputPath = new Path(args[0]);
    Path outputPath = new Path(args[1]);

    Configuration conf = getConf();
    Job job = Job.getInstance(conf, getClass().getName());

    FileInputFormat.setInputPaths(job, inputPath);
    FileOutputFormat.setOutputPath(job, outputPath);

    job.setJobName("StringMatching");
    job.setJarByClass(StringMatching.class);
    job.setInputFormatClass(TextInputFormat.class);
    job.setOutputFormatClass(TextOutputFormat.class);
    job.setMapOutputKeyClass(Text.class);
    job.setMapOutputValueClass(IntWritable.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);

    job.setMapperClass(Map.class);
    job.setCombinerClass(Reduce.class);
    job.setReducerClass(Reduce.class);

    return job.waitForCompletion(true) ? 0 : 1;
  }

  // Word-count-style mapper: emits each token with a count of 1.
  public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    @Override
    public void map(LongWritable key, Text value,
                    Context context) throws IOException, InterruptedException {
      String line = value.toString();
      StringTokenizer tokenizer = new StringTokenizer(line);
      while (tokenizer.hasMoreTokens()) {
        word.set(tokenizer.nextToken());
        context.write(word, one);
      }
    }
  }

  // For each distinct word, run Boyer-Moore and emit 1 if the (hard-coded)
  // pattern "abc" occurs in it, 0 otherwise. This relies on a BoyerMoore
  // helper class with a findPattern(text, pattern) method that the answer
  // does not show.
  public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
        BoyerMoore bm = new BoyerMoore();
        boolean flag = bm.findPattern(key.toString().trim().toLowerCase(), "abc");
        if (flag) {
            context.write(key, new IntWritable(1));
        } else {
            context.write(key, new IntWritable(0));
        }
    }
  }

}
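
This answer likewise assumes a BoyerMoore helper class, this time with a no-argument constructor and a boolean findPattern(text, pattern) method, which is not shown. As one hypothetical way to make the job compile, a thin adapter with that signature could delegate to the bad-character-rule sketch given under the first answer; the author's actual class may well differ.

// Hypothetical adapter matching the reducer's expectations. It delegates to
// the hadoop.boyermoore.BoyerMoore sketch shown under the first answer, whose
// search(text) returns text.length() when the pattern is absent.
public class BoyerMoore {
    public boolean findPattern(String text, String pattern) {
        return new hadoop.boyermoore.BoyerMoore(pattern).search(text) != text.length();
    }
}

Constructing a new matcher on every call is wasteful, but it keeps the stand-in short; since the pattern is fixed here, the matcher could be built once and reused.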

I'm using AWS (Amazon Web Services), so I can select from the console the number of nodes I want my program to run on simultaneously. I'm therefore assuming that the map and reduce methods I have used should be enough to run the Boyer-Moore string-matching algorithm in parallel.



Source: https://stackoverflow.com/questions/33685079/how-to-implement-string-matching-algorithm-with-hadoop
