Launch a MapReduce job from Eclipse

醉话见心 2020-12-09 13:21

I've written a MapReduce program in Java, which I can submit to a remote cluster running in distributed mode. Currently, I submit the job using the following steps:

3 Answers
  • 2020-12-09 13:29

I used the method from the following website to configure a Map/Reduce project of mine to run directly from Eclipse (without exporting the project as a JAR): Configuring Eclipse to run Hadoop Map/Reduce project

Note: If you decide to debug your program, your Mapper class and Reducer class won't be debuggable.

    Hope it helps. :)

  • 2020-12-09 13:36

What worked for me was exporting a runnable JAR (unlike a plain JAR, it specifies the class with the main method) and selecting the "Package required libraries into generated JAR" option. Choosing the "Extract required libraries..." option instead led to duplicate-entry errors; it also has to extract the class files from the dependency JARs, which in my case ultimately failed to resolve the ClassNotFoundException.

After that, you can just set the jar, as was suggested by Chris White. For Windows it would look like this: job.setJar("C:\\MyJar.jar");

If it helps anybody, I made a presentation on what I learned from creating a MapReduce project and running it on Hadoop 2.2.0 on Windows 7 (in Eclipse Luna).
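A minimal driver sketch of the approach above. The cluster addresses, job name, and I/O paths are placeholders, not values from the original answer; the one line the answer actually prescribes is the job.setJar(...) call pointing at the exported runnable JAR:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class SubmitFromEclipse {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical remote-cluster settings; replace with your own.
        conf.set("fs.defaultFS", "hdfs://namenode:8020");
        conf.set("mapreduce.framework.name", "yarn");

        Job job = Job.getInstance(conf, "my job");

        // Point Hadoop at the runnable JAR exported from Eclipse.
        // Note the double backslashes: backslashes must be escaped
        // inside a Java string literal.
        job.setJar("C:\\MyJar.jar");

        // ... setMapperClass / setReducerClass / output key-value types ...
        FileInputFormat.addInputPath(job, new Path("/input"));
        FileOutputFormat.setOutputPath(job, new Path("/output"));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

Because setJar takes an explicit path, the job is shipped to the cluster regardless of whether the classes are also on the local build classpath.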

  • 2020-12-09 13:41

If you're submitting the Hadoop job from within the Eclipse project that defines the job's classes, then you most probably have a classpath problem.

The job.setJarByClass(CountRows.class) call is finding the class file on the build classpath, not in CountRows.jar (which may or may not have been built yet, or may not be on the classpath at all).

You should be able to verify this by printing the result of job.getJar() after you call job.setJarByClass(...); if it doesn't display a jar filepath, then Hadoop has found the build-output class rather than the jarred class.
