Question
I can successfully kick off a Hadoop streaming job from the terminal, but I am looking for ways to start streaming jobs via an API, Eclipse, or some other means.
The closest I found was this post https://stackoverflow.com/questions/11564463/remotely-execute-hadoop-streaming-job but it has no answers!
Any ideas or suggestions would be welcome.
Answer 1:
Interesting question. I found a way to do this; hopefully it will help you too.
The first method should work on Hadoop 0.22:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.streaming.StreamJob;
import org.apache.hadoop.util.ToolRunner;

Configuration conf = new Configuration();
conf.set("fs.defaultFS", "hdfs://xxxxx:9000");        // NameNode URI
conf.set("mapred.job.tracker", "hdfs://xxxxx:9001");  // JobTracker address

StreamJob sj = new StreamJob();
try {
    // Same arguments you would pass to the streaming jar on the command line
    ToolRunner.run(conf, sj, new String[] {
        "-D", "stream.tmpdir=c:\\",
        "-mapper", "/path/to/mapper.py",
        "-reducer", "/path/to/reducer.py",
        "-input", "/path/to/input",
        "-output", "/path/to/output" });
} catch (Exception e) {
    e.printStackTrace();
}
I also found this Java wrapper which you should be able to run.
Answer 2:
Take a look at Apache Oozie: once you have defined your job in an XML workflow, you can launch it with an HTTP POST to the Oozie server (a sketch follows below).
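For illustration, here is a minimal sketch of that HTTP POST using plain Java. It assumes an Oozie server at the placeholder oozie-host:11000 and a streaming workflow already deployed to a placeholder HDFS path; it posts a Hadoop-style configuration XML to Oozie's jobs endpoint with action=start. The host, user, and paths are assumptions for your own cluster, not values from the question.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class OozieSubmit {
    public static void main(String[] args) throws Exception {
        // Hadoop-style configuration XML; host, user and workflow path are placeholders.
        String conf =
            "<configuration>"
          + "<property><name>user.name</name><value>training</value></property>"
          + "<property><name>oozie.wf.application.path</name>"
          + "<value>hdfs://xxxxx:9000/user/training/streaming-wf</value></property>"
          + "</configuration>";

        // POST to the Oozie jobs endpoint; action=start submits and starts the workflow.
        URL url = new URL("http://oozie-host:11000/oozie/v1/jobs?action=start");
        HttpURLConnection con = (HttpURLConnection) url.openConnection();
        con.setRequestMethod("POST");
        con.setRequestProperty("Content-Type", "application/xml;charset=UTF-8");
        con.setDoOutput(true);
        try (OutputStream out = con.getOutputStream()) {
            out.write(conf.getBytes(StandardCharsets.UTF_8));
        }

        // On success Oozie returns HTTP 201 and a JSON body containing the job id.
        System.out.println("HTTP " + con.getResponseCode());
    }
}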
Answer 3:
When the Hadoop streaming job is run as
hadoop jar /home/training/Installations/hadoop-1.0.3/contrib/streaming/hadoop-streaming-1.0.3.jar -input input4 -output output4 -mapper /home/training/Code/Streaming/max_temperature_map.rb -reducer /home/training/Code/Streaming/max_temperature_reduce.rb
then org.apache.hadoop.streaming.HadoopStreaming is executed; it is declared as the main class in the MANIFEST.MF of hadoop-streaming-1.0.3.jar. Check the source of the org.apache.hadoop.streaming.HadoopStreaming class to learn the API details (a minimal sketch of invoking it directly is shown below).
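As a sketch of what that means in practice, you can pass the same arguments straight to that main class from your own Java code. This assumes hadoop-streaming-1.0.3.jar and the Hadoop core jars are on the classpath and that the cluster configuration is picked up from the usual *-site.xml files; the wrapper class name is made up for this example.

import org.apache.hadoop.streaming.HadoopStreaming;

public class StreamingLauncher {
    public static void main(String[] args) throws Exception {
        // Same arguments as the command line above, minus "hadoop jar <streaming jar>".
        // Note: HadoopStreaming.main may call System.exit() if the job fails.
        HadoopStreaming.main(new String[] {
            "-input", "input4",
            "-output", "output4",
            "-mapper", "/home/training/Code/Streaming/max_temperature_map.rb",
            "-reducer", "/home/training/Code/Streaming/max_temperature_reduce.rb"
        });
    }
}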
Source: https://stackoverflow.com/questions/14248800/alternative-ways-to-start-hadoop-streaming-job