Once a Hadoop job is packaged as a jar, it can be submitted on the server with the hadoop jar command. But during development you can't keep debugging that way; ideally you should be able to debug directly in your local IDE.
Setting up a development and debugging environment on Windows is, frankly, unpleasant. To save trouble later, here is a write-up of how to configure and debug a Hadoop development environment on Windows.
First, download a Hadoop distribution to your local machine with the same version as the server.
Then set a HADOOP_HOME environment variable pointing at it.
You also need to download the Windows native tools package, which provides two files: winutils.exe and hadoop.dll. When downloading, match the bitness: if you installed a 32-bit JDK, download the 32-bit package.
Copy the two downloaded files into the %HADOOP_HOME%\bin directory.
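Before going further, it can be worth a quick sanity check that these pieces are in place. The following is a hypothetical helper, not part of the original setup; it only verifies that HADOOP_HOME is set and that winutils.exe sits where Hadoop will look for it:

import java.io.File;

/** Hypothetical sanity check: confirm HADOOP_HOME and winutils.exe are in place. */
public class EnvCheck {
    public static void main(String[] args) {
        String hadoopHome = System.getenv("HADOOP_HOME");
        if (hadoopHome == null) {
            System.err.println("HADOOP_HOME is not set");
            return;
        }
        File winutils = new File(hadoopHome, "bin\\winutils.exe");
        System.out.println(winutils.getPath() + (winutils.exists() ? " : found" : " : missing"));
    }
}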
That is still not the end of the configuration: beyond the settings above, different scenarios need different extra setup (exasperating, I know).
To keep things clear, below are the four scenarios for local Hadoop development and debugging.
1. Accessing HDFS data from the local machine
If you only need to access remote HDFS directories and files, %HADOOP_HOME% plus %HADOOP_HOME%\bin\winutils.exe is enough.
You will, however, hit permission errors at run time, because the user name on the Hadoop server is not the same as your local Windows user name.
There are two ways around this.
Method 1: add the following to hdfs-site.xml and restart HDFS:
<property>
    <name>dfs.permissions</name>
    <value>false</value>
</property>
Method 2: set an environment variable HADOOP_USER_NAME = your Hadoop user name.
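If you would rather not touch system settings, the same name can, as far as I know, also be supplied as a JVM system property, since Hadoop's login code reads HADOOP_USER_NAME from either place; set it before the first FileSystem call ("rihai" here is just this post's example user):

// Must run before the FileSystem is first initialized in this JVM.
System.setProperty("HADOOP_USER_NAME", "rihai");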
There is also the question of how paths resolve: a path without a scheme, such as "/user/rihai/logdata/tmp", resolves against the local file system by default.
To make it resolve against HDFS, place a core-site.xml in the project's resources directory:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/rihai/hadoop/tmp</value>
    </property>
</configuration>
Or specify it on the Configuration in code:
conf.set("fs.default.name", "hdfs://master:9000");
2. Local MapReduce with a local directory as the data source
To make this concrete, here is a simple example. My IDE is IntelliJ IDEA; start by creating a Maven project.
The pom.xml:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.rihai.hadoop</groupId>
    <artifactId>demo</artifactId>
    <packaging>pom</packaging>
    <version>1.0-SNAPSHOT</version>
    <modules>
        <module>hdfs</module>
        <module>job</module>
    </modules>
    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>2.6.0</version>
        </dependency>
    </dependencies>
</project>
The code is as follows:

package com.rihai.hadoop.job;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

import java.io.IOException;
import java.util.StringTokenizer;

/**
 * Created by rihaizhang on 2016/9/21.
 */
public class MrDebug {

    /**
     * Mapper: splits each input line into tokens and emits (word, 1).
     */
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            StringTokenizer stk = new StringTokenizer(value.toString());
            while (stk.hasMoreTokens()) {
                word.set(stk.nextToken());
                context.write(word, one);
            }
        }
    }

    /**
     * Reducer: sums the counts for each word.
     */
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

        private final static IntWritable sum = new IntWritable(1);

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
            int temp = 0;
            for (IntWritable val : values) {
                temp += val.get();
            }
            sum.set(temp);
            context.write(key, sum);
        }
    }

    /**
     * Deletes the output path if it already exists, so the job can be rerun.
     */
    private static void deleteOutput(Configuration conf, String path) throws IOException {
        try (FileSystem fs = FileSystem.get(conf)) {
            Path outPath = new Path(path);
            if (fs.exists(outPath)) {
                boolean delResult = fs.delete(outPath, true);
                if (delResult) {
                    System.out.println(path + " deleted.");
                } else {
                    System.out.println(path + " delete failed.");
                }
            }
        }
    }

    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        System.out.println("begin");
        Configuration conf = new Configuration();

        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length < 2) {
            System.err.println("Usage: wordcount <in> [<in>...] <out>");
            System.exit(2);
        }

        // Clear the previous run's output before submitting.
        deleteOutput(conf, otherArgs[otherArgs.length - 1]);

        Job job = Job.getInstance(conf, "mrdebug");
        job.setJarByClass(MrDebug.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // All arguments except the last are inputs; the last is the output directory.
        for (int i = 0; i < otherArgs.length - 1; i++) {
            FileInputFormat.addInputPath(job, new Path(otherArgs[i]));
        }
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[otherArgs.length - 1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
Add a Log4j configuration (log4j.properties) under resources, otherwise you only get a warning that Log4j is not configured and no job logs:
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{ABSOLUTE} | %-5.5p | %-16.16t | %-22.22c{1} | %.32C %4L | %m%n
Before debugging, you must also append %HADOOP_HOME%\bin to the PATH environment variable, because the two tool-package files above are loaded from there. Note: the IDE must be restarted to pick up the new environment variables.
Set the program arguments to local paths (here E:\hello.txt E:\output\):
Then run it:

D:\Java\jdk1.7.0_65\bin\java -Didea.launcher.port=7533 "-Didea.launcher.bin.path=C:\Program Files (x86)\JetBrains\IntelliJ IDEA 2016.2\bin" -Dfile.encoding=UTF-8 -classpath "D:\Java\jdk1.7.0_65\jre\lib\charsets.jar;D:\Java\jdk1.7.0_65\jre\lib\deploy.jar;D:\Java\jdk1.7.0_65\jre\lib\ext\access-bridge-32.jar;D:\Java\jdk1.7.0_65\jre\lib\ext\dnsns.jar;D:\Java\jdk1.7.0_65\jre\lib\ext\jaccess.jar;D:\Java\jdk1.7.0_65\jre\lib\ext\localedata.jar;D:\Java\jdk1.7.0_65\jre\lib\ext\sunec.jar;D:\Java\jdk1.7.0_65\jre\lib\ext\sunjce_provider.jar;D:\Java\jdk1.7.0_65\jre\lib\ext\sunmscapi.jar;D:\Java\jdk1.7.0_65\jre\lib\ext\sunpkcs11.jar;D:\Java\jdk1.7.0_65\jre\lib\ext\zipfs.jar;D:\Java\jdk1.7.0_65\jre\lib\javaws.jar;D:\Java\jdk1.7.0_65\jre\lib\jce.jar;D:\Java\jdk1.7.0_65\jre\lib\jfr.jar;D:\Java\jdk1.7.0_65\jre\lib\jfxrt.jar;D:\Java\jdk1.7.0_65\jre\lib\jsse.jar;D:\Java\jdk1.7.0_65\jre\lib\management-agent.jar;D:\Java\jdk1.7.0_65\jre\lib\plugin.jar;D:\Java\jdk1.7.0_65\jre\lib\resources.jar;D:\Java\jdk1.7.0_65\jre\lib\rt.jar;D:\IdeaProjects\hadoop-demo\job\target\classes;D:\Java\maven\org\apache\hadoop\hadoop-client\2.6.0\hadoop-client-2.6.0.jar;D:\Java\maven\org\apache\hadoop\hadoop-common\2.6.0\hadoop-common-2.6.0.jar;D:\Java\maven\org\apache\hadoop\hadoop-annotations\2.6.0\hadoop-annotations-2.6.0.jar;D:\Java\maven\com\google\guava\guava\11.0.2\guava-11.0.2.jar;D:\Java\maven\com\google\code\findbugs\jsr305\1.3.9\jsr305-1.3.9.jar;D:\Java\maven\commons-cli\commons-cli\1.2\commons-cli-1.2.jar;D:\Java\maven\org\apache\commons\commons-math3\3.1.1\commons-math3-3.1.1.jar;D:\Java\maven\xmlenc\xmlenc\0.52\xmlenc-0.52.jar;D:\Java\maven\commons-httpclient\commons-httpclient\3.1\commons-httpclient-3.1.jar;D:\Java\maven\commons-logging\commons-logging\1.1.3\commons-logging-1.1.3.jar;D:\Java\maven\commons-codec\commons-codec\1.4\commons-codec-1.4.jar;D:\Java\maven\commons-io\commons-io\2.4\commons-io-2.4.jar;D:\Java\maven\commons-net\commons-net\3.1\commons-net-3.1.jar;D:\Java\maven\commons-collections\commons-collections\3.2.1\commons-collections-3.2.1.jar;D:\Java\maven\log4j\log4j\1.2.17\log4j-1.2.17.jar;D:\Java\maven\commons-lang\commons-lang\2.6\commons-lang-2.6.jar;D:\Java\maven\commons-configuration\commons-configuration\1.6\commons-configuration-1.6.jar;D:\Java\maven\commons-digester\commons-digester\1.8\commons-digester-1.8.jar;D:\Java\maven\commons-beanutils\commons-beanutils\1.7.0\commons-beanutils-1.7.0.jar;D:\Java\maven\commons-beanutils\commons-beanutils-core\1.8.0\commons-beanutils-core-1.8.0.jar;D:\Java\maven\org\slf4j\slf4j-api\1.7.5\slf4j-api-1.7.5.jar;D:\Java\maven\org\slf4j\slf4j-log4j12\1.7.5\slf4j-log4j12-1.7.5.jar;D:\Java\maven\org\codehaus\jackson\jackson-core-asl\1.9.13\jackson-core-asl-1.9.13.jar;D:\Java\maven\org\codehaus\jackson\jackson-mapper-asl\1.9.13\jackson-mapper-asl-1.9.13.jar;D:\Java\maven\org\apache\avro\avro\1.7.4\avro-1.7.4.jar;D:\Java\maven\com\thoughtworks\paranamer\paranamer\2.3\paranamer-2.3.jar;D:\Java\maven\org\xerial\snappy\snappy-java\1.0.4.1\snappy-java-1.0.4.1.jar;D:\Java\maven\org\apache\commons\commons-compress\1.4.1\commons-compress-1.4.1.jar;D:\Java\maven\com\google\protobuf\protobuf-java\2.5.0\protobuf-java-2.5.0.jar;D:\Java\maven\com\google\code\gson\gson\2.2.4\gson-2.2.4.jar;D:\Java\maven\org\apache\hadoop\hadoop-auth\2.6.0\hadoop-auth-2.6.0.jar;D:\Java\maven\org\apache\httpcomponents\httpclient\4.2.5\httpclient-4.2.5.jar;D:\Java\maven\org\apache\httpcomponents\httpcore\4.2.4\httpcore-4.2.4.jar;D:\Java\maven\org\apache\directory\server\apacheds-kerberos-codec\2.0.0-M15\apacheds-kerberos-codec-2.0.0-M15.jar;D:\Java\maven\org\apache\directory\server\apacheds-i18n\2.0.0-M15\apacheds-i18n-2.0.0-M15.jar;D:\Java\maven\org\apache\directory\api\api-asn1-api\1.0.0-M20\api-asn1-api-1.0.0-M20.jar;D:\Java\maven\org\apache\directory\api\api-util\1.0.0-M20\api-util-1.0.0-M20.jar;D:\Java\maven\org\apache\zookeeper\zookeeper\3.4.6\zookeeper-3.4.6.jar;D:\Java\maven\org\apache\curator\curator-framework\2.6.0\curator-framework-2.6.0.jar;D:\Java\maven\org\apache\curator\curator-client\2.6.0\curator-client-2.6.0.jar;D:\Java\maven\org\apache\curator\curator-recipes\2.6.0\curator-recipes-2.6.0.jar;D:\Java\maven\org\htrace\htrace-core\3.0.4\htrace-core-3.0.4.jar;D:\Java\maven\io\netty\netty\3.6.2.Final\netty-3.6.2.Final.jar;D:\Java\maven\org\tukaani\xz\1.0\xz-1.0.jar;D:\Java\maven\org\apache\hadoop\hadoop-hdfs\2.6.0\hadoop-hdfs-2.6.0.jar;D:\Java\maven\org\mortbay\jetty\jetty-util\6.1.26\jetty-util-6.1.26.jar;D:\Java\maven\xerces\xercesImpl\2.9.1\xercesImpl-2.9.1.jar;D:\Java\maven\xml-apis\xml-apis\1.3.04\xml-apis-1.3.04.jar;D:\Java\maven\org\apache\hadoop\hadoop-mapreduce-client-app\2.6.0\hadoop-mapreduce-client-app-2.6.0.jar;D:\Java\maven\org\apache\hadoop\hadoop-mapreduce-client-common\2.6.0\hadoop-mapreduce-client-common-2.6.0.jar;D:\Java\maven\org\apache\hadoop\hadoop-yarn-common\2.6.0\hadoop-yarn-common-2.6.0.jar;D:\Java\maven\org\apache\hadoop\hadoop-yarn-client\2.6.0\hadoop-yarn-client-2.6.0.jar;D:\Java\maven\org\apache\hadoop\hadoop-yarn-api\2.6.0\hadoop-yarn-api-2.6.0.jar;D:\Java\maven\org\apache\hadoop\hadoop-mapreduce-client-core\2.6.0\hadoop-mapreduce-client-core-2.6.0.jar;D:\Java\maven\org\apache\hadoop\hadoop-yarn-server-common\2.6.0\hadoop-yarn-server-common-2.6.0.jar;D:\Java\maven\org\fusesource\leveldbjni\leveldbjni-all\1.8\leveldbjni-all-1.8.jar;D:\Java\maven\org\apache\hadoop\hadoop-mapreduce-client-shuffle\2.6.0\hadoop-mapreduce-client-shuffle-2.6.0.jar;D:\Java\maven\javax\xml\bind\jaxb-api\2.2.2\jaxb-api-2.2.2.jar;D:\Java\maven\javax\xml\stream\stax-api\1.0-2\stax-api-1.0-2.jar;D:\Java\maven\javax\activation\activation\1.1\activation-1.1.jar;D:\Java\maven\javax\servlet\servlet-api\2.5\servlet-api-2.5.jar;D:\Java\maven\com\sun\jersey\jersey-core\1.9\jersey-core-1.9.jar;D:\Java\maven\com\sun\jersey\jersey-client\1.9\jersey-client-1.9.jar;D:\Java\maven\org\codehaus\jackson\jackson-jaxrs\1.9.13\jackson-jaxrs-1.9.13.jar;D:\Java\maven\org\codehaus\jackson\jackson-xc\1.9.13\jackson-xc-1.9.13.jar;D:\Java\maven\org\apache\hadoop\hadoop-mapreduce-client-jobclient\2.6.0\hadoop-mapreduce-client-jobclient-2.6.0.jar;C:\Program Files (x86)\JetBrains\IntelliJ IDEA 2016.2\lib\idea_rt.jar" com.intellij.rt.execution.application.AppMain com.rihai.hadoop.job.MrDebug E:\hello.txt E:\output\
begin
E:\output\ deleted.
10:51:04,371 | INFO | main | deprecation | apache.hadoop.conf.Configuration 1049 | session.id is deprecated. Instead, use dfs.metrics.session-id
10:51:04,371 | INFO | main | JvmMetrics | he.hadoop.metrics.jvm.JvmMetrics 76 | Initializing JVM Metrics with processName=JobTracker, sessionId=
10:51:04,465 | WARN | main | JobSubmitter | he.hadoop.mapreduce.JobSubmitter 261 | No job jar file set. User classes may not be found. See Job or Job#setJar(String).
10:51:04,481 | INFO | main | FileInputFormat | reduce.lib.input.FileInputFormat 281 | Total input paths to process : 1
10:51:04,496 | INFO | main | JobSubmitter | he.hadoop.mapreduce.JobSubmitter 494 | number of splits:1
10:51:04,559 | INFO | main | JobSubmitter | he.hadoop.mapreduce.JobSubmitter 583 | Submitting tokens for job: job_local742415139_0001
10:51:04,668 | INFO | main | Job | org.apache.hadoop.mapreduce.Job 1300 | The url to track the job: http://localhost:8080/
10:51:04,668 | INFO | main | Job | org.apache.hadoop.mapreduce.Job 1345 | Running job: job_local742415139_0001
10:51:04,668 | INFO | Thread-3 | LocalJobRunner | hadoop.mapred.LocalJobRunner$Job 471 | OutputCommitter set in config null
10:51:04,668 | INFO | Thread-3 | LocalJobRunner | hadoop.mapred.LocalJobRunner$Job 489 | OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
10:51:04,699 | INFO | Thread-3 | LocalJobRunner | hadoop.mapred.LocalJobRunner$Job 448 | Waiting for map tasks
10:51:04,699 | INFO | Task Executor #0 | LocalJobRunner | calJobRunner$Job$MapTaskRunnable 224 | Starting task: attempt_local742415139_0001_m_000000_0
10:51:04,715 | INFO | Task Executor #0 | ProcfsBasedProcessTree | yarn.util.ProcfsBasedProcessTree 181 | ProcfsBasedProcessTree currently is supported only on Linux.
10:51:04,731 | INFO | Task Executor #0 | Task | org.apache.hadoop.mapred.Task 587 | Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@102ead5
10:51:04,746 | INFO | Task Executor #0 | MapTask | org.apache.hadoop.mapred.MapTask 753 | Processing split: file:/E:/hello.txt:0+43
10:51:04,809 | INFO | Task Executor #0 | MapTask | p.mapred.MapTask$MapOutputBuffer 1202 | (EQUATOR) 0 kvi 26214396(104857584)
10:51:04,809 | INFO | Task Executor #0 | MapTask | p.mapred.MapTask$MapOutputBuffer 995 | mapreduce.task.io.sort.mb: 100
10:51:04,809 | INFO | Task Executor #0 | MapTask | p.mapred.MapTask$MapOutputBuffer 996 | soft limit at 83886080
10:51:04,809 | INFO | Task Executor #0 | MapTask | p.mapred.MapTask$MapOutputBuffer 997 | bufstart = 0; bufvoid = 104857600
10:51:04,809 | INFO | Task Executor #0 | MapTask | p.mapred.MapTask$MapOutputBuffer 998 | kvstart = 26214396; length = 6553600
10:51:04,824 | INFO | Task Executor #0 | MapTask | org.apache.hadoop.mapred.MapTask 402 | Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
10:51:04,824 | INFO | Task Executor #0 | LocalJobRunner | hadoop.mapred.LocalJobRunner$Job 591 |
10:51:04,824 | INFO | Task Executor #0 | MapTask | p.mapred.MapTask$MapOutputBuffer 1457 | Starting flush of map output
10:51:04,824 | INFO | Task Executor #0 | MapTask | p.mapred.MapTask$MapOutputBuffer 1475 | Spilling map output
10:51:04,824 | INFO | Task Executor #0 | MapTask | p.mapred.MapTask$MapOutputBuffer 1476 | bufstart = 0; bufend = 70; bufvoid = 104857600
10:51:04,824 | INFO | Task Executor #0 | MapTask | p.mapred.MapTask$MapOutputBuffer 1478 | kvstart = 26214396(104857584); kvend = 26214372(104857488); length = 25/6553600
10:51:04,856 | INFO | Task Executor #0 | MapTask | p.mapred.MapTask$MapOutputBuffer 1660 | Finished spill 0
10:51:04,856 | INFO | Task Executor #0 | Task | org.apache.hadoop.mapred.Task 1001 | Task:attempt_local742415139_0001_m_000000_0 is done. And is in the process of committing
10:51:04,856 | INFO | Task Executor #0 | LocalJobRunner | hadoop.mapred.LocalJobRunner$Job 591 | map
10:51:04,856 | INFO | Task Executor #0 | Task | org.apache.hadoop.mapred.Task 1121 | Task 'attempt_local742415139_0001_m_000000_0' done.
10:51:04,856 | INFO | Task Executor #0 | LocalJobRunner | calJobRunner$Job$MapTaskRunnable 249 | Finishing task: attempt_local742415139_0001_m_000000_0
10:51:04,856 | INFO | Thread-3 | LocalJobRunner | hadoop.mapred.LocalJobRunner$Job 456 | map task executor complete.
10:51:04,856 | INFO | Thread-3 | LocalJobRunner | hadoop.mapred.LocalJobRunner$Job 448 | Waiting for reduce tasks
10:51:04,856 | INFO | pool-3-thread-1 | LocalJobRunner | JobRunner$Job$ReduceTaskRunnable 302 | Starting task: attempt_local742415139_0001_r_000000_0
10:51:04,871 | INFO | pool-3-thread-1 | ProcfsBasedProcessTree | yarn.util.ProcfsBasedProcessTree 181 | ProcfsBasedProcessTree currently is supported only on Linux.
10:51:04,887 | INFO | pool-3-thread-1 | Task | org.apache.hadoop.mapred.Task 587 | Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@1b77ca0
10:51:04,887 | INFO | pool-3-thread-1 | ReduceTask | .apache.hadoop.mapred.ReduceTask 362 | Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@1c17ff9
10:51:04,903 | INFO | pool-3-thread-1 | MergeManagerImpl | uce.task.reduce.MergeManagerImpl 196 | MergerManager: memoryLimit=181665792, maxSingleShuffleLimit=45416448, mergeThreshold=119899424, ioSortFactor=10, memToMemMergeOutputsThreshold=10
10:51:04,903 | INFO | ompletion Events | EventFetcher | preduce.task.reduce.EventFetcher 61 | attempt_local742415139_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
10:51:04,918 | INFO | localfetcher#1 | LocalFetcher | preduce.task.reduce.LocalFetcher 141 | localfetcher#1 about to shuffle output of map attempt_local742415139_0001_m_000000_0 decomp: 38 len: 42 to MEMORY
10:51:04,934 | INFO | localfetcher#1 | InMemoryMapOutput | ce.task.reduce.InMemoryMapOutput 100 | Read 38 bytes from map-output for attempt_local742415139_0001_m_000000_0
10:51:04,949 | INFO | localfetcher#1 | MergeManagerImpl | uce.task.reduce.MergeManagerImpl 314 | closeInMemoryFile -> map-output of size: 38, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->38
10:51:04,949 | INFO | ompletion Events | EventFetcher | preduce.task.reduce.EventFetcher 76 | EventFetcher is interrupted.. Returning
10:51:04,949 | INFO | pool-3-thread-1 | LocalJobRunner | hadoop.mapred.LocalJobRunner$Job 591 | 1 / 1 copied.
10:51:04,949 | INFO | pool-3-thread-1 | MergeManagerImpl | uce.task.reduce.MergeManagerImpl 674 | finalMerge called with 1 in-memory map-outputs and 0 on-disk map-outputs
10:51:04,949 | INFO | pool-3-thread-1 | Merger | .hadoop.mapred.Merger$MergeQueue 597 | Merging 1 sorted segments
10:51:04,949 | INFO | pool-3-thread-1 | Merger | .hadoop.mapred.Merger$MergeQueue 696 | Down to the last merge-pass, with 1 segments left of total size: 30 bytes
10:51:04,965 | INFO | pool-3-thread-1 | MergeManagerImpl | uce.task.reduce.MergeManagerImpl 751 | Merged 1 segments, 38 bytes to disk to satisfy reduce memory limit
10:51:04,965 | INFO | pool-3-thread-1 | MergeManagerImpl | uce.task.reduce.MergeManagerImpl 781 | Merging 1 files, 42 bytes from disk
10:51:04,965 | INFO | pool-3-thread-1 | MergeManagerImpl | uce.task.reduce.MergeManagerImpl 796 | Merging 0 segments, 0 bytes from memory into reduce
10:51:04,965 | INFO | pool-3-thread-1 | Merger | .hadoop.mapred.Merger$MergeQueue 597 | Merging 1 sorted segments
10:51:04,965 | INFO | pool-3-thread-1 | Merger | .hadoop.mapred.Merger$MergeQueue 696 | Down to the last merge-pass, with 1 segments left of total size: 30 bytes
10:51:04,965 | INFO | pool-3-thread-1 | LocalJobRunner | hadoop.mapred.LocalJobRunner$Job 591 | 1 / 1 copied.
10:51:04,965 | INFO | pool-3-thread-1 | deprecation | apache.hadoop.conf.Configuration 1049 | mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
10:51:04,965 | INFO | pool-3-thread-1 | Task | org.apache.hadoop.mapred.Task 1001 | Task:attempt_local742415139_0001_r_000000_0 is done. And is in the process of committing
10:51:04,965 | INFO | pool-3-thread-1 | LocalJobRunner | hadoop.mapred.LocalJobRunner$Job 591 | 1 / 1 copied.
10:51:04,965 | INFO | pool-3-thread-1 | Task | org.apache.hadoop.mapred.Task 1162 | Task attempt_local742415139_0001_r_000000_0 is allowed to commit now
10:51:04,965 | INFO | pool-3-thread-1 | FileOutputCommitter | e.lib.output.FileOutputCommitter 439 | Saved output of task 'attempt_local742415139_0001_r_000000_0' to file:/E:/output/_temporary/0/task_local742415139_0001_r_000000
10:51:04,965 | INFO | pool-3-thread-1 | LocalJobRunner | hadoop.mapred.LocalJobRunner$Job 591 | reduce > reduce
10:51:04,965 | INFO | pool-3-thread-1 | Task | org.apache.hadoop.mapred.Task 1121 | Task 'attempt_local742415139_0001_r_000000_0' done.
10:51:04,965 | INFO | pool-3-thread-1 | LocalJobRunner | JobRunner$Job$ReduceTaskRunnable 325 | Finishing task: attempt_local742415139_0001_r_000000_0
10:51:04,965 | INFO | Thread-3 | LocalJobRunner | hadoop.mapred.LocalJobRunner$Job 456 | reduce task executor complete.
10:51:05,668 | INFO | main | Job | org.apache.hadoop.mapreduce.Job 1366 | Job job_local742415139_0001 running in uber mode : false
10:51:05,668 | INFO | main | Job | org.apache.hadoop.mapreduce.Job 1373 | map 100% reduce 100%
10:51:05,668 | INFO | main | Job | org.apache.hadoop.mapreduce.Job 1384 | Job job_local742415139_0001 completed successfully
10:51:05,668 | INFO | main | Job | org.apache.hadoop.mapreduce.Job 1391 | Counters: 33
File System Counters
    FILE: Number of bytes read=476
    FILE: Number of bytes written=508250
    FILE: Number of read operations=0
    FILE: Number of large read operations=0
    FILE: Number of write operations=0
Map-Reduce Framework
    Map input records=3
    Map output records=7
    Map output bytes=70
    Map output materialized bytes=42
    Input split bytes=83
    Combine input records=7
    Combine output records=3
    Reduce input groups=3
    Reduce shuffle bytes=42
    Reduce input records=3
    Reduce output records=3
    Spilled Records=6
    Shuffled Maps =1
    Failed Shuffles=0
    Merged Map outputs=1
    GC time elapsed (ms)=28
    CPU time spent (ms)=0
    Physical memory (bytes) snapshot=0
    Virtual memory (bytes) snapshot=0
    Total committed heap usage (bytes)=242360320
Shuffle Errors
    BAD_ID=0
    CONNECTION=0
    IO_ERROR=0
    WRONG_LENGTH=0
    WRONG_MAP=0
    WRONG_REDUCE=0
File Input Format Counters
    Bytes Read=43
File Output Format Counters
    Bytes Written=36
Process finished with exit code 0
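The word counts land in the output directory in the standard part-r-00000 file (the default name for the first reducer's output). A quick sketch to print it, using the example output path from above:

import java.io.BufferedReader;
import java.io.FileReader;

/** Minimal sketch: dump the first reducer's output file to the console. */
public class PrintResult {
    public static void main(String[] args) throws Exception {
        BufferedReader reader = new BufferedReader(new FileReader("E:\\output\\part-r-00000"));
        String line;
        while ((line = reader.readLine()) != null) {
            System.out.println(line);
        }
        reader.close();
    }
}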
3. Local MapReduce with a remote HDFS directory as the data source
The data now comes from HDFS, but the MapReduce still runs on the local machine, effectively a standalone MR, so we can debug directly in the IDE as before.
Add the following configuration to the code:
conf.set("fs.default.name", "hdfs://master:9000");
Set the program arguments to HDFS paths:
The run result:
4. Submitting MapReduce from the local machine to the remote Hadoop cluster
A real MapReduce job is a distributed task submitted to the Hadoop cluster, and it cannot be stepped through in the local IDE.
Add the following configuration to the code:
conf.set("fs.default.name", "hdfs://master:9000");
conf.set("mapreduce.framework.name", "yarn");
conf.set("yarn.resourcemanager.hostname", "master");
conf.set("mapreduce.app-submission.cross-platform","true");
conf.set("mapreduce.job.jar", "D:\\IdeaProjects\\hadoop-demo\\job\\target\\demo-job-1.0-SNAPSHOT.jar");
The code must also point at the final compiled jar, as the mapreduce.job.jar setting above shows.
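Equivalently, as the "No job jar file set... See Job or Job#setJar(String)" warning in the scenario 2 log hints, the jar can be set on the Job object instead of the Configuration. A sketch assuming the same build output; run mvn package first so the jar actually exists:

// Same effect as conf.set("mapreduce.job.jar", ...): names the jar to ship to the cluster.
job.setJar("D:\\IdeaProjects\\hadoop-demo\\job\\target\\demo-job-1.0-SNAPSHOT.jar");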
The run result:
Open the job management page at http://192.168.56.101:8088/.
You can see that this job was submitted to the remote cluster and ran there.
Summary: once the configuration scenarios above are clear, you can also place the corresponding Hadoop configuration files in the resources directory to switch flexibly between these debugging setups.
Source: https://www.cnblogs.com/zrhai/p/5863204.html
