Is it possible to compress json in hive external table? https://www.e-learn.cn/topic/4088118 <span>Is it possible to compress json in hive external table?</span> <span><span lang="" about="/user/162" typeof="schema:Person" property="schema:name" datatype="">冷暖自知</span></span> <span>2021-02-10 13:33:16</span> <div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"><h3>Question</h3><br /><p>I want to know how to compress JSON data in a Hive external table. How can it be done? I have created the external table like this:</p> <pre><code>CREATE EXTERNAL TABLE tweets (
  id BIGINT,
  created_at STRING,
  source STRING,
  favorited BOOLEAN
)
ROW FORMAT SERDE "com.cloudera.hive.serde.JSONSerDe"
LOCATION "/user/cloudera/tweets";
</code></pre> <p>and I have set the compression properties:</p> <pre><code>set mapred.output.compress=true;
set hive.exec.compress.output=true;
set mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec;
set io.compression.codecs=org.apache.hadoop.io.compress.GzipCodec;
</code></pre> <p>input file: test</p> <pre><code>{
  "id": 596344698102419451,
  "created_at": "MonApr0101: 32: 06+00002013",
  "source": "blank",
  "favorited": false
}
</code></pre> <p>After that I loaded my JSON file into the HDFS location <code>/user/cloudera/tweets</code>, but it is not compressed.</p> <p>Can someone please let me know how to enable compression for a Hive external table?</p> <p>Thanks in advance.</p> <br /><h3>Answer 1:</h3><br /><p>Just gzip your files and put them as is (<code>*.gz</code>) into the table location.</p> <br /><br /><br /><h3>Answer 2:</h3><br /><p>You would need to uncompress it before you can select it as JSON. 
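</p>
<p>As a concrete local sketch of Answer 1's approach (the file names here are hypothetical): the file just needs to be gzipped before being uploaded to the table's location, and Hive's text-based input format decompresses <code>.gz</code> files transparently at query time.</p>

```python
import gzip
import json
import shutil

# One record matching the sample in the question.
record = {
    "id": 596344698102419451,
    "created_at": "MonApr0101: 32: 06+00002013",
    "source": "blank",
    "favorited": False,
}

# Write the record as one JSON object per line (what the JSON SerDe expects).
with open("test.json", "w") as f:
    f.write(json.dumps(record) + "\n")

# Gzip the file; the resulting .gz is what goes into the table location.
with open("test.json", "rb") as src, gzip.open("test.json.gz", "wb") as dst:
    shutil.copyfileobj(src, dst)

# Then upload it as-is, e.g.: hdfs dfs -put test.json.gz /user/cloudera/tweets/
```

<p>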
You can't use both at the same time (the JSON SerDe and gzip).</p> <br /><br /><p>Source: <code>https://stackoverflow.com/questions/37654258/is-it-possible-to-compress-json-in-hive-external-table</code></p></div> <div class="field field--name-field-tags field--type-entity-reference field--label-above"> <div class="field--label">Tags</div> <div class="field--items"> <div class="field--item"><a href="/tag/hadoop" hreflang="zh-hans">Hadoop</a></div> <div class="field--item"><a href="/tag/hive" hreflang="zh-hans">Hive</a></div> <div class="field--item"><a href="/tag/cloudera" hreflang="zh-hans">Cloudera</a></div> <div class="field--item"><a href="/tag/hiveql" hreflang="zh-hans">hiveql</a></div> <div class="field--item"><a href="/tag/hadoop-streaming" hreflang="zh-hans">hadoop-streaming</a></div> </div> </div> Wed, 10 Feb 2021 05:33:16 +0000 冷暖自知 4088118 at https://www.e-learn.cn Hadoop: Error: java.lang.RuntimeException: Error in configuring object https://www.e-learn.cn/topic/4080259 <span>Hadoop: Error: java.lang.RuntimeException: Error in configuring object</span> <span><span lang="" about="/user/101" typeof="schema:Person" property="schema:name" datatype="">老子叫甜甜</span></span> <span>2021-02-08 13:02:55</span> <div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"><h3>Question</h3><br /><p>I have Hadoop installed and working, because I ran the word-count example and it worked great. Now I am trying to move on to some more realistic examples. My example is Example 2 (average salaries by department) from this website. 
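</p>
<p>For context on what this job is meant to produce: the mapper emits agency/salary pairs and the reducer averages them per agency. A minimal local sketch of the same logic (with made-up sample rows; the column layout is assumed from the mapper code below — agency in column index 3, a <code>$</code>-prefixed salary in index 5):</p>

```python
from collections import defaultdict

# Hypothetical sample rows in the CSV layout the mapper assumes:
# index 3 = agency, index 5 = annual salary with a leading '$'.
rows = [
    ["1", "Smith", "Jane", "Police", "2010", "$50000.00"],
    ["2", "Jones", "Bob", "Police", "2012", "$70000.00"],
    ["3", "Lee", "Ann", "Fire", "2011", "$40000.00"],
]

# Mapper step: emit (agency, salary) pairs.
pairs = [(row[3], float(row[5][1:].strip())) for row in rows]

# Reducer step: sum and count per agency, then average.
totals = defaultdict(lambda: [0.0, 0])
for agency, salary in pairs:
    totals[agency][0] += salary
    totals[agency][1] += 1

averages = {agency: s / n for agency, (s, n) in totals.items()}
print(averages)  # {'Police': 60000.0, 'Fire': 40000.0}
```

<p>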
I am using the same code and data from the website.</p> <p><strong>mapper.py</strong></p> <pre><code>#!usr/bin/Python
# mapper.py
import csv
import sys

reader = csv.reader(sys.stdin, delimiter=',')
writer = csv.writer(sys.stdout, delimiter='\t')

for row in reader:
    agency = row[3]
    annualSalary = row[5][1:].strip()
    print '{0}\t{1}'.format(agency, annualSalary)
</code></pre> <p><strong>reducer.py</strong></p> <pre><code>#!usr/bin/Python
# reducer.py
import csv
import sys

agency_salary_sum = 0
current_agency = None
n_occurences = 0

for row in sys.stdin:
    data_mapped = row.strip().split("\t")
    if len(data_mapped) != 2:
        # Something has gone wrong. Skip this line.
        continue
    agency, salary = data_mapped
    try:
        salary = float(salary)
    except:
        continue
    if agency == current_agency:
        agency_salary_sum += salary
        n_occurences += 1
    else:
        if current_agency:
            print '{0}\t{1}'.format(current_agency, agency_salary_sum/n_occurences)
        n_occurences = 0
        current_agency = agency
        agency_salary_sum = salary

if current_agency == agency:
    print '{0}\t{1}'.format(current_agency, agency_salary_sum / n_occurences)
</code></pre> <p>Following is the command I used to run my job:</p> <pre><code>$ bin/hadoop jar python/hadoop-streaming-2.7.0.jar \
    -file python/salaries/mapper.py -mapper python/salaries/mapper.py \
    -file python/salaries/reducer.py -reducer python/salaries/reducer.py \
    -input new-input/ -output output
</code></pre> <p>Following is the trace I am getting:</p> <pre><code>18/07/02 12:20:03 WARN streaming.StreamJob: -file option is deprecated, please use generic option -files instead. 
packageJobJar: [python/salaries/mapper.py, python/salaries/reducer.py, /tmp/hadoop-unjar4765938201803407949/] [] /tmp/streamjob3060992493780265460.jar tmpDir=null
18/07/02 12:20:05 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
18/07/02 12:20:06 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
18/07/02 12:20:07 INFO mapred.FileInputFormat: Total input paths to process : 2
18/07/02 12:20:07 INFO mapreduce.JobSubmitter: number of splits:2
18/07/02 12:20:07 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1530451183103_0017
18/07/02 12:20:07 INFO impl.YarnClientImpl: Submitted application application_1530451183103_0017
18/07/02 12:20:07 INFO mapreduce.Job: The url to track the job: http://98f81ca7cf43:8088/proxy/application_1530451183103_0017/
18/07/02 12:20:07 INFO mapreduce.Job: Running job: job_1530451183103_0017
18/07/02 12:20:16 INFO mapreduce.Job: Job job_1530451183103_0017 running in uber mode : false
18/07/02 12:20:16 INFO mapreduce.Job: map 0% reduce 0%
18/07/02 12:20:28 INFO mapreduce.Job: Task Id : attempt_1530451183103_0017_m_000001_0, Status : FAILED
Error: java.lang.RuntimeException: Error in configuring object
    at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:112)
    at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:78)
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:449)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109)
    ... 9 more
Caused by: java.lang.RuntimeException: Error in configuring object
    at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:112)
    at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:78)
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
    at org.apache.hadoop.mapred.MapRunner.configure(MapRunner.java:38)
    ... 14 more
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109)
    ... 17 more
Caused by: java.lang.RuntimeException: configuration exception
    at org.apache.hadoop.streaming.PipeMapRed.configure(PipeMapRed.java:222)
    at org.apache.hadoop.streaming.PipeMapper.configure(PipeMapper.java:66)
    ... 22 more
Caused by: java.io.IOException: Cannot run program "/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1530451183103_0017/container_1530451183103_0017_01_000003/./mapper.py": error=2, No such file or directory
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1047)
    at org.apache.hadoop.streaming.PipeMapRed.configure(PipeMapRed.java:209)
    ... 23 more
Caused by: java.io.IOException: error=2, No such file or directory
    at java.lang.UNIXProcess.forkAndExec(Native Method)
    at java.lang.UNIXProcess.&lt;init&gt;(UNIXProcess.java:186)
    at java.lang.ProcessImpl.start(ProcessImpl.java:130)
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1028)
    ... 24 more
18/07/02 12:20:28 INFO mapreduce.Job: Task Id : attempt_1530451183103_0017_m_000000_0, Status : FAILED
[stack trace identical to the one above, except the missing mapper.py path is under container_1530451183103_0017_01_000002]
18/07/02 12:20:40 INFO mapreduce.Job: Task Id : attempt_1530451183103_0017_m_000001_1, Status : FAILED
[stack trace identical to the one above, except the missing mapper.py path is under container_1530451183103_0017_01_000004]
18/07/02 12:20:41 INFO mapreduce.Job: Task Id : attempt_1530451183103_0017_m_000000_1, Status : FAILED
[stack trace identical to the one above, except the missing mapper.py path is under container_1530451183103_0017_01_000005]
18/07/02 12:20:53 INFO mapreduce.Job: Task Id : attempt_1530451183103_0017_m_000001_2, Status : FAILED
[stack trace identical to the one above, except the missing mapper.py path is under container_1530451183103_0017_01_000007]
18/07/02 12:20:54 INFO mapreduce.Job: map 50% reduce 0%
18/07/02 12:20:54 INFO mapreduce.Job: Task Id : attempt_1530451183103_0017_m_000000_2, Status : FAILED
[stack trace identical to the one above, except the missing mapper.py path is under container_1530451183103_0017_01_000008]
18/07/02 12:20:55 INFO mapreduce.Job: map 0% reduce 0%
18/07/02 12:21:05 INFO mapreduce.Job: map 100% reduce 100%
18/07/02 12:21:05 INFO mapreduce.Job: Job job_1530451183103_0017 failed with state FAILED due to: Task failed task_1530451183103_0017_m_000001
Job failed as tasks failed. 
failedMaps:1 failedReduces:0
18/07/02 12:21:05 INFO mapreduce.Job: Counters: 13
    Job Counters
        Failed map tasks=7
        Killed map tasks=1
        Launched map tasks=8
        Other local map tasks=6
        Data-local map tasks=2
        Total time spent by all maps in occupied slots (ms)=81502
        Total time spent by all reduces in occupied slots (ms)=0
        Total time spent by all map tasks (ms)=81502
        Total vcore-seconds taken by all map tasks=81502
        Total megabyte-seconds taken by all map tasks=83458048
    Map-Reduce Framework
        CPU time spent (ms)=0
        Physical memory (bytes) snapshot=0
        Virtual memory (bytes) snapshot=0
18/07/02 12:21:05 ERROR streaming.StreamJob: Job not successful!
Streaming Command Failed!
</code></pre> <p>At the end it also creates an empty output folder. I really have no idea what is going wrong here. Is my approach wrong, or is it a configuration problem? Any help in moving even slightly forward will be really appreciated.</p> <br /><h3>Answer 1:</h3><br /><p>Can you try running the command below:</p> <pre><code>bin/hadoop jar python/hadoop-streaming-2.7.0.jar \
    -mapper python/salaries/mapper.py \
    -reducer python/salaries/reducer.py \
    -input new-input/* -output output
</code></pre> <p>I <strong>removed</strong> the <code>-file python/salaries/mapper.py</code> and <code>-file python/salaries/reducer.py</code> options.</p> <br /><br /><br /><h3>Answer 2:</h3><br /><p>This might not be the case, but I believe this line is incorrect:</p> <p><code>#!usr/bin/Python</code></p> <p>It should point to the correct Python interpreter, i.e. <code>#!/usr/bin/python</code>.</p> <p>If it's not set correctly, then for the Hadoop streaming library you have to specify the Python interpreter in the <code>-mapper</code> or <code>-reducer</code> parameter, i.e. 
<code>-mapper "python python/salaries/mapper.py"</code></p> <br /><br /><p>Source: <code>https://stackoverflow.com/questions/51140411/hadoop-error-java-lang-runtimeexception-error-in-configuring-object</code></p></div> <div class="field field--name-field-tags field--type-entity-reference field--label-above"> <div class="field--label">Tags</div> <div class="field--items"> <div class="field--item"><a href="/tag/python" hreflang="zh-hans">python</a></div> <div class="field--item"><a href="/tag/hadoop" hreflang="zh-hans">Hadoop</a></div> <div class="field--item"><a href="/tag/hadoop-streaming" hreflang="zh-hans">hadoop-streaming</a></div> </div> </div> Mon, 08 Feb 2021 05:02:55 +0000 老子叫甜甜 4080259 at https://www.e-learn.cn
24 more 18/07/02 12:20:40 INFO mapreduce.Job: Task Id : attempt_1530451183103_0017_m_000001_1, Status : FAILED Error: java.lang.RuntimeException: Error in configuring object at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:112) at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:78) at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136) at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:449) at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343) at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158) Caused by: java.lang.reflect.InvocationTargetException at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109) ... 9 more Caused by: java.lang.RuntimeException: Error in configuring object at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:112) at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:78) at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136) at org.apache.hadoop.mapred.MapRunner.configure(MapRunner.java:38) ... 
14 more Caused by: java.lang.reflect.InvocationTargetException at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109) ... 17 more Caused by: java.lang.RuntimeException: configuration exception at org.apache.hadoop.streaming.PipeMapRed.configure(PipeMapRed.java:222) at org.apache.hadoop.streaming.PipeMapper.configure(PipeMapper.java:66) ... 22 more Caused by: java.io.IOException: Cannot run program "/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1530451183103_0017/container_1530451183103_0017_01_000004/./mapper.py": error=2, No such file or directory at java.lang.ProcessBuilder.start(ProcessBuilder.java:1047) at org.apache.hadoop.streaming.PipeMapRed.configure(PipeMapRed.java:209) ... 23 more Caused by: java.io.IOException: error=2, No such file or directory at java.lang.UNIXProcess.forkAndExec(Native Method) at java.lang.UNIXProcess.&lt;init&gt;(UNIXProcess.java:186) at java.lang.ProcessImpl.start(ProcessImpl.java:130) at java.lang.ProcessBuilder.start(ProcessBuilder.java:1028) ... 
24 more 18/07/02 12:20:41 INFO mapreduce.Job: Task Id : attempt_1530451183103_0017_m_000000_1, Status : FAILED Error: java.lang.RuntimeException: Error in configuring object at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:112) at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:78) at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136) at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:449) at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343) at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158) Caused by: java.lang.reflect.InvocationTargetException at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109) ... 9 more Caused by: java.lang.RuntimeException: Error in configuring object at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:112) at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:78) at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136) at org.apache.hadoop.mapred.MapRunner.configure(MapRunner.java:38) ... 
14 more Caused by: java.lang.reflect.InvocationTargetException at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109) ... 17 more Caused by: java.lang.RuntimeException: configuration exception at org.apache.hadoop.streaming.PipeMapRed.configure(PipeMapRed.java:222) at org.apache.hadoop.streaming.PipeMapper.configure(PipeMapper.java:66) ... 22 more Caused by: java.io.IOException: Cannot run program "/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1530451183103_0017/container_1530451183103_0017_01_000005/./mapper.py": error=2, No such file or directory at java.lang.ProcessBuilder.start(ProcessBuilder.java:1047) at org.apache.hadoop.streaming.PipeMapRed.configure(PipeMapRed.java:209) ... 23 more Caused by: java.io.IOException: error=2, No such file or directory at java.lang.UNIXProcess.forkAndExec(Native Method) at java.lang.UNIXProcess.&lt;init&gt;(UNIXProcess.java:186) at java.lang.ProcessImpl.start(ProcessImpl.java:130) at java.lang.ProcessBuilder.start(ProcessBuilder.java:1028) ... 
24 more 18/07/02 12:20:53 INFO mapreduce.Job: Task Id : attempt_1530451183103_0017_m_000001_2, Status : FAILED Error: java.lang.RuntimeException: Error in configuring object at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:112) at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:78) at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136) at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:449) at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343) at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158) Caused by: java.lang.reflect.InvocationTargetException at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109) ... 9 more Caused by: java.lang.RuntimeException: Error in configuring object at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:112) at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:78) at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136) at org.apache.hadoop.mapred.MapRunner.configure(MapRunner.java:38) ... 
14 more Caused by: java.lang.reflect.InvocationTargetException at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109) ... 17 more Caused by: java.lang.RuntimeException: configuration exception at org.apache.hadoop.streaming.PipeMapRed.configure(PipeMapRed.java:222) at org.apache.hadoop.streaming.PipeMapper.configure(PipeMapper.java:66) ... 22 more Caused by: java.io.IOException: Cannot run program "/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1530451183103_0017/container_1530451183103_0017_01_000007/./mapper.py": error=2, No such file or directory at java.lang.ProcessBuilder.start(ProcessBuilder.java:1047) at org.apache.hadoop.streaming.PipeMapRed.configure(PipeMapRed.java:209) ... 23 more Caused by: java.io.IOException: error=2, No such file or directory at java.lang.UNIXProcess.forkAndExec(Native Method) at java.lang.UNIXProcess.&lt;init&gt;(UNIXProcess.java:186) at java.lang.ProcessImpl.start(ProcessImpl.java:130) at java.lang.ProcessBuilder.start(ProcessBuilder.java:1028) ... 
24 more 18/07/02 12:20:54 INFO mapreduce.Job: map 50% reduce 0% 18/07/02 12:20:54 INFO mapreduce.Job: Task Id : attempt_1530451183103_0017_m_000000_2, Status : FAILED Error: java.lang.RuntimeException: Error in configuring object at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:112) at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:78) at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136) at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:449) at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343) at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158) Caused by: java.lang.reflect.InvocationTargetException at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109) ... 9 more Caused by: java.lang.RuntimeException: Error in configuring object at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:112) at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:78) at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136) at org.apache.hadoop.mapred.MapRunner.configure(MapRunner.java:38) ... 
14 more Caused by: java.lang.reflect.InvocationTargetException at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109) ... 17 more Caused by: java.lang.RuntimeException: configuration exception at org.apache.hadoop.streaming.PipeMapRed.configure(PipeMapRed.java:222) at org.apache.hadoop.streaming.PipeMapper.configure(PipeMapper.java:66) ... 22 more Caused by: java.io.IOException: Cannot run program "/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1530451183103_0017/container_1530451183103_0017_01_000008/./mapper.py": error=2, No such file or directory at java.lang.ProcessBuilder.start(ProcessBuilder.java:1047) at org.apache.hadoop.streaming.PipeMapRed.configure(PipeMapRed.java:209) ... 23 more Caused by: java.io.IOException: error=2, No such file or directory at java.lang.UNIXProcess.forkAndExec(Native Method) at java.lang.UNIXProcess.&lt;init&gt;(UNIXProcess.java:186) at java.lang.ProcessImpl.start(ProcessImpl.java:130) at java.lang.ProcessBuilder.start(ProcessBuilder.java:1028) ... 24 more 18/07/02 12:20:55 INFO mapreduce.Job: map 0% reduce 0% 18/07/02 12:21:05 INFO mapreduce.Job: map 100% reduce 100% 18/07/02 12:21:05 INFO mapreduce.Job: Job job_1530451183103_0017 failed with state FAILED due to: Task failed task_1530451183103_0017_m_000001 Job failed as tasks failed. 
failedMaps:1 failedReduces:0
18/07/02 12:21:05 INFO mapreduce.Job: Counters: 13
    Job Counters
        Failed map tasks=7
        Killed map tasks=1
        Launched map tasks=8
        Other local map tasks=6
        Data-local map tasks=2
        Total time spent by all maps in occupied slots (ms)=81502
        Total time spent by all reduces in occupied slots (ms)=0
        Total time spent by all map tasks (ms)=81502
        Total vcore-seconds taken by all map tasks=81502
        Total megabyte-seconds taken by all map tasks=83458048
    Map-Reduce Framework
        CPU time spent (ms)=0
        Physical memory (bytes) snapshot=0
        Virtual memory (bytes) snapshot=0
18/07/02 12:21:05 ERROR streaming.StreamJob: Job not successful!
Streaming Command Failed! </code></pre> <p>At the end it also creates an empty output folder. I really have no idea what is going wrong here. Is my approach wrong, or is this a configuration problem? Any help in moving even slightly forward would be really appreciated.</p> <br /><h3>Answer 1:</h3><br /><p>Can you try running the below command:</p> <pre><code>bin/hadoop jar python/hadoop-streaming-2.7.0.jar \
    -mapper python/salaries/mapper.py \
    -reducer python/salaries/reducer.py \
    -input new-input/* \
    -output output
</code></pre> <p>I <strong>removed</strong> the '-file python/salaries/mapper.py' and '-file python/salaries/reducer.py' options.</p> <br /><br /><br /><h3>Answer 2:</h3><br /><p>This might not be the case, but I believe this line is incorrect. </p> <p><code>#!usr/bin/Python</code></p> <p>It should point to the correct Python interpreter, i.e. <code>#!/usr/bin/python</code>. </p> <p>If it's not set correctly, then for the hadoop streaming library you have to specify the Python interpreter in the <code>-mapper</code> or <code>-reducer</code> parameter, i.e. 
<code>-mapper python python/salaries/mapper.py</code></p> <br /><br /><p>Source:<code>https://stackoverflow.com/questions/51140411/hadoop-error-java-lang-runtimeexception-error-in-configuring-object</code></p></div> <div class="field field--name-field-tags field--type-entity-reference field--label-above"> <div class="field--label">Tags</div> <div class="field--items"> <div class="field--item"><a href="/tag/python" hreflang="zh-hans">python</a></div> <div class="field--item"><a href="/tag/hadoop" hreflang="zh-hans">Hadoop</a></div> <div class="field--item"><a href="/tag/hadoop-streaming" hreflang="zh-hans">hadoop-streaming</a></div> </div> </div> Mon, 08 Feb 2021 05:02:18 +0000 半腔热情 4080256 at https://www.e-learn.cn How to getting latest partition data from hive https://www.e-learn.cn/topic/3836550 <span>How to getting latest partition data from hive</span> <span><span lang="" about="/user/130" typeof="schema:Person" property="schema:name" datatype="">半城伤御伤魂</span></span> <span>2020-10-01 06:29:26</span> <div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"><p>Source:<code>https://stackoverflow.com/questions/60829198/how-to-getting-latest-partition-data-from-hive</code></p></div> <div class="field field--name-field-tags field--type-entity-reference field--label-above"> <div class="field--label">Tags</div> <div class="field--items"> <div class="field--item"><a href="/tag/hive" hreflang="zh-hans">Hive</a></div> <div class="field--item"><a href="/tag/hiveql" hreflang="zh-hans">hiveql</a></div> <div class="field--item"><a href="/tag/hadoop-streaming" hreflang="zh-hans">hadoop-streaming</a></div> <div class="field--item"><a href="/tag/hive-partitions" hreflang="zh-hans">hive-partitions</a></div> </div> </div> Wed, 30 Sep 2020 22:29:26 +0000 半城伤御伤魂 3836550 at https://www.e-learn.cn Error when running python map reduce job using Hadoop streaming in Google Cloud Dataproc environment https://www.e-learn.cn/topic/3680925 <span>Error when 
running python map reduce job using Hadoop streaming in Google Cloud Dataproc environment</span> <span><span lang="" about="/user/160" typeof="schema:Person" property="schema:name" datatype="">强颜欢笑</span></span> <span>2020-07-05 04:55:34</span> <div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"><h3>Question</h3><br /><p>I want to run a Python map reduce job in Google Cloud Dataproc using the Hadoop streaming method. My map reduce Python script, input file and job result output are located in Google Cloud Storage.</p> <p>I tried to run this command: </p> <pre><code>hadoop jar /usr/lib/hadoop-mapreduce/hadoop-streaming.jar \
    -file gs://bucket-name/intro_to_mapreduce/mapper_prod_cat.py \
    -mapper gs://bucket-name/intro_to_mapreduce/mapper_prod_cat.py \
    -file gs://bucket-name/intro_to_mapreduce/reducer_prod_cat.py \
    -reducer gs://bucket-name/intro_to_mapreduce/reducer_prod_cat.py \
    -input gs://bucket-name/intro_to_mapreduce/purchases.txt \
    -output gs://bucket-name/intro_to_mapreduce/output_prod_cat
</code></pre> <p>But I got this error output: </p> <blockquote> <p>File: /home/ramaadhitia/gs:/bucket-name/intro_to_mapreduce/mapper_prod_cat.py does not exist, or is not readable.</p> <p>Try -help for more information Streaming Command Failed!</p> </blockquote> <p>Is the cloud connector not working in Hadoop streaming? Is there any other way to run a Python map reduce job using Hadoop streaming with the Python script and input file located in Google Cloud Storage?</p> <p>Thank You</p> <br /><h3>Answer 1:</h3><br /><p>The <code>-file</code> option from hadoop-streaming only works for local files. Note, however, that its help text mentions that the <code>-file</code> flag is deprecated in favor of the generic <code>-files</code> option. Using the generic <code>-files</code> option allows us to specify a remote (hdfs / gs) file to stage. 
Note also that generic options must precede application-specific flags.</p> <p>Your invocation would become:</p> <pre><code>hadoop jar /usr/lib/hadoop-mapreduce/hadoop-streaming.jar \
    -files gs://bucket-name/intro_to_mapreduce/mapper_prod_cat.py,gs://bucket-name/intro_to_mapreduce/reducer_prod_cat.py \
    -mapper mapper_prod_cat.py \
    -reducer reducer_prod_cat.py \
    -input gs://bucket-name/intro_to_mapreduce/purchases.txt \
    -output gs://bucket-name/intro_to_mapreduce/output_prod_cat
</code></pre> <br /><br /><p>Source:<code>https://stackoverflow.com/questions/48003377/error-when-running-python-map-reduce-job-using-hadoop-streaming-in-google-cloud</code></p></div> <div class="field field--name-field-tags field--type-entity-reference field--label-above"> <div class="field--label">Tags</div> <div class="field--items"> <div class="field--item"><a href="/tag/hadoop" hreflang="zh-hans">Hadoop</a></div> <div class="field--item"><a href="/tag/google-cloud-platform" hreflang="zh-hans">google-cloud-platform</a></div> <div class="field--item"><a href="/tag/hadoop-streaming" hreflang="zh-hans">hadoop-streaming</a></div> <div class="field--item"><a href="/tag/google-cloud-dataproc" hreflang="zh-hans">google-cloud-dataproc</a></div> </div> </div> Sat, 04 Jul 2020 20:55:34 +0000 强颜欢笑 3680925 at https://www.e-learn.cn Hadoop streaming “GC overhead limit exceeded” https://www.e-learn.cn/topic/3296679 <span>Hadoop streaming “GC overhead limit exceeded”</span> <span><span lang="" about="/user/172" typeof="schema:Person" property="schema:name" datatype="">无人久伴</span></span> <span>2020-01-24 12:20:08</span> <div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"><h3>Question</h3><br /><p>I am running this command:</p> <pre><code>hadoop jar hadoop-streaming.jar -D stream.tmpdir=/tmp -input "&lt;input dir&gt;" -output "&lt;output dir&gt;" -mapper "grep 20151026" -reducer "wc -l"
</code></pre> <p>Where <code>&lt;input dir&gt;</code> is a directory with many 
<code>avro</code> files.</p> <p>And I am getting this error:</p> <blockquote> <p>Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
at org.apache.hadoop.hdfs.protocol.DatanodeID.updateXferAddrAndInvalidateHashCode(DatanodeID.java:287)
at org.apache.hadoop.hdfs.protocol.DatanodeID.(DatanodeID.java:91)
at org.apache.hadoop.hdfs.protocol.DatanodeInfo.(DatanodeInfo.java:136)
at org.apache.hadoop.hdfs.protocol.DatanodeInfo.(DatanodeInfo.java:122)
at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:633)
at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:793)
at org.apache.hadoop.hdfs.protocolPB.PBHelper.convertLocatedBlock(PBHelper.java:1252)
at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:1270)
at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:1413)
at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:1524)
at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:1533)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing(ClientNamenodeProtocolTranslatorPB.java:557)
at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy15.getListing(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1969)
at org.apache.hadoop.hdfs.DistributedFileSystem$DirListingIterator.hasNextNoFilter(DistributedFileSystem.java:888)
at org.apache.hadoop.hdfs.DistributedFileSystem$DirListingIterator.hasNext(DistributedFileSystem.java:863)
at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:267)
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:228)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:313)
at org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:624)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:616)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:492)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)</p> </blockquote> <p>How can this issue be resolved?</p> <br /><h3>Answer 1:</h3><br /><p>It took a while, but I found the solution here.</p> <p>Prepending <code>HADOOP_CLIENT_OPTS="-Xmx1024M"</code> to the command solves the problem.</p> <p>The final command line is:</p> <pre><code>HADOOP_CLIENT_OPTS="-Xmx1024M" hadoop jar hadoop-streaming.jar -D stream.tmpdir=/tmp -input "&lt;input dir&gt;" -output "&lt;output dir&gt;" -mapper "grep 20151026" -reducer "wc -l"
</code></pre> <br /><br /><p>Source:<code>https://stackoverflow.com/questions/33341515/hadoop-streaming-gc-overhead-limit-exceeded</code></p></div> <div class="field field--name-field-tags field--type-entity-reference field--label-above"> <div class="field--label">Tags</div> <div class="field--items"> <div class="field--item"><a href="/tag/hadoop" hreflang="zh-hans">Hadoop</a></div> <div class="field--item"><a href="/tag/out-memory" hreflang="zh-hans">out-of-memory</a></div> <div class="field--item"><a href="/tag/hadoop-streaming" hreflang="zh-hans">hadoop-streaming</a></div> </div> </div> Fri, 24 Jan 2020 04:20:08 +0000 无人久伴 3296679 at https://www.e-learn.cn Tool/Ways to schedule Amazon's Elastic MapReduce jobs https://www.e-learn.cn/topic/3295816 <span>Tool/Ways to schedule Amazon&#039;s Elastic MapReduce jobs</span> <span><span lang="" about="/user/17" typeof="schema:Person" 
property="schema:name" datatype="">…衆ロ難τιáo~</span></span> <span>2020-01-24 10:26:12</span> <div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"><h3>Question</h3><br /><p>I use EMR to create new instances, process the jobs, and then shut down the instances.</p> <p>My requirement is to schedule jobs in a periodic fashion. One easy implementation would be to use Quartz to trigger the EMR jobs. But in the longer run I am interested in using an out-of-the-box MapReduce scheduling solution. My question is: is there any out-of-the-box scheduling feature provided by EMR or the AWS SDK which I can use for my requirement? I can see there is scheduling in Auto Scaling, but I want to schedule an EMR jobflow instead.</p> <br /><h3>Answer 1:</h3><br /><p>There is the Apache Oozie Workflow Scheduler for Hadoop to do just that.</p> <blockquote> <p>Oozie is a workflow scheduler system to manage Apache Hadoop jobs.</p> <p>Oozie Workflow jobs are Directed Acyclical Graphs (DAGs) of actions.</p> <p>Oozie Coordinator jobs are recurrent Oozie Workflow jobs triggered by time (frequency) and data availability.</p> <p>Oozie is integrated with the rest of the Hadoop stack, supporting several types of Hadoop jobs out of the box (such as Java map-reduce, Streaming map-reduce, Pig, Hive, Sqoop and Distcp) as well as system specific jobs (such as Java programs and shell scripts).</p> <p>Oozie is a scalable, reliable and extensible system.</p> </blockquote> <p>Here is a simple example of Elastic Map Reduce bootstrap actions for configuring Apache Oozie: https://github.com/lila/emr-oozie-sample</p> <p>But be aware that Oozie is a bit complicated: go for <code>oozie</code> only if you have a lot of jobs to be scheduled/monitored/maintained; otherwise just create a bunch of <code>cron</code> jobs if you have, say, just 2 or 3 jobs to be scheduled periodically.</p> <p>You may also look into and explore Simple Workflow from Amazon.</p> <br /><br 
/><p>Source:<code>https://stackoverflow.com/questions/14014486/tool-ways-to-schedule-amazons-elastic-mapreduce-jobs</code></p></div> <div class="field field--name-field-tags field--type-entity-reference field--label-above"> <div class="field--label">Tags</div> <div class="field--items"> <div class="field--item"><a href="/tag/mapreduce" hreflang="zh-hans">MapReduce</a></div> <div class="field--item"><a href="/tag/hadoop-streaming" hreflang="zh-hans">hadoop-streaming</a></div> <div class="field--item"><a href="/tag/elastic-map-reduce" hreflang="zh-hans">elastic-map-reduce</a></div> <div class="field--item"><a href="/tag/emr" hreflang="zh-hans">emr</a></div> </div> </div> Fri, 24 Jan 2020 02:26:12 +0000 …衆ロ難τιáo~ 3295816 at https://www.e-learn.cn hadoop streaming: where are application logs? https://www.e-learn.cn/topic/3251190 <span>hadoop streaming: where are application logs?</span> <span><span lang="" about="/user/217" typeof="schema:Person" property="schema:name" datatype="">随声附和</span></span> <span>2020-01-17 14:06:50</span> <div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"><h3>Question</h3><br /><p>My question is similar to: hadoop streaming: how to see application logs? (The link in the answer is not currently working, so I have to post it again with an additional question.)</p> <p>I can see all the Hadoop logs on my /usr/local/hadoop/logs path.</p> <p>But where can I see application-level logs? For example:</p> <p>reducer.py -</p> <pre><code>import logging
....
logging.basicConfig(level=logging.ERROR, format='MAP %(asctime)s%(levelname)s%(message)s')
logging.error('Test!')
...
</code></pre> <p>I am not able to see any of the logs (WARNING, ERROR) in stderr.</p> <p>Where can I find the log statements of my application? I am using Python with hadoop-streaming. </p> <p>Additional question: </p> <p>If I want to use a file to store/aggregate my application logs, like: </p> <p>reducer.py -</p> <pre><code>....
logger = logging.getLogger('test')
hdlr = logging.FileHandler(os.environ['HOME']+'/test.log')
formatter = logging.Formatter('MAP %(asctime)s %(levelname)s %(message)s')
hdlr.setFormatter(formatter)
logger.addHandler(hdlr)
logger.setLevel(logging.ERROR)
logger.error('please work!!')
.....
</code></pre> <p>(Assuming that I have test.log in the $HOME location of the master &amp; all slaves in my hadoop cluster.) Can I achieve this in a distributed environment like Hadoop? If so, how can I achieve this? </p> <p>I tried this and ran a sample streaming job, but only saw the below error: </p> <pre><code>Error: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1
    at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:330)
    at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:543)
    at org.apache.hadoop.streaming.PipeReducer.close(PipeReducer.java:134)
    at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:237)
    at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:484)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:397)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:175)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:170)
Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
</code></pre> <p>Please help me understand how logging can be achieved in hadoop streaming jobs.</p> <p>Thank you</p> <br /><h3>Answer 1:</h3><br /><p>Try this HDFS path: /yarn/apps/${user_name}/logs/application_${appid}/</p> <p>In general:</p> <blockquote> <p>Where to store container logs. An application's localized log directory will be found in ${yarn.nodemanager.log-dirs}/application_${appid}. 
Individual containers' log directories will be below this, in directories named container_{$contid}. Each container directory will contain the files stderr, stdin, and syslog generated by that container.</p> </blockquote> <p>If you print to stderr, you'll find your output in files under the directory I mentioned above. There should be one file per node.</p> <br /><br /><br /><h3>Answer 2:</h3><br /><p>Be aware that hadoop-streaming uses stdout to pipe data from mappers to reducers. So if your logging system writes to stdout, you will be in trouble, since it will very likely break your record stream and your job. One way to log is to write to stderr; you will then see your messages in the error logs.</p> <br /><br /><p>Source: <code>https://stackoverflow.com/questions/30586619/hadoop-streaming-where-are-application-logs</code></p></div> <div class="field field--name-field-tags field--type-entity-reference field--label-above"> <div class="field--label">Tags</div> <div class="field--items"> <div class="field--item"><a href="/tag/python" hreflang="zh-hans">python</a></div> <div class="field--item"><a href="/tag/hadoop" hreflang="zh-hans">Hadoop</a></div> <div class="field--item"><a href="/tag/logging" hreflang="zh-hans">logging</a></div> <div class="field--item"><a href="/tag/mapreduce" hreflang="zh-hans">MapReduce</a></div> <div class="field--item"><a href="/tag/hadoop-streaming" hreflang="zh-hans">hadoop-streaming</a></div> </div> </div> Fri, 17 Jan 2020 06:06:50 +0000 随声附和 3251190 at https://www.e-learn.cn How to specify the partitioner for hadoop streaming https://www.e-learn.cn/topic/3226599 <span>How to specify the partitioner for hadoop streaming</span> <span><span lang="" about="/user/92" typeof="schema:Person" property="schema:name" datatype="">可紊</span></span> <span>2020-01-15 09:55:21</span> <div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"><h3>Question</h3><br /><p>I have a custom partitioner like below:</p> <pre><code>import java.util.*;
import org.apache.hadoop.mapreduce.*;

public static class SignaturePartitioner extends Partitioner&lt;Text,Text&gt;
{
    @Override
    public int getPartition(Text key, Text value, int numReduceTasks)
    {
        return (key.toString().split(" ")[0].hashCode() &amp; Integer.MAX_VALUE) % numReduceTasks;
    }
}
</code></pre> <p>I set the hadoop streaming parameters like below:</p> <pre><code> -file SignaturePartitioner.java \
 -partitioner SignaturePartitioner \
</code></pre> <p>Then I get an error: Class Not Found.</p> <p>Do you know what the problem is?</p> <p>Best Regards,</p> <br /><h3>Answer 1:</h3><br /><p>I faced the same issue, but managed to solve it after a lot of research.</p> <p>The root cause is that hadoop-streaming-2.6.0.jar uses the mapred API, not the mapreduce API. Accordingly, implement the Partitioner interface rather than extending the Partitioner class. The following worked for me:</p> <pre><code>import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.Partitioner;
import org.apache.hadoop.mapred.JobConf;

public class Mypartitioner implements Partitioner&lt;Text, Text&gt; {

  public void configure(JobConf job) {}

  public int getPartition(Text pkey, Text pvalue, int pnumparts) {
    if (pkey.toString().startsWith("a"))
      return 0;
    else
      return 1;
  }
}
</code></pre> <p>Compile Mypartitioner, create a jar, and then:</p> <pre><code>bin/hadoop jar share/hadoop/tools/lib/hadoop-streaming-2.6.0.jar \
  -libjars /home/sanjiv/hadoop-2.6.0/Mypartitioner.jar \
  -D mapreduce.job.reduces=2 \
  -files /home/sanjiv/mymapper.sh,/home/sanjiv/myreducer.sh \
  -input indir -output outdir \
  -mapper mymapper.sh -reducer myreducer.sh \
  -partitioner Mypartitioner
</code></pre> <br /><br /><br /><h3>Answer 2:</h3><br /><blockquote> <p>-file SignaturePartitioner.java -partitioner SignaturePartitioner</p> </blockquote> <p>The -file option will make the file available on all the required nodes via the Hadoop framework.
The -partitioner option, however, must point to the class name, not the Java file name.</p> <br /><br /><p>Source: <code>https://stackoverflow.com/questions/13191468/how-to-specify-the-partitioner-for-hadoop-streaming</code></p></div> <div class="field field--name-field-tags field--type-entity-reference field--label-above"> <div class="field--label">Tags</div> <div class="field--items"> <div class="field--item"><a href="/tag/hadoop" hreflang="zh-hans">Hadoop</a></div> <div class="field--item"><a href="/tag/mapreduce" hreflang="zh-hans">MapReduce</a></div> <div class="field--item"><a href="/tag/hadoop-streaming" hreflang="zh-hans">hadoop-streaming</a></div> <div class="field--item"><a href="/tag/hadoop-partitioning" hreflang="zh-hans">hadoop-partitioning</a></div> </div> </div> Wed, 15 Jan 2020 01:55:21 +0000 可紊 3226599 at https://www.e-learn.cn Permission denied error 13 - Python on Hadoop https://www.e-learn.cn/topic/3224113 <span>Permission denied error 13 - Python on Hadoop</span> <span><span lang="" about="/user/191" typeof="schema:Person" property="schema:name" datatype="">末鹿安然</span></span> <span>2020-01-15 07:12:49</span> <div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"><h3>Question</h3><br /><p>I am running a simple Python mapper and reducer and am getting an <code>error=13, Permission denied</code> error. Need help.</p> <p>I am not sure what is happening here. I am new to the Hadoop world.</p> <p>I am running a simple MapReduce word count. The mapper and reducer run fine on their own, on Linux or in Windows PowerShell.</p> <pre><code>======================================================================
hadoop@ubuntu:~/hadoop-1.2.1$ bin/hadoop jar contrib/streaming/hadoop-streaming-1.2.1.jar \
  -file /home/hadoop/mapper.py -mapper mapper.py \
  -file /home/hadoop/reducer.py -reducer reducer.py \
  -input /deepw/pg4300.txt -output /deepw/pg3055
Warning: $HADOOP_HOME is deprecated.
packageJobJar: [/home/hadoop/mapper.py, /home/hadoop/reducer.py, /tmp/hadoop-hadoop/hadoop-unjar2961168567699201508/] [] /tmp/streamjob4125164474101219622.jar tmpDir=null
15/09/23 14:39:16 INFO util.NativeCodeLoader: Loaded the native-hadoop library
15/09/23 14:39:16 WARN snappy.LoadSnappy: Snappy native library not loaded
15/09/23 14:39:16 INFO mapred.FileInputFormat: Total input paths to process : 1
15/09/23 14:39:16 INFO streaming.StreamJob: getLocalDirs(): [/tmp/hadoop-hadoop/mapred/local]
15/09/23 14:39:16 INFO streaming.StreamJob: Running job: job_201509231312_0003
15/09/23 14:39:16 INFO streaming.StreamJob: To kill this job, run:
15/09/23 14:39:16 INFO streaming.StreamJob: /home/hadoop/hadoop-1.2.1/libexec/../bin/hadoop job -Dmapred.job.tracker=192.168.56.102:9001 -kill job_201509231312_0003
15/09/23 14:39:16 INFO streaming.StreamJob: Tracking URL: http://192.168.56.102:50030/jobdetails.jsp?jobid=job_201509231312_0003
15/09/23 14:39:17 INFO streaming.StreamJob: map 0% reduce 0%
15/09/23 14:39:41 INFO streaming.StreamJob: map 100% reduce 100%
15/09/23 14:39:41 INFO streaming.StreamJob: To kill this job, run:
15/09/23 14:39:41 INFO streaming.StreamJob: /home/hadoop/hadoop-1.2.1/libexec/../bin/hadoop job -Dmapred.job.tracker=192.168.56.102:9001 -kill job_201509231312_0003
15/09/23 14:39:41 INFO streaming.StreamJob: Tracking URL: http://192.168.56.102:50030/jobdetails.jsp?jobid=job_201509231312_0003
15/09/23 14:39:41 ERROR streaming.StreamJob: Job not successful. Error: # of failed Map Tasks exceeded allowed limit. FailedCount: 1. LastFailedTask: task_201509231312_0003_m_000000
15/09/23 14:39:41 INFO streaming.StreamJob: killJob...
Streaming Command Failed!
================================================================
java.io.IOException: Cannot run program "/tmp/hadoop-hadoop/mapred/local/taskTracker/hadoop/jobcache/job_201509231312_0003/attempt_201509231312_0003_m_000001_3/work/./mapper.py": error=13, Permission denied
	at java.lang.ProcessBuilder.start(ProcessBuilder.java:1047)
	at org.apache.hadoop.streaming.PipeMapRed.configure(PipeMapRed.java:214)
	at org.apache.hadoop.streaming.PipeMapper.configure(PipeMapper.java:66)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:88)
	at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:64)
	at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
	at org.apache.hadoop.mapred.MapRunner.configure(MapRunner.java:34)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:88)
	at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:64)
	at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:426)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:366)
	at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
	at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: java.io.IOException: error=13, Permission denied
	at java.lang.UNIXProcess.forkAndExec(Native Method)
	at java.lang.UNIXProcess.&lt;init&gt;(UNIXProcess.java:186)
	at java.lang.ProcessImpl.start(ProcessImpl.java:130)
	at java.lang.ProcessBuilder.start(ProcessBuilder.java:1028)
	... 24 more
</code></pre> <br /><h3>Answer 1:</h3><br /><p>It seems your mapper file is not executable. Try <code>chmod a+x mapper.py</code> before submitting your job.</p> <p>Alternatively, in your command, you can replace</p> <pre><code>-mapper mapper.py
</code></pre> <p>with</p> <pre><code>-mapper "python mapper.py"
</code></pre> <br /><br /><br /><h3>Answer 2:</h3><br /><p>As a note, I recently had this error 13 problem as well. In my case, however, the problem was that the directory containing the Python executable and the mappers/reducers had a permissions problem: it was not readable by others. After a <code>chmod a+rx</code>, my problem was fixed.</p> <br /><br /><br /><h3>Answer 3:</h3><br /><p>After doing <code>chmod a+x</code> on the mapper and reducer <code>.py</code> files, I get the exceptions below (with the <code>python</code> keyword added to the mapper command, it works fine and produces the right results).</p> <pre><code>========================================================================================
2015-09-28 13:25:16,572 INFO org.apache.hadoop.util.NativeCodeLoader: Loaded the native-hadoop library
2015-09-28 13:25:16,752 INFO org.apache.hadoop.filecache.TrackerDistributedCacheManager: Creating symlink: /deep/mapred/local/taskTracker/hadoop/jobcache/job_201509281234_0015/jars/META-INF &lt;- /deep/mapred/local/taskTracker/hadoop/jobcache/job_201509281234_0015/attempt_201509281234_0015_m_000000_3/work/META-INF
2015-09-28 13:25:16,761 INFO org.apache.hadoop.filecache.TrackerDistributedCacheManager: Creating symlink: /deep/mapred/local/taskTracker/hadoop/jobcache/job_201509281234_0015/jars/reducer.py &lt;-
/deep/mapred/local/taskTracker/hadoop/jobcache/job_201509281234_0015/attempt_201509281234_0015_m_000000_3/work/reducer.py
2015-09-28 13:25:16,763 INFO org.apache.hadoop.filecache.TrackerDistributedCacheManager: Creating symlink: /deep/mapred/local/taskTracker/hadoop/jobcache/job_201509281234_0015/jars/job.jar &lt;- /deep/mapred/local/taskTracker/hadoop/jobcache/job_201509281234_0015/attempt_201509281234_0015_m_000000_3/work/job.jar
2015-09-28 13:25:16,766 INFO org.apache.hadoop.filecache.TrackerDistributedCacheManager: Creating symlink: /deep/mapred/local/taskTracker/hadoop/jobcache/job_201509281234_0015/jars/.job.jar.crc &lt;- /deep/mapred/local/taskTracker/hadoop/jobcache/job_201509281234_0015/attempt_201509281234_0015_m_000000_3/work/.job.jar.crc
2015-09-28 13:25:16,769 INFO org.apache.hadoop.filecache.TrackerDistributedCacheManager: Creating symlink: /deep/mapred/local/taskTracker/hadoop/jobcache/job_201509281234_0015/jars/org &lt;- /deep/mapred/local/taskTracker/hadoop/jobcache/job_201509281234_0015/attempt_201509281234_0015_m_000000_3/work/org
2015-09-28 13:25:16,771 INFO org.apache.hadoop.filecache.TrackerDistributedCacheManager: Creating symlink: /deep/mapred/local/taskTracker/hadoop/jobcache/job_201509281234_0015/jars/mapper.py &lt;- /deep/mapred/local/taskTracker/hadoop/jobcache/job_201509281234_0015/attempt_201509281234_0015_m_000000_3/work/mapper.py
2015-09-28 13:25:17,046 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2015-09-28 13:25:17,176 INFO org.apache.hadoop.util.ProcessTree: setsid exited with exit code 0
2015-09-28 13:25:17,184 INFO org.apache.hadoop.mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@1e7c7fb
2015-09-28 13:25:17,254 INFO org.apache.hadoop.mapred.MapTask: Processing split: hdfs://192.168.56.101:9000/swad/4300.txt:0+786539
2015-09-28 13:25:17,275 WARN org.apache.hadoop.io.compress.snappy.LoadSnappy: Snappy native library not loaded
2015-09-28 13:25:17,287 INFO org.apache.hadoop.mapred.MapTask: numReduceTasks: 1
2015-09-28 13:25:17,296 INFO org.apache.hadoop.mapred.MapTask: io.sort.mb = 100
2015-09-28 13:25:17,393 INFO org.apache.hadoop.mapred.MapTask: data buffer = 79691776/99614720
2015-09-28 13:25:17,393 INFO org.apache.hadoop.mapred.MapTask: record buffer = 262144/327680
2015-09-28 13:25:17,419 INFO org.apache.hadoop.streaming.PipeMapRed: PipeMapRed exec [/deep/mapred/local/taskTracker/hadoop/jobcache/job_201509281234_0015/attempt_201509281234_0015_m_000000_3/work/./mapper.py]
2015-09-28 13:25:17,436 ERROR org.apache.hadoop.streaming.PipeMapRed: configuration exception
java.io.IOException: Cannot run program "/deep/mapred/local/taskTracker/hadoop/jobcache/job_201509281234_0015/attempt_201509281234_0015_m_000000_3/work/./mapper.py": error=2, No such file or directory
	at java.lang.ProcessBuilder.start(ProcessBuilder.java:1047)
	at org.apache.hadoop.streaming.PipeMapRed.configure(PipeMapRed.java:214)
	at org.apache.hadoop.streaming.PipeMapper.configure(PipeMapper.java:66)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:88)
	at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:64)
	at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
	at org.apache.hadoop.mapred.MapRunner.configure(MapRunner.java:34)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:88)
	at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:64)
	at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:426)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:366)
	at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
	at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: java.io.IOException: error=2, No such file or directory
	at java.lang.UNIXProcess.forkAndExec(Native Method)
	at java.lang.UNIXProcess.&lt;init&gt;(UNIXProcess.java:186)
	at java.lang.ProcessImpl.start(ProcessImpl.java:130)
	at java.lang.ProcessBuilder.start(ProcessBuilder.java:1028)
	... 24 more
2015-09-28 13:25:17,462 INFO org.apache.hadoop.mapred.TaskLogsTruncater: Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
2015-09-28 13:25:17,495 INFO org.apache.hadoop.io.nativeio.NativeIO: Initialized cache for UID to User mapping with a cache timeout of 14400 seconds.
2015-09-28 13:25:17,496 INFO org.apache.hadoop.io.nativeio.NativeIO: Got UserName hadoop for UID 1000 from the native implementation
2015-09-28 13:25:17,498 WARN org.apache.hadoop.mapred.Child: Error running child
java.lang.RuntimeException: Error in configuring object
	at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:93)
	at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:64)
	at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:426)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:366)
	at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
	at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:88)
	... 9 more
Caused by: java.lang.RuntimeException: Error in configuring object
	at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:93)
	at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:64)
	at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
	at org.apache.hadoop.mapred.MapRunner.configure(MapRunner.java:34)
	... 14 more
Caused by: java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:88)
	... 17 more
Caused by: java.lang.RuntimeException: configuration exception
	at org.apache.hadoop.streaming.PipeMapRed.configure(PipeMapRed.java:230)
	at org.apache.hadoop.streaming.PipeMapper.configure(PipeMapper.java:66)
	... 22 more
Caused by: java.io.IOException: Cannot run program "/deep/mapred/local/taskTracker/hadoop/jobcache/job_201509281234_0015/attempt_201509281234_0015_m_000000_3/work/./mapper.py": error=2, No such file or directory
	at java.lang.ProcessBuilder.start(ProcessBuilder.java:1047)
	at org.apache.hadoop.streaming.PipeMapRed.configure(PipeMapRed.java:214)
	... 23 more
Caused by: java.io.IOException: error=2, No such file or directory
	at java.lang.UNIXProcess.forkAndExec(Native Method)
	at java.lang.UNIXProcess.&lt;init&gt;(UNIXProcess.java:186)
	at java.lang.ProcessImpl.start(ProcessImpl.java:130)
	at java.lang.ProcessBuilder.start(ProcessBuilder.java:1028)
	... 24 more
2015-09-28 13:25:17,506 INFO org.apache.hadoop.mapred.Task: Runnning cleanup for the task
</code></pre> <br /><br /><br /><h3>Answer 4:</h3><br /><p>I struggled with this as well. I found that everything worked when I was running on a single node (the Cloudera QuickStart VM), but on a cluster it didn't. It seems the Python scripts were not being shipped to the nodes for execution.</p> <p>There is another parameter, "-file", which ships a file or directory as part of the job.
It is mentioned here:</p> <p>https://wiki.apache.org/hadoop/HadoopStreaming</p> <p>You can specify this option multiple times, once for the mapper and again for the reducer, like this:</p> <pre><code>hadoop jar /opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-mapreduce/hadoop-streaming.jar \
  -input /user/linux/input -output /user/linux/output_new \
  -mapper wordcount_mapper.py -reducer wordcount_reducer.py \
  -file /home/linux/wordcount_mapper.py -file /home/linux/wordcount_reducer.py
</code></pre> <p>Or you can package the scripts in a directory and ship just the directory, like this:</p> <pre><code>hadoop jar /opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop-mapreduce/hadoop-streaming.jar \
  -input /user/linux/input -output /user/linux/output_new \
  -mapper wc/wordcount_mapper.py -reducer wc/wordcount_reducer.py \
  -file /home/linux/wc
</code></pre> <p>Note that here I refer to the mapper and reducer scripts with a relative path.</p> <p>The comment about the file needing to be readable and executable is also correct.</p> <p>It took me a while to work this out. I hope it helps.</p> <br /><br /><p>Source: <code>https://stackoverflow.com/questions/32735668/permission-denied-error-13-python-on-hadoop</code></p></div> <div class="field field--name-field-tags field--type-entity-reference field--label-above"> <div class="field--label">Tags</div> <div class="field--items"> <div class="field--item"><a href="/tag/python-27" hreflang="zh-hans">python-2.7</a></div> <div class="field--item"><a href="/tag/hadoop-streaming" hreflang="zh-hans">hadoop-streaming</a></div> </div> </div> Tue, 14 Jan 2020 23:12:49 +0000 末鹿安然 3224113 at https://www.e-learn.cn
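<p>Pulling the answers in the last thread together: a streaming script must be shipped with -file, be executable on the task nodes (or be invoked through an interpreter, as in <code>-mapper "python mapper.py"</code>), and start with a valid shebang line, since the framework execs the file directly. A missing or CRLF-damaged shebang, or a script that was never shipped, are both classic causes of the <code>error=2, No such file or directory</code> seen in Answer 3. The sketch below shows what such a mapper might look like; the word-count logic and the <code>map_line</code> helper are illustrative assumptions, not code from the question:</p>

```python
#!/usr/bin/env python
# Minimal hadoop-streaming mapper sketch. The shebang line above is what
# lets the task runner exec this file directly when it is submitted with
# "-mapper mapper.py"; if it is missing or carries Windows CRLF endings,
# the exec can fail even though the file exists and is chmod a+x.
import sys


def map_line(line):
    # Emit one "word<TAB>1" record per word. Streaming treats the text
    # before the first tab as the key, which the framework sorts and
    # groups before any reducer sees it.
    return ["%s\t1" % word for word in line.split()]


if __name__ == "__main__":
    for line in sys.stdin:
        for record in map_line(line):
            print(record)
```

<p>Ship it with <code>-file /path/to/mapper.py -mapper mapper.py</code> after <code>chmod a+x mapper.py</code>, or sidestep the executable-bit and shebang issues entirely with <code>-mapper "python mapper.py"</code> as Answer 1 suggests.</p>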