hadoop-streaming: reducer in pending state, doesn't start?


Question


I have a map reduce job which was running fine until I started to see some failed map tasks like

attempt_201110302152_0003_m_000010_0    task_201110302152_0003_m_000010 worker1 FAILED  
Task attempt_201110302152_0003_m_000010_0 failed to report status for 602 seconds. Killing!
-------
Task attempt_201110302152_0003_m_000010_0 failed to report status for 607 seconds. Killing!
attempt_201110302152_0003_m_000010_1    task_201110302152_0003_m_000010 master  FAILED  
java.lang.RuntimeException: java.io.IOException: Spill failed
    at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:325)
    at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:545)
    at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:132)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
    at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:36)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:436)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:261)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
    at org.apache.hadoop.mapred.Child.main(Child.java:255)
Caused by: java.io.IOException: Spill failed
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.collect(MapTask.java:1029)
    at org.apache.hadoop.mapred.MapTask$OldOutputCollector.collect(MapTask.java:592)
    at org.apache.hadoop.streaming.PipeMapRed$MROutputThread.run(PipeMapRed.java:381)
Caused by: org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any valid local directory for output/spill11.out
    at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:381)
    at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:146)
    at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:127)
    at org.apache.hadoop.mapred.MapOutputFile.getSpillFileForWrite(MapOutputFile.java:121)
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:1392)
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.access$1800(MapTask.java:853)
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer$SpillThread.run(MapTask.java:1344)

Now the reducers never start executing, whereas earlier they would begin copying map output even while map tasks were still running. All I see is this:

11/10/31 03:35:12 INFO streaming.StreamJob:  map 95%  reduce 0%
11/10/31 03:44:01 INFO streaming.StreamJob:  map 96%  reduce 0%
11/10/31 03:51:56 INFO streaming.StreamJob:  map 97%  reduce 0%
11/10/31 03:55:41 INFO streaming.StreamJob:  map 98%  reduce 0%
11/10/31 04:04:18 INFO streaming.StreamJob:  map 99%  reduce 0%
11/10/31 04:20:32 INFO streaming.StreamJob:  map 100%  reduce 0%

I am a newbie to Hadoop and MapReduce and don't really know what might be causing this code to fail when it ran successfully before.

Please help

Thank you


Answer 1:


You should have a look at mapred.task.timeout. If you have a very large amount of data and few machines to process it, your tasks may simply be timing out; the default is 600000 ms (600 seconds), which matches the roughly 600-second kills in your log. You can set this value to 0, which disables the timeout.
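
For a streaming job the property can be passed on the command line with -D (which must come before the other streaming options). A minimal sketch, assuming a Hadoop 0.20/1.x layout; the jar path, input/output paths, and script names are placeholders:

hadoop jar $HADOOP_HOME/contrib/streaming/hadoop-streaming-*.jar \
    -D mapred.task.timeout=0 \
    -input /user/me/input \
    -output /user/me/output \
    -mapper mapper.sh \
    -reducer reducer.sh \
    -file mapper.sh \
    -file reducer.sh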

Alternatively, you can report progress periodically to tell the framework that something is still happening, so the task doesn't time out; in the Java API that is a call to context.progress() (or Reporter.progress() in the old API).
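
In a streaming job there is no Java context object to call; instead, the framework recognizes specially formatted lines written to stderr (reporter:status:<message> updates the task status and counts as progress). A minimal sketch of a bash identity mapper that heartbeats this way; the per-record work is a placeholder:

#!/usr/bin/env bash
# Hypothetical streaming mapper: pass records through unchanged,
# emitting a status line to stderr every 10000 records so the
# framework knows the task is still alive.
n=0
while IFS= read -r line; do
    # ... real per-record processing would go here ...
    printf '%s\n' "$line"
    n=$((n + 1))
    if [ $((n % 10000)) -eq 0 ]; then
        echo "reporter:status:processed $n records" >&2
    fi
done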




Answer 2:


I had this same problem and there were two things I did to resolve it:

The first is to compress your mapper output. As the mapper runs, its output is spilled (written) to local disk, and that output sometimes has to be sent to a reducer on another machine. Compressing it reduces the network traffic as well as the disk space required on the machine running the mapper. For the intermediate map output the old-style property is mapred.compress.map.output=true; the related mapred.output.compress=true compresses the final job output.
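
As a sketch, the flags might be added to the streaming invocation like this (the codec choice is an assumption; DefaultCodec is the stock zlib-based codec):

hadoop jar $HADOOP_HOME/contrib/streaming/hadoop-streaming-*.jar \
    -D mapred.compress.map.output=true \
    -D mapred.map.output.compression.codec=org.apache.hadoop.io.compress.DefaultCodec \
    -D mapred.output.compress=true \
    -input /user/me/input \
    -output /user/me/output \
    -mapper mapper.sh \
    -reducer reducer.sh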

The second thing I did was increase the ulimits for the hdfs, mapred, and hbase users. I added these lines to /etc/security/limits.conf:

mapred      soft    nproc       16384
mapred      soft    nofile      16384
hdfs        soft    nproc       16384
hdfs        soft    nofile      16384
hbase       soft    nproc       16384
hbase       soft    nofile      16384
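
These limits only apply to new sessions, so the daemons need to be restarted before they take effect. A quick sanity check, run as root, to confirm the soft limits for each user (a sketch):

sudo -u mapred bash -c 'ulimit -Sn; ulimit -Su'   # open files, then processes
sudo -u hdfs  bash -c 'ulimit -Sn; ulimit -Su'
sudo -u hbase bash -c 'ulimit -Sn; ulimit -Su'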

This post has a more thorough explanation: http://www.cloudera.com/blog/2009/03/configuration-parameters-what-can-you-just-ignore/



Source: https://stackoverflow.com/questions/7956644/hadoop-streaming-reducer-in-pending-state-doesnt-start
