Hadoop Spill failure

Submitted by 廉价感情 on 2019-12-04 07:59:24

Ok, all problems are solved.

The MapReduce serialization machinery needs a default (no-argument) constructor so it can instantiate value objects by reflection, but Hadoop's org.apache.hadoop.io.ArrayWritable does not declare one.
That's why java.lang.NoSuchMethodException: org.apache.hadoop.io.ArrayWritable.<init>() was thrown and caused the weird spill exception.

A simple wrapper subclass made ArrayWritable really writable and fixed it. Strange that Hadoop did not provide this out of the box.
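To see why the wrapper helps: the deserializer instantiates value classes reflectively via their no-argument constructor, so a subclass whose no-arg constructor supplies the element type to the parent fixes the NoSuchMethodException. The sketch below illustrates the mechanism in plain Java without depending on the Hadoop jars; ArrayLike, TextArrayLike, and instantiate are hypothetical stand-ins for ArrayWritable, the wrapper, and Hadoop's reflective instantiation, not real Hadoop API names.

```java
import java.lang.reflect.Constructor;

// Stand-in for org.apache.hadoop.io.ArrayWritable: its only constructors
// take the element class, so there is no no-arg constructor to reflect on.
class ArrayLike {
    private final Class<?> valueClass;
    public ArrayLike(Class<?> valueClass) { this.valueClass = valueClass; }
    public Class<?> getValueClass() { return valueClass; }
}

// The wrapper fix: a subclass whose no-arg constructor supplies the
// element class itself, so reflective instantiation succeeds.
class TextArrayLike extends ArrayLike {
    public TextArrayLike() { super(String.class); }
}

public class SpillDemo {
    // Mimics what the deserializer does: look up the no-arg constructor
    // via reflection and invoke it.
    static Object instantiate(Class<?> cls) throws Exception {
        Constructor<?> ctor = cls.getDeclaredConstructor();
        return ctor.newInstance();
    }

    public static void main(String[] args) throws Exception {
        try {
            instantiate(ArrayLike.class);
        } catch (NoSuchMethodException e) {
            // Same failure mode as ArrayWritable.<init>() in the spill
            System.out.println("no-arg lookup failed: " + e.getMessage());
        }
        Object ok = instantiate(TextArrayLike.class);
        System.out.println("wrapper instantiated: " + (ok != null));
    }
}
```

In real Hadoop code the wrapper is the same one-liner idea, e.g. a subclass of ArrayWritable whose no-arg constructor calls super(Text.class).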

This problem came up for me when the output of one of my map jobs contained a tab character ("\t") or a line-break character ("\r" or "\n") - Hadoop doesn't handle these well, since they double as its field and record separators, and it fails. I was able to solve this with this piece of Python code:

# Strip the characters Hadoop treats as field/record separators
output = output.replace("\t", "").replace("\r", "").replace("\n", "")

You may have to do something else for your app.
