What is the reason for having Writable wrapper classes in Hadoop MapReduce for Java types?


Question


It seems to me that an org.apache.hadoop.io.serializer.Serialization could be written to serialize Java types directly, in the same format that the wrapper classes serialize them into. That way the Mappers and Reducers wouldn't have to deal with the wrapper classes.
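
For illustration, a minimal sketch of such a Serialization for java.lang.Long, assuming Hadoop's org.apache.hadoop.io.serializer API; the class name LongSerialization is made up, and it mirrors the 8-byte big-endian layout that LongWritable writes:

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import org.apache.hadoop.io.serializer.Deserializer;
import org.apache.hadoop.io.serializer.Serialization;
import org.apache.hadoop.io.serializer.Serializer;

// Hypothetical: serializes java.lang.Long directly, in the same byte
// layout as LongWritable, so Mappers and Reducers can use plain Long.
public class LongSerialization implements Serialization<Long> {

    @Override
    public boolean accept(Class<?> c) {
        return Long.class.isAssignableFrom(c);
    }

    @Override
    public Serializer<Long> getSerializer(Class<Long> c) {
        return new Serializer<Long>() {
            private DataOutputStream out;
            public void open(OutputStream os) { out = new DataOutputStream(os); }
            public void serialize(Long value) throws IOException { out.writeLong(value); }
            public void close() throws IOException { out.close(); }
        };
    }

    @Override
    public Deserializer<Long> getDeserializer(Class<Long> c) {
        return new Deserializer<Long>() {
            private DataInputStream in;
            public void open(InputStream is) { in = new DataInputStream(is); }
            public Long deserialize(Long ignored) throws IOException { return in.readLong(); }
            public void close() throws IOException { in.close(); }
        };
    }
}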


Answer 1:


There is nothing stopping you from changing the serialization to use a different mechanism, such as the Java Serializable interface, or something like Thrift, Protocol Buffers, etc.

In fact, Hadoop comes with an (experimental) Serialization implementation for Java Serializable objects - just configure the serialization factory to use it. The default serialization mechanism is WritableSerialization, but this can be changed by setting the following configuration property:

io.serializations=org.apache.hadoop.io.serializer.JavaSerialization
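
For example, the property can also be set programmatically on the job's Configuration. The driver class and job name below are illustrative, and WritableSerialization is kept in the list so built-in Writable types continue to work:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class JavaSerializationDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Register JavaSerialization alongside the default WritableSerialization.
        conf.setStrings("io.serializations",
                "org.apache.hadoop.io.serializer.JavaSerialization",
                "org.apache.hadoop.io.serializer.WritableSerialization");
        Job job = Job.getInstance(conf, "java-serialization-demo");
        // ... configure input/output formats, mapper, reducer, etc.
    }
}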

Bear in mind, however, that anything that expects a Writable (input/output formats, partitioners, comparators, etc.) will need to be replaced by a version that can be passed a Serializable instance rather than a Writable instance; one such replacement is sketched below.
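
As one concrete example of that replacement: Hadoop ships a JavaSerializationComparator, which deserializes keys and compares them via Comparable, so a job shuffling plain Java types might be wired up roughly like this (a sketch; the helper class and method names are made up):

import org.apache.hadoop.io.serializer.JavaSerializationComparator;
import org.apache.hadoop.mapreduce.Job;

public class SerializableKeySetup {
    // Hypothetical helper: shuffle plain Serializable Java types instead of Writables.
    static void useJavaTypes(Job job) {
        job.setMapOutputKeyClass(String.class);  // java.lang.String is Serializable and Comparable
        job.setMapOutputValueClass(Long.class);
        // The default sort comparator expects WritableComparable keys; swap in
        // JavaSerializationComparator, which deserializes keys and calls compareTo().
        job.setSortComparatorClass(JavaSerializationComparator.class);
    }
}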

Some more links for the curious reader:

  • http://www.tom-e-white.com/2008/07/rpc-and-serialization-with-hadoop.html
  • What are the connections and differences between Hadoop Writable and java.io.serialization? - a similar question to this one, in which Tariq links to a thread where Doug Cutting explains the rationale for using Writables over Serializables


Source: https://stackoverflow.com/questions/17203661/what-is-the-reason-for-having-writable-wrapper-classes-in-hadoop-mapreduce-for-j
