Protobuf RPC not available on Hadoop 2.2.0 single node server?

Submitted by 戏子无情 on 2019-12-07 18:47:48

Question


I am trying to run a Hadoop 2.2.0 MapReduce job on my local single-node cluster, installed by following this tutorial: http://codesfusion.blogspot.co.at/2013/10/setup-hadoop-2x-220-on-ubuntu.html?m=1

However, on the server side the following exception is thrown:

org.apache.hadoop.ipc.RpcNoSuchProtocolException: Unknown protocol: org.apache.hadoop.yarn.api.ApplicationClientProtocolPB
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.getProtocolImpl(ProtobufRpcEngine.java:527)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:566)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042)

Is there a way for me to configure Protobuf RPC to be available on the server side? Do I need the Hadoop native libraries for this? Or can I somehow switch to Writable/Avro RPC on the client side?


Answer 1:


OK, I found the reason: I was connecting to the wrong port for the YARN ResourceManager. The correct configuration is: yarn.resourcemanager.address=localhost:8032 (see the sketch below).
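For reference, a minimal yarn-site.xml sketch with that setting (the localhost value assumes a single-node setup like the one in the question; 8032 is the default ResourceManager client IPC port in Hadoop 2.x):

<!-- yarn-site.xml: point clients at the ResourceManager's client IPC port -->
<configuration>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>localhost:8032</value>
  </property>
</configuration>

This fits the symptom above: if the client connects to a port served by a different Hadoop daemon, that server does not have ApplicationClientProtocolPB registered and answers with exactly this "Unknown protocol" RpcNoSuchProtocolException.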




Answer 2:


In my case, I was getting the same error in the logs when there was not enough memory for the ApplicationMaster and the YARN containers combined. Reducing the yarn.app.mapreduce.am.resource.mb property made it work on my single-node installation.
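A hedged sketch of that change (the property name is from the answer above and lives in mapred-site.xml; the 512 MB value and the matching heap option are illustrative assumptions, not values from the original post; the Hadoop 2.2.0 default for the AM container is 1536 MB):

<!-- mapred-site.xml: shrink the MapReduce ApplicationMaster container -->
<configuration>
  <property>
    <name>yarn.app.mapreduce.am.resource.mb</name>
    <value>512</value> <!-- assumed value, down from the 1536 MB default -->
  </property>
  <property>
    <!-- keep the AM's JVM heap below the container size; assumed setting -->
    <name>yarn.app.mapreduce.am.command-opts</name>
    <value>-Xmx384m</value>
  </property>
</configuration>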



Source: https://stackoverflow.com/questions/20163657/protobuf-rpc-not-available-on-hadoop-2-2-0-single-node-server
