hdfs - ls: Failed on local exception: com.google.protobuf.InvalidProtocolBufferException:

Posted 2020-01-02 01:14:05

Question


I am trying to use the command below to list my directories in HDFS:

ubuntu@ubuntu:~$ hadoop fs -ls hdfs://127.0.0.1:50075/ 
ls: Failed on local exception: com.google.protobuf.InvalidProtocolBufferException: 
Protocol    message end-group tag did not match expected tag.; 
Host Details : local host is: "ubuntu/127.0.0.1"; destination host is: "ubuntu":50075; 

Here is my /etc/hosts file

127.0.0.1       ubuntu localhost
#127.0.1.1      ubuntu

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

How do I properly use hdfs:// to list my dirs?

I am using Cloudera 4.3 on Ubuntu 12.04.


Answer 1:


HDFS is not running on port 50075. To check which port HDFS is using, run the following command on Linux:

hdfs getconf -confKey fs.default.name

You will get output something like:

hdfs://hmaster:54310

Correct your URL accordingly.
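The host and port from that output can be plugged straight back into the listing command. A minimal sketch, assuming `getconf` printed the example value shown above:

```shell
# Suppose `hdfs getconf -confKey fs.default.name` printed this value
# (hypothetical host/port, taken from the example output above):
uri="hdfs://hmaster:54310"

# Split the URI into host and port with parameter expansion:
hostport="${uri#hdfs://}"      # strip the scheme -> hmaster:54310
host="${hostport%%:*}"         # everything before the colon -> hmaster
port="${hostport##*:}"         # everything after the colon  -> 54310

# The corrected listing command then becomes:
echo "hadoop fs -ls hdfs://${host}:${port}/"
```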




Answer 2:


In Cloudera Manager, check the NameNode configuration item "NameNode Service RPC Port" (dfs.namenode.servicerpc-address). Use that port number in the URL, and it should work fine.
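As a sketch of plugging that port into the URL (the `ubuntu:8022` value here is only an assumption; read the real value from "NameNode Service RPC Port" in Cloudera Manager):

```shell
# Hypothetical value of dfs.namenode.servicerpc-address as shown in
# Cloudera Manager (8022 is a common CDH service RPC port; verify yours):
addr="ubuntu:8022"

# Build the URL from it:
host="${addr%%:*}"
port="${addr##*:}"
echo "hadoop fs -ls hdfs://${host}:${port}/"
```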




Answer 3:


In /usr/local/hadoop/etc/hadoop/core-site.xml, use 0.0.0.0 in place of localhost, i.e.

change <value>hdfs://localhost:50075</value> to

<value>hdfs://0.0.0.0:50075</value>

This solved the problem for me.
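For reference, the full property in core-site.xml would then look something like this (the key name fs.default.name is the CDH4-era name for the default filesystem property; newer Hadoop releases call it fs.defaultFS):

```xml
<!-- Hypothetical core-site.xml fragment: bind the default FS to all interfaces -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://0.0.0.0:50075</value>
</property>
```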




Answer 4:


Is your NameNode actually running on port 50075? You don't need a full URL just to list directories. Simply use hadoop fs -ls /. This will list all the directories under your root directory.




Answer 5:


Can you check your hostname? The same name (ubuntu) should appear in both your /etc/hostname file and your /etc/hosts file.
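One quick way to verify the two files agree. This is a minimal sketch that checks hypothetical copies of the files (contents taken from the question); substitute the real /etc/hostname and /etc/hosts on your machine:

```shell
# Hypothetical copies of the two files (contents from the question above):
hosts_file=$(mktemp)
hostname_file=$(mktemp)
printf '127.0.0.1\tubuntu localhost\n' > "$hosts_file"
printf 'ubuntu\n' > "$hostname_file"

# The name in /etc/hostname should appear as a whole word in /etc/hosts:
name=$(cat "$hostname_file")
if grep -qw "$name" "$hosts_file"; then
  result="hostname '$name' found in hosts file"
else
  result="hostname '$name' MISSING from hosts file"
fi
echo "$result"
rm -f "$hosts_file" "$hostname_file"
```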




Answer 6:


Make sure the TCP (RPC) port of the NameNode is the one you are connecting to; it is defined in hdfs-site.xml:

<property>
<name>dfs.namenode.rpc-address.nn1</name>
<value>r101072036.sqa.zmf:9000</value>
</property>

My problem was that I used the http-address port to connect to the NameNode, which causes the same exception as yours.

The HTTP port is also configured in hdfs-site.xml:

<property>
<name>dfs.namenode.http-address.nn1</name>
<value>r101072036.sqa.zmf:8000</value>
</property>



Answer 7:


This error can arise because:

  1. The client is not able to contact the NameNode.
  2. The NameNode might not be running (you can check by running the jps command).
  3. Something else is occupying that particular port.

Check what is running on the port with netstat -tulpn | grep 8080, kill it with kill -9 <PID>, then restart the NameNode.
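The kill step can be scripted by pulling the PID out of the netstat output. A sketch over one hypothetical line of `netstat -tulpn` output (the port and PID here are made up):

```shell
# One hypothetical line of `netstat -tulpn` output for the suspect port:
line="tcp  0  0 0.0.0.0:50075  0.0.0.0:*  LISTEN  4321/java"

# The last field is PID/program name; take the part before the slash:
pid=$(echo "$line" | awk '{print $NF}' | cut -d/ -f1)
echo "$pid"

# In practice:
#   netstat -tulpn | grep 50075      # find the line for the port
#   kill -9 "$pid"                   # free the port
#   hadoop-daemon.sh start namenode  # restart the NameNode
```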


Source: https://stackoverflow.com/questions/16372997/hdfs-ls-failed-on-local-exception-com-google-protobuf-invalidprotocolbuffere
