hadoop hdfs points to file:/// not hdfs://


Question


So I installed Hadoop via Cloudera Manager cdh3u5 on CentOS 5. When I run the command

hadoop fs -ls /

I expected to see the contents of hdfs://localhost.localdomain:8020/

However, it returned the contents of file:///

That said, I can still access HDFS explicitly through

hadoop fs -ls hdfs://localhost.localdomain:8020/

But when installing other applications such as Accumulo, Accumulo would automatically detect the Hadoop filesystem as file:///

Question is, has anyone run into this issue, and how did you resolve it?

I had a look at "HDFS thrift server returns content of local FS, not HDFS", which was a similar issue, but it did not solve this one. Also, I do not get this issue with Cloudera Manager cdh4.


Answer 1:


By default, Hadoop uses local mode. You probably need to set fs.default.name to hdfs://localhost.localdomain:8020/ in $HADOOP_HOME/conf/core-site.xml.

To do this, add the following to core-site.xml:

<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost.localdomain:8020/</value>
</property>

Accumulo is confused because it uses the same default configuration to figure out where HDFS is, and that configuration defaults to file:///
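
To see the mechanism, here is a minimal sketch (my own illustration assuming a standard Hadoop client classpath, not code from the original answer) that loads the same default configuration and reports which filesystem it resolves to:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class DefaultFsCheck {
    public static void main(String[] args) throws Exception {
        // Loads core-site.xml from the classpath, just as the hadoop CLI
        // and Accumulo do when they look up the default filesystem.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // Prints file:/// when fs.default.name is unset, and
        // hdfs://localhost.localdomain:8020/ once core-site.xml sets it.
        System.out.println("Default filesystem: " + fs.getUri());
    }
}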




Answer 2:


We should specify the NameNode metadata directory and the DataNode data directory:

dfs.name.dir / dfs.namenode.name.dir (NameNode metadata)

dfs.data.dir / dfs.datanode.data.dir (DataNode blocks)

fs.default.name

The dfs.* properties go in hdfs-site.xml (dfs.name.dir and dfs.data.dir are the older names, replaced by dfs.namenode.name.dir and dfs.datanode.data.dir in newer releases), while fs.default.name goes in core-site.xml, as shown in Answer 1. Then format the NameNode (see the hdfs-site.xml sketch below).
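
As an illustrative sketch only (the /data/dfs/... paths below are placeholders I chose, not values from the original answer), the hdfs-site.xml entries might look like:

<property>
  <name>dfs.namenode.name.dir</name>
  <!-- placeholder path: where the NameNode keeps its metadata -->
  <value>/data/dfs/nn</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <!-- placeholder path: where the DataNode stores block data -->
  <value>/data/dfs/dn</value>
</property>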

To format the HDFS NameNode:

hadoop namenode -format

Enter 'Y' when prompted to confirm formatting the NameNode. Then restart the HDFS service and deploy the client configuration to access HDFS.

If you have already done the above steps, ensure the client configuration is deployed correctly and points to the actual cluster endpoints.



Source: https://stackoverflow.com/questions/12391226/hadoop-hdfs-points-to-file-not-hdfs
