“start-all.sh” and “start-dfs.sh” from master node do not start the slave node services?


Question


I have updated the /conf/slaves file on the Hadoop master node with the hostnames of my slave nodes, but I'm not able to start the slaves from the master: I have to start each slave individually before my 5-node cluster is up and running. How can I start the whole cluster with a single command from the master node?

Also, a SecondaryNameNode is running on all of the slaves. Is that a problem? If so, how can I remove it from the slaves? I think there should be only one SecondaryNameNode in a cluster with one NameNode; am I right?

Thank you!


Answer 1:


In Apache Hadoop 3.0 the slaves file was renamed to workers: list your slave nodes in the $HADOOP_HOME/etc/hadoop/workers file, one hostname per line.
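
As a minimal sketch, assuming three worker hosts with the hypothetical hostnames slave1 through slave3, $HADOOP_HOME/etc/hadoop/workers would contain:

    slave1
    slave2
    slave3

With passwordless SSH configured from the master to each listed host, running start-dfs.sh and start-yarn.sh (or start-all.sh) from $HADOOP_HOME/sbin on the master should then start the corresponding daemons on every listed worker.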


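On the SecondaryNameNode question: yes, a cluster with a single NameNode should normally run only one SecondaryNameNode. Its placement is determined by the dfs.namenode.secondary.http-address property in hdfs-site.xml; a hedged sketch that pins it to a single host, assuming the hypothetical hostname master and Hadoop 3's default port 9868:

    <property>
      <name>dfs.namenode.secondary.http-address</name>
      <value>master:9868</value>
    </property>

After updating hdfs-site.xml, stop the stray daemons on the slaves (hadoop-daemon.sh stop secondarynamenode, or in Hadoop 3, hdfs --daemon stop secondarynamenode) and restart HDFS from the master.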

Source: https://stackoverflow.com/questions/48910606/start-all-sh-and-start-dfs-sh-from-master-node-do-not-start-the-slave-node-s
