Change ssh default port in hadoop multi cluster [closed]

Submitted by 大兔子大兔子 on 2020-01-02 07:15:16

Question


My Hadoop multi-node cluster has 3 nodes: one namenode and two datanodes. I am using HBase for storing data. For certain reasons I want to change the default SSH port number, which I know how to do, but if I change it, what configuration changes will I have to make in Hadoop and HBase?

I saw link , which only explains the configuration change for Hadoop, but I think the configuration of HBase, ZooKeeper and YARN also needs to be changed. Am I right? If so, what changes do I need to make in Hadoop and HBase?

Hadoop version 2.7.1

HBase version 1.0.1.1

Help Appreciated :)


Answer 1:


SSH isn't a Hadoop-managed configuration, so it has nothing to do with Spark, HBase, ZooKeeper or YARN other than adding new nodes to the cluster and inter-process communication.

You'll have to edit /etc/ssh/sshd_config on every node to change any SSH-related settings, then restart sshd followed by all the Hadoop services.

The related line is

Port 22

Change the port number, then do

sudo service sshd restart
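Putting the two steps above together, here is a minimal sketch for one node. The port 2222 is just an example, not from the question, and the sed pattern assumes the stock `Port 22` line (possibly commented out) in sshd_config:

```shell
# Hypothetical sketch: move sshd to port 2222 on one node.
NEW_PORT=2222
sudo sed -i -E "s/^#?Port 22$/Port ${NEW_PORT}/" /etc/ssh/sshd_config
sudo service sshd restart
# Confirm sshd now listens on the new port before closing your session:
ss -tln | grep ":${NEW_PORT} "
```

Keep your existing SSH session open until you have confirmed that you can log in on the new port, otherwise you can lock yourself out of the node.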

In hadoop-env.sh there is the HADOOP_SSH_OPTS environment variable; it holds extra options that are passed to the ssh commands the start/stop scripts use to reach the worker nodes, so you can set the port there like so.

export HADOOP_SSH_OPTS="-p <num>"
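For example, a sketch of wiring this up on the namenode; 2222 and the hostname datanode1 are placeholders, and $HADOOP_HOME is assumed to point at your installation:

```shell
# Hypothetical sketch: make Hadoop's start/stop scripts ssh on port 2222.
echo 'export HADOOP_SSH_OPTS="-p 2222"' >> "$HADOOP_HOME/etc/hadoop/hadoop-env.sh"
# Verify passwordless ssh still works on the new port before restarting:
ssh -p 2222 datanode1 hostname
```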

The same should apply to HBase's start/stop scripts, via hbase-env.sh:

export HBASE_SSH_OPTS="-p <num>"

Once done setting all the configs, restart the Hadoop services

stop-all.sh
start-all.sh
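Alternatively (not from the original answer, just a common workaround), you can pin the port per host in the hadoop user's ~/.ssh/config on each node; ssh reads this file automatically, so the *_SSH_OPTS variables become unnecessary. The hostnames and port below are placeholders:

```shell
# Hypothetical sketch: per-host Port entries in ~/.ssh/config.
cat >> ~/.ssh/config <<'EOF'
Host namenode datanode1 datanode2
    Port 2222
EOF
chmod 600 ~/.ssh/config   # ssh refuses config files writable by others
```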


Source: https://stackoverflow.com/questions/35224304/change-ssh-default-port-in-hadoop-multi-cluster
