Since this setup relies on cross-host network communication between Docker containers, you may want to read this article first:
[Docker series] Setting up a Docker overlay network for cross-host communication between multiple Alibaba Cloud servers
Environment:
Three hosts: 11.11.11.11, 11.11.11.22, 11.11.11.33
Each host runs one ZooKeeper node and one Kafka node, for a total of three ZooKeeper nodes and three Kafka nodes.
Containers communicate with each other over an overlay network.
I. Create the overlay network
# Create the overlay network used by the cluster services (it only needs to be created on the master node; the worker nodes pick it up automatically)
docker network create --driver overlay --subnet=15.0.0.0/24 --gateway=15.0.0.254 --attachable cluster-overlay-elk
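Before creating any containers, you can confirm the network is in place by listing and inspecting it (a quick verification step that is not part of the original walkthrough; on worker nodes a swarm-scoped overlay network may only show up in docker network ls once a container on that host attaches to it):
# verify the overlay network
docker network ls --filter name=cluster-overlay-elk
docker network inspect cluster-overlay-elk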
II. Create and run the containers
1. Create the container
[root@master conf]# sudo docker run -dit \
--net cluster-overlay-elk \
--ip 15.0.0.250 \
--restart=always \
--privileged=true \
--hostname=hadoop_zookeeper \
--name=hadoop-zookeeper-one \
-p 12181:2181 \
-v /usr/docker/software/zookeeper/data/:/data/ \
-v /usr/docker/software/zookeeper/datalog/:/datalog/ \
-v /usr/docker/software/zookeeper/logs/:/logs/ \
-v /usr/docker/software/zookeeper/conf/:/conf/ \
-v /usr/docker/software/zookeeper/bin/:/apache-zookeeper-3.5.6-bin/bin/ \
-v /etc/localtime:/etc/localtime \
-e TZ='Asia/Shanghai' \
-e LANG="en_US.UTF-8" \
zookeeper:3.5.6
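The other two hosts each need an equivalent container. Below is a sketch of what those commands might look like, reusing the container IPs 15.0.0.249 and 15.0.0.248 that appear in zoo.cfg later; the hostnames and container names (hadoop-zookeeper-two/three) are placeholders I made up, so adjust them and the volume paths to your environment:
# on host 11.11.11.22
sudo docker run -dit \
--net cluster-overlay-elk \
--ip 15.0.0.249 \
--restart=always \
--privileged=true \
--hostname=hadoop_zookeeper_two \
--name=hadoop-zookeeper-two \
-p 12181:2181 \
-v /usr/docker/software/zookeeper/data/:/data/ \
-v /usr/docker/software/zookeeper/datalog/:/datalog/ \
-v /usr/docker/software/zookeeper/logs/:/logs/ \
-v /usr/docker/software/zookeeper/conf/:/conf/ \
-v /usr/docker/software/zookeeper/bin/:/apache-zookeeper-3.5.6-bin/bin/ \
-v /etc/localtime:/etc/localtime \
-e TZ='Asia/Shanghai' \
-e LANG="en_US.UTF-8" \
zookeeper:3.5.6
# on host 11.11.11.33: the same command with --ip 15.0.0.248,
# --hostname=hadoop_zookeeper_three and --name=hadoop-zookeeper-three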
2. Edit the zoo.cfg configuration file
Inside the container, zoo.cfg lives in /conf, which I have already mounted to /usr/docker/software/zookeeper/conf/ on the host. So we can go straight to that host directory, edit the file, and then restart the container:
[root@master conf]# clear
[root@master conf]# pwd
/usr/docker/software/zookeeper/conf
[root@master conf]# ll
total 8
-rw-r--r-- 1 mysql mysql 308 Jan 6 10:37 zoo.cfg
-rw-r--r-- 1 mysql mysql 146 Jan 6 11:05 zoo.cfg.dynamic.next
[root@master conf]# vim zoo.cfg
dataDir=/data
dataLogDir=/datalog
tickTime=2000
initLimit=5
syncLimit=2
autopurge.snapRetainCount=3
autopurge.purgeInterval=0
maxClientCnxns=60
standaloneEnabled=true
admin.enableServer=true
clientPort=2181
server.10=15.0.0.250:2888:3888
server.11=15.0.0.249:2888:3888
server.12=15.0.0.248:2888:3888
Notes:
clientPort: the port this node listens on for client connections.
server.10, server.11, server.12: the IP addresses and ports the three nodes use for quorum communication and leader election.
myid: the numeric ID of this node, which must match one of the server.X entries above. It is not a zoo.cfg property; it is stored in a separate myid file (see below).
Note:
Every host must be configured, and each host needs a different myid. In this image the myid file lives in /data, which maps to /usr/docker/software/zookeeper/data on the host.
[root@master data]# clear
[root@master data]# pwd
/usr/docker/software/zookeeper/data
[root@master data]# ll
total 8
-rw-r--r-- 1 mysql mysql 3 Jan 6 10:38 myid
drwxr-xr-x 2 mysql mysql 4096 Jan 6 11:10 version-2
[root@master data]# vim myid
10
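On the other two hosts, the myid must match their server.X entries in zoo.cfg. A sketch, assuming the container with IP 15.0.0.249 runs on host 11.11.11.22 and the one with IP 15.0.0.248 runs on 11.11.11.33:
# on the host running the 15.0.0.249 container (server.11)
echo 11 > /usr/docker/software/zookeeper/data/myid
# on the host running the 15.0.0.248 container (server.12)
echo 12 > /usr/docker/software/zookeeper/data/myid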
After all three hosts are configured, restart the containers.
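For example (the container names on hosts two and three are the placeholder names from the sketch above):
docker restart hadoop-zookeeper-one    # host 11.11.11.11
docker restart hadoop-zookeeper-two    # host 11.11.11.22 (placeholder name)
docker restart hadoop-zookeeper-three  # host 11.11.11.33 (placeholder name)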
III. Check ZooKeeper status
[root@master data]# clear
# enter the container
[root@master data]# docker exec -it ba49a577b975 /bin/bash
root@hadoop_zookeeper:/apache-zookeeper-3.5.6-bin# ls
LICENSE.txt NOTICE.txt README.md README_packaging.txt bin conf docs lib
# check ZooKeeper status with the stat four-letter command
root@hadoop_zookeeper:/apache-zookeeper-3.5.6-bin# echo stat | nc 15.0.0.250 2181
stat is not executed because it is not in the whitelist.
Error message:
stat is not executed because it is not in the whitelist.
Solution:
Go into the bin directory mounted on the host:
[root@master bin]# clear
[root@master bin]# pwd
/usr/docker/software/zookeeper/bin
[root@master bin]# ll
total 56
-rwxr-xr-x 1 root root 232 Oct 5 19:27 README.txt
-rwxr-xr-x 1 root root 2067 Oct 9 04:14 zkCleanup.sh
-rwxr-xr-x 1 root root 1154 Oct 9 04:14 zkCli.cmd
-rwxr-xr-x 1 root root 1621 Oct 9 04:14 zkCli.sh
-rwxr-xr-x 1 root root 1766 Oct 9 04:14 zkEnv.cmd
-rwxr-xr-x 1 root root 3690 Oct 5 19:27 zkEnv.sh
-rwxr-xr-x 1 root root 1286 Oct 5 19:27 zkServer.cmd
-rwxr-xr-x 1 root root 4573 Oct 9 04:14 zkServer-initialize.sh
-rwxr-xr-x 1 root root 9552 Jan 6 19:14 zkServer.sh
-rwxr-xr-x 1 root root 996 Oct 5 19:27 zkTxnLogToolkit.cmd
-rwxr-xr-x 1 root root 1385 Oct 5 19:27 zkTxnLogToolkit.sh
[root@master bin]# vim zkServer.sh
......
......
echo "ZooKeeper remote JMX Port set to $JMXPORT" >&2
echo "ZooKeeper remote JMX authenticate set to $JMXAUTH" >&2
echo "ZooKeeper remote JMX ssl set to $JMXSSL" >&2
echo "ZooKeeper remote JMX log4j set to $JMXLOG4J" >&2
ZOOMAIN="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=$JMXPORT -Dcom.sun.management.jmxremote.authenticate=$JMXAUTH -Dcom.sun.management.jmxremote.ssl=$JMXSSL -Dzookeeper.jmx.log4j.disable=$JMXLOG4J org.apache.zookeeper.server.quorum.QuorumPeerMain"
fi
else
echo "JMX disabled by user request" >&2
ZOOMAIN="org.apache.zookeeper.server.quorum.QuorumPeerMain"
fi
# Add the JVM option -Dzookeeper.4lw.commands.whitelist=* so that all four-letter-word commands are whitelisted
ZOOMAIN="-Dzookeeper.4lw.commands.whitelist=* ${ZOOMAIN}"
if [ "x$SERVER_JVMFLAGS" != "x" ]
then
JVMFLAGS="$SERVER_JVMFLAGS $JVMFLAGS"
fi
........
........
Restart the Docker containers, enter a container again, and check the ZooKeeper status:
[root@master data]# docker exec -it ba49a577b975 /bin/bash
root@hadoop_zookeeper:/apache-zookeeper-3.5.6-bin# echo stat | nc 15.0.0.250 2181
Zookeeper version: 3.5.6-c11b7e26bc554b8523dc929761dd28808913f091, built on 10/08/2019 20:18 GMT
Clients:
/15.0.0.250:45670[0](queued=0,recved=1,sent=0)
Latency min/avg/max: 0/0/0
Received: 1
Sent: 0
Connections: 1
Outstanding: 0
Zxid: 0x30000052b
Mode: follower
Node count: 162
root@hadoop_zookeeper:/apache-zookeeper-3.5.6-bin# echo stat | nc 15.0.0.249 2181
Zookeeper version: 3.5.6-c11b7e26bc554b8523dc929761dd28808913f091, built on 10/08/2019 20:18 GMT
Clients:
/15.0.0.250:45644[0](queued=0,recved=1,sent=0)
Latency min/avg/max: 0/0/1
Received: 460
Sent: 459
Connections: 4
Outstanding: 0
Zxid: 0x30000052b
Mode: follower
Node count: 162
root@hadoop_zookeeper:/apache-zookeeper-3.5.6-bin# echo stat | nc 15.0.0.248 2181
Zookeeper version: 3.5.6-c11b7e26bc554b8523dc929761dd28808913f091, built on 10/08/2019 20:18 GMT
Clients:
/15.0.0.250:52766[0](queued=0,recved=1,sent=0)
Latency min/avg/max: 0/0/0
Received: 1
Sent: 0
Connections: 1
Outstanding: 0
Zxid: 0x400000000
Mode: leader
Node count: 162
Proposal sizes last/min/max: -1/-1/-1
root@hadoop_zookeeper:/apache-zookeeper-3.5.6-bin#
As you can see, the ZooKeeper cluster started successfully and automatically elected a leader.
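You can also ask each node directly whether it is healthy and which role it holds, for example with the ruok four-letter command or zkServer.sh status inside the container (an optional extra check, not part of the original write-up):
# prints "imok" if the node is up and serving
echo ruok | nc 15.0.0.250 2181
# prints Mode: leader or Mode: follower, using the mounted /conf/zoo.cfg
bin/zkServer.sh status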
IV. Create the Kafka cluster
Source: CSDN
Author: 老新人
Link: https://blog.csdn.net/weixin_42697074/article/details/103862476