Zookeeper error: Cannot open channel to X at election address

你的背包 2020-12-13 06:15

I have installed ZooKeeper on 3 different AWS servers. The following is the configuration on all the servers:

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/         
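
For reference, a complete zoo.cfg for a 3-node ensemble normally also carries the clientPort and the server.N entries that the election uses (the hostnames and paths below are placeholders, not my actual values):

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=zk1.example.com:2888:3888
server.2=zk2.example.com:2888:3888
server.3=zk3.example.com:2888:3888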


        
10 Answers
  • 2020-12-13 06:40

    I had the same error in my logs; in my case, I used my node's hostname in zookeeper.conf.

    My nodes were virtual machines running CentOS 8.

    As @user2286693 said, my mistake was in the hostname resolution:

    From node1, when I ping node1:

    PING node1(localhost (::1)) 56 data bytes
    

    I checked my /etc/hosts file and found:

    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4 node1
    

    I replaced this line with:

    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    

    and it's working!
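
    A quick way to check the resolution before and after the change (a small sketch; node1 stands for whatever hostname appears in your server.N lines):

    # The hostname must NOT resolve to 127.0.0.1 / ::1 on the machine that owns it,
    # otherwise ZooKeeper binds its election port to loopback only.
    getent hosts node1
    ping -c 1 node1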

    Hope this helps someone!

  • 2020-12-13 06:41

    Had a similar issue on a 3-node ZooKeeper ensemble. The solution was as advised by espeirasbora, followed by a restart.

    This is what I did on zookeeper1, zookeeper2 and zookeeper3:

    A. Issue: the ZooKeeper servers in my ensemble could not start.

    B. Setup: 3 ZooKeeper servers on 3 machines.

    C. Error:

    In my ZooKeeper log file I could see the following errors:

    2016-06-26 14:10:17,484 [myid:1] - WARN  [SyncThread:1:FileTxnLog@334] - fsync-ing the write ahead log in SyncThread:1 took 1340ms which will adversely effect operation latency. See the ZooKeeper troubleshooting guide
    2016-06-26 14:10:17,847 [myid:1] - WARN  [RecvWorker:2:QuorumCnxManager$RecvWorker@810] - Connection broken for id 2, my id = 1, error = 
    java.io.EOFException
        at java.io.DataInputStream.readInt(DataInputStream.java:392)
        at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:795)
    2016-06-26 14:10:17,848 [myid:1] - WARN  [RecvWorker:2:QuorumCnxManager$RecvWorker@813] - Interrupting SendWorker
    2016-06-26 14:10:17,849 [myid:1] - WARN  [SendWorker:2:QuorumCnxManager$SendWorker@727] - Interrupted while waiting for message on queue
    java.lang.InterruptedException
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2088)
        at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:418)
        at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:879)
        at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$500(QuorumCnxManager.java:65)
        at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:715)
    2016-06-26 14:10:17,851 [myid:1] - WARN  [SendWorker:2:QuorumCnxManager$SendWorker@736] - Send worker leaving thread
    2016-06-26 14:10:17,852 [myid:1] - WARN  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Follower@89] - Exception when following the leader
    java.io.EOFException
        at java.io.DataInputStream.readInt(DataInputStream.java:392)
        at org.apache.jute.BinaryInputArchive.readInt(BinaryInputArchive.java:63)
        at org.apache.zookeeper.server.quorum.QuorumPacket.deserialize(QuorumPacket.java:83)
        at org.apache.jute.BinaryInputArchive.readRecord(BinaryInputArchive.java:99)
        at org.apache.zookeeper.server.quorum.Learner.readPacket(Learner.java:153)
        at org.apache.zookeeper.server.quorum.Follower.followLeader(Follower.java:85)
        at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:846)
    2016-06-26 14:10:17,854 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:Follower@166] - shutdown called
    java.lang.Exception: shutdown Follower
    

    D. Actions & resolution:

    On each server:

    a. I modified the configuration file $ZOOKEEPER_HOME/conf/zoo.cfg, setting the machine's own IP to "0.0.0.0" while keeping the IP addresses of the other 2 servers.
    b. I restarted the server.
    c. I checked the status.
    d. Voilà, everything was OK.

    See below

    -------------------------------------------------

    on Zookeeper1

    #Before modification 
    [zookeeper1]$ tail -3   $ZOOKEEPER_HOME/conf/zoo.cfg 
    server.1=zookeeper1:2888:3888
    server.2=zookeeper2:2888:3888
    server.3=zookeeper3:2888:3888
    
    #After  modification 
    [zookeeper1]$ tail -3  $ZOOKEEPER_HOME/conf/zoo.cfg 
    server.1=0.0.0.0:2888:3888
    server.2=zookeeper2:2888:3888
    server.3=zookeeper3:2888:3888
    
    #Start ZooKeeper (stop and start, or restart)
    [zookeeper1]$ $ZOOKEEPER_HOME/bin/zkServer.sh  start
    ZooKeeper JMX enabled by default
    ZooKeeper remote JMX Port set to 52128
    ZooKeeper remote JMX authenticate set to false
    ZooKeeper remote JMX ssl set to false
    ZooKeeper remote JMX log4j set to true
    Using config: /opt/zookeeper-3.4.8/bin/../conf/zoo.cfg
    Mode: follower
    
    [zookeeper1]$ $ZOOKEEPER_HOME/bin/zkServer.sh  status
    ZooKeeper JMX enabled by default
    ZooKeeper remote JMX Port set to 52128
    ZooKeeper remote JMX authenticate set to false
    ZooKeeper remote JMX ssl set to false
    ZooKeeper remote JMX log4j set to true
    Using config: /opt/zookeeper-3.4.8/bin/../conf/zoo.cfg
    Mode: follower
    

    ---------------------------------------------------------

    on Zookeeper2

    #Before modification 
    [zookeeper2]$ tail -3   $ZOOKEEPER_HOME/conf/zoo.cfg 
    server.1=zookeeper1:2888:3888
    server.2=zookeeper2:2888:3888
    server.3=zookeeper3:2888:3888
    
    #After  modification 
    [zookeeper2]$ tail -3  $ZOOKEEPER_HOME/conf/zoo.cfg 
    server.1=zookeeper1:2888:3888
    server.2=0.0.0.0:2888:3888
    server.3=zookeeper3:2888:3888
    
    #Start ZooKeeper (stop and start, or restart)
    [zookeeper2]$ $ZOOKEEPER_HOME/bin/zkServer.sh  start
    ZooKeeper JMX enabled by default
    ZooKeeper remote JMX Port set to 52128
    ZooKeeper remote JMX authenticate set to false
    ZooKeeper remote JMX ssl set to false
    ZooKeeper remote JMX log4j set to true
    Using config: /opt/zookeeper-3.4.8/bin/../conf/zoo.cfg
    Mode: follower
    
    [zookeeper2]$ $ZOOKEEPER_HOME/bin/zkServer.sh  status
    ZooKeeper JMX enabled by default
    ZooKeeper remote JMX Port set to 52128
    ZooKeeper remote JMX authenticate set to false
    ZooKeeper remote JMX ssl set to false
    ZooKeeper remote JMX log4j set to true
    Using config: /opt/zookeeper-3.4.8/bin/../conf/zoo.cfg
    Mode: follower
    

    ---------------------------------------------------------

    on Zookeeper3

    #Before modification 
    [zookeeper3]$ tail -3   $ZOOKEEPER_HOME/conf/zoo.cfg 
    server.1=zookeeper1:2888:3888
    server.2=zookeeper2:2888:3888
    server.3=zookeeper3:2888:3888
    
    #After  modification 
    [zookeeper3]$ tail -3  $ZOOKEEPER_HOME/conf/zoo.cfg 
    server.1=zookeeper1:2888:3888
    server.2=zookeeper2:2888:3888
    server.3=0.0.0.0:2888:3888
    
    #Start ZooKeeper (stop and start, or restart)
    [zookeeper3]$ $ZOOKEEPER_HOME/bin/zkServer.sh  start
    ZooKeeper JMX enabled by default
    ZooKeeper remote JMX Port set to 52128
    ZooKeeper remote JMX authenticate set to false
    ZooKeeper remote JMX ssl set to false
    ZooKeeper remote JMX log4j set to true
    Using config: /opt/zookeeper-3.4.8/bin/../conf/zoo.cfg
    Mode: follower
    
    [zookeeper3]$ $ZOOKEEPER_HOME/bin/zkServer.sh  status
    ZooKeeper JMX enabled by default
    ZooKeeper remote JMX Port set to 52128
    ZooKeeper remote JMX authenticate set to false
    ZooKeeper remote JMX ssl set to false
    ZooKeeper remote JMX log4j set to true
    Using config: /opt/zookeeper-3.4.8/bin/../conf/zoo.cfg
    Mode: follower
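
    If you want to script this change on each server, a minimal sketch (it assumes the usual myid file in the dataDir; /var/lib/zookeeper is a placeholder path):

    # Point this server's own server.N entry at 0.0.0.0, leaving the others untouched:
    MYID=$(cat /var/lib/zookeeper/myid)
    sed -i "s/^server\.${MYID}=[^:]*/server.${MYID}=0.0.0.0/" "$ZOOKEEPER_HOME/conf/zoo.cfg"
    "$ZOOKEEPER_HOME/bin/zkServer.sh" restart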
    
  • 2020-12-13 06:44

    We faced the same issue; in our case the root cause was too many client connections. The default ulimit (max open files) on an AWS EC2 instance is 1024, and this prevented the ZooKeeper nodes from communicating with each other.

    The fix is to raise the ulimit to a higher number (e.g. ulimit -n 20000), then stop and start ZooKeeper.
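
    A minimal sketch of checking and raising the limit (the zookeeper account name in limits.conf is an assumption; use whatever user runs the service):

    # Show the current open-file limit for this shell:
    ulimit -n
    # Raise it before restarting ZooKeeper:
    ulimit -n 20000
    # To persist across logins, add entries like these to /etc/security/limits.conf:
    #   zookeeper  soft  nofile  20000
    #   zookeeper  hard  nofile  20000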

  • 2020-12-13 06:46

    How have you defined the IP of the local server on each node? If you have given the public IP, then the listener will have failed to bind to the port. You must specify 0.0.0.0 for the current node:

    server.1=0.0.0.0:2888:3888
    server.2=192.168.10.10:2888:3888
    server.3=192.168.2.1:2888:3888
    

    This change must be made on the other nodes too, with each node using 0.0.0.0 for its own entry.
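
    After restarting, you can confirm that the election listener is bound on all interfaces (exact ss output varies by version; netstat -ltn works on older systems):

    # The election port (3888 here) should show a local address of 0.0.0.0 or *:
    ss -ltn | grep 3888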

  • 2020-12-13 06:46

    In my case, the issue was that I had to start all three ZooKeeper servers; only then was I able to connect using ./zkCli.sh.
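
    For example, run this on each of the three servers (the localhost:2181 address assumes the default clientPort):

    $ZOOKEEPER_HOME/bin/zkServer.sh start
    $ZOOKEEPER_HOME/bin/zkServer.sh status   # reports "leader" or "follower" once a quorum forms
    $ZOOKEEPER_HOME/bin/zkCli.sh -server localhost:2181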

  • 2020-12-13 06:47

    I had a similar issue. The status on two of my three ZooKeeper nodes was listed as "standalone", even though the zoo.cfg file indicated that they should be clustered. My third node couldn't start, with the error you described. I think what fixed it for me was running zkServer.sh start in quick succession across my three nodes, so that ZooKeeper was running everywhere before the zoo.cfg initLimit was reached. Hope this works for someone out there.
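
    For reference, with the settings from the question the window works out like this (a rough illustration; initLimit is measured in ticks):

    # Time a follower has to connect to and sync with a leader:
    #   initLimit * tickTime = 10 * 2000 ms = 20 s
    tickTime=2000
    initLimit=10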
