apache-zookeeper

Zookeeper: It is probably not running

扶醉桌前 submitted on 2019-12-01 04:10:48
I am trying to start ZooKeeper on a remote virtual machine. I use this for my project regularly and normally have no problems starting ZooKeeper, but lately when I try to start the server I get an error. When I run ./zkServer.sh start it reports that the ZooKeeper server started, but when I check the status with ./zkServer.sh status it shows "Error contacting service. It is probably not running." I am working with 5 virtual machines in total. All of these machines were fine initially. I started getting problems with machine 1, but recently I have had the same problem with all my virtual
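
A common first step with this symptom is to check whether the ZooKeeper process is actually up and whether the client port responds. A minimal troubleshooting sketch, assuming the default client port 2181 and that the log is written to zookeeper.out in the directory zkServer.sh was started from:

# Is a ZooKeeper JVM running at all?
jps | grep QuorumPeerMain
# Ask the server for its view of itself (newer releases may require
# whitelisting four-letter words via 4lw.commands.whitelist)
echo srvr | nc localhost 2181
# Is anything listening on the client port?
ss -tlnp | grep 2181
# If nothing answers, the startup log usually says why (bad myid, port clash, full disk, ...)
cat zookeeper.out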

Zookeeper ensemble not coming up

我是研究僧i submitted on 2019-12-01 03:37:00
Question: I am trying to configure an ensemble of 3 nodes following the documentation. All of them run Ubuntu Linux. On all three nodes the configuration file (zoo.cfg under $ZOOKEEPER_HOME/conf) looks like this:
tickTime=2000
dataDir=/home/zkuser/zookeeper_data
clientPort=2181
initLimit=5
syncLimit=2
server.1=ip.of.zk1:2888:3888
server.2=ip.of.zk2:2888:3888
server.3=ip.of.zk3:2888:3888
I have also placed the respective "myid" files under the /home/zkuser/zookeeper_data/ directory. These myid files contain 1
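
For an ensemble like this, each node's myid must contain the number from its own server.N line, and the peer ports 2888/3888 must be reachable between the machines. A minimal per-node sketch, assuming the dataDir above and that the nodes can reach each other at the ip.of.zkN addresses:

# on node 1 (use 2 and 3 on the other machines)
echo 1 > /home/zkuser/zookeeper_data/myid
# start every node, then check which role it took
$ZOOKEEPER_HOME/bin/zkServer.sh start
$ZOOKEEPER_HOME/bin/zkServer.sh status   # expect Mode: leader on one node, Mode: follower on the rest
# verify the quorum and election ports are open from node 1 to node 2
nc -zv ip.of.zk2 2888
nc -zv ip.of.zk2 3888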

How do I delete a Kafka Consumer Group to reset offsets?

血红的双手。 submitted on 2019-12-01 03:34:15
I want to delete a Kafka consumer group so that when the application creates a consumer and subscribes to a topic it can start at the beginning of the topic data. This is on a single-node development VM using the current latest Confluent Platform 3.1.2, which uses Kafka 0.10.1.1. I try the normal syntax: sudo /usr/bin/kafka-consumer-groups --new-consumer --bootstrap-server localhost:9092 --delete --group my_consumer_group and I get the error: Option [delete] is only valid with [zookeeper]. Note that there's no need to delete group metadata for the new consumer as the group is deleted when the
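
On Kafka 0.10.x the broker-stored ("new consumer") groups cannot be deleted with that flag; the usual workarounds are either to switch the application to a fresh group.id with auto.offset.reset=earliest, or, on Kafka 0.11.0 and later where kafka-consumer-groups gained --reset-offsets, to rewind the existing group. A sketch under those assumptions (my_topic is a placeholder, and the group must have no active members when resetting):

# option 1: application-side, no CLI needed
#   group.id=my_consumer_group_v2
#   auto.offset.reset=earliest
# option 2: Kafka 0.11.0+ only
kafka-consumer-groups --bootstrap-server localhost:9092 \
  --group my_consumer_group --topic my_topic \
  --reset-offsets --to-earliest --execute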

HBase error: ZooKeeper exists failed after 3 retries

☆樱花仙子☆ submitted on 2019-12-01 03:25:07
I am using HBase 0.94.8 in standalone mode on Ubuntu. It works fine and I am able to do every operation in the HBase shell, but after I logged off my system it gives the following error:
15/07/28 15:10:30 ERROR zookeeper.RecoverableZooKeeper: ZooKeeper exists failed after 3 retries
15/07/28 15:10:30 WARN zookeeper.ZKUtil: hconnection-0x14ed40513350009 Unable to set watcher on znode (/hbase)
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase
at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
at org.apache.zookeeper.KeeperException
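
ConnectionLoss for /hbase generally means the client can no longer reach the ZooKeeper instance that HBase manages (standalone HBase starts its own). A rough check sketch, assuming default ports and an unmodified hbase-site.xml; the availability of the zkcli subcommand may vary by release:

# are the HBase master and its embedded ZooKeeper still running after re-login?
jps | grep -E 'HMaster|HQuorumPeer'
# does the ZooKeeper client port answer? a healthy server replies "imok"
echo ruok | nc localhost 2181
# if not, restart HBase and confirm the /hbase znode is back
$HBASE_HOME/bin/stop-hbase.sh
$HBASE_HOME/bin/start-hbase.sh
$HBASE_HOME/bin/hbase zkcli ls /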

Kafka + Zookeeper: Connection to node -1 could not be established. Broker may not be available

余生颓废 submitted on 2019-12-01 02:22:47
I am running both ZooKeeper and Kafka (1 instance each) on my localhost. I successfully create a topic from Kafka:
./bin/kafka-topics.sh --zookeeper localhost:2181 --create --replication-factor 1 --partitions 1 --topic Hello-Nicola
Created topic "Hello-Nicola".
The Kafka logs show:
[2017-12-06 16:00:17,753] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)
[2017-12-06 16:03:19,347] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions Hello-Nicola-0 (kafka.server.ReplicaFetcherManager)
[2017-12-06 16:03:19,393] INFO Loading producer state from offset 0 for partition Hello
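
"Connection to node -1 could not be established" usually means a client (or another broker) is dialing an address the broker does not actually listen on or advertise; comparing the listeners settings with the bootstrap address is a common first step. A sketch assuming the stock config/server.properties location:

# what the broker binds to and what it tells clients to use
grep -E '^(listeners|advertised\.listeners)' config/server.properties
# is anything listening on 9092?
ss -tlnp | grep 9092
# can a client complete the protocol handshake at that address?
bin/kafka-broker-api-versions.sh --bootstrap-server localhost:9092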

Using Zookeeper with Solr but only have 2 servers

半世苍凉 submitted on 2019-12-01 00:57:06
I am new to Solr and am experimenting with SolrCloud - and it seems that ZooKeeper is the best way to manage high availability. However, in our production environment we only have two servers (active-active) and I am concerned that Zookeeper is not ideal on two servers because if either of them goes down the whole ensemble stops working. The workaround so far is to run two ZKs on server1 and one ZK on server2, so that at least if server2 goes down we still have quorum (but if server1 goes down, game over). What is the best practice / recommended solution for Solr in this scenario? Can it
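
The quorum arithmetic is the crux here: ZooKeeper needs a strict majority of the configured ensemble to stay up (2 of 2, 2 of 3, 3 of 5), so stacking extra ZK instances on server1 only moves the single point of failure onto that box rather than removing it; the usual recommendation is to add a third, even very small, machine for ZooKeeper. If the two-server layout stays, Solr can still be pointed at both ZK nodes; a sketch with placeholder hostnames:

# with only two ZK nodes, losing either one halts the ensemble (1 of 2 is not a majority)
bin/solr start -cloud -z "server1:2181,server2:2181" -p 8983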

Using ACL with Curator

て烟熏妆下的殇ゞ submitted on 2019-11-30 16:33:08
Question: Using CuratorFramework, could someone explain how I can: create a new path, set data for this path, and get this path, using username foo and password bar? Those who don't know this user/pass would not be able to do anything. I don't care about SSL or passwords being sent via plaintext for the purpose of this question. Answer 1: ACLs in Apache Curator are for access control. That is, ZooKeeper does not provide an authentication mechanism where clients who don't have the correct password cannot connect to
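
The question targets Curator's Java API, but the ACLs it would set are ordinary ZooKeeper digest-scheme ACLs, so the same flow can be illustrated with the stock zkCli.sh. A sketch with a placeholder node /secured (the classpath for the digest generator varies by ZooKeeper release):

# 1. generate the digest for foo:bar (prints foo:bar->foo:<base64-hash>)
java -cp "$ZOOKEEPER_HOME/*:$ZOOKEEPER_HOME/lib/*" \
  org.apache.zookeeper.server.auth.DigestAuthenticationProvider foo:bar
# 2. inside zkCli.sh: authenticate, create the node with a digest ACL, read it back
addauth digest foo:bar
create /secured "some data" digest:foo:<base64-hash>:crwda
getAcl /secured
get /secured

In Curator itself the equivalent pieces would be an authorization("digest", "foo:bar".getBytes()) on the client builder plus an ACL list passed to create(), but the exact calls should be checked against the Curator version in use.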

Kafka - Broker: Group coordinator not available

你离开我真会死。 submitted on 2019-11-30 14:37:38
I have the following setup:
zookeeper: 3.4.12
kafka: kafka_2.11-1.1.0
server1: zookeeper + kafka
server2: zookeeper + kafka
server3: zookeeper + kafka
I created a topic with replication factor 3 and 3 partitions using the kafka-topics shell script:
./kafka-topics.sh --create --zookeeper localhost:2181 --topic test-flow --partitions 3 --replication-factor 3
I use the consumer group localConsumers, and it works fine while the leader is OK.
./kafka-topics.sh --describe --zookeeper localhost:2181 --topic test-flow
Topic:test-flow PartitionCount:3 ReplicationFactor:3 Configs:
Topic: test-flow Partition: 0 Leader: 3 Replicas:
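
The group coordinator is the broker leading the relevant partition of the internal __consumer_offsets topic; if that topic was auto-created with replication factor 1 (the value shipped in Kafka's sample server.properties), losing its only replica makes the coordinator unavailable even though the data topic itself is replicated three ways. A sketch of the usual check, assuming the same CLI paths as above:

# how is the internal offsets topic actually replicated?
./kafka-topics.sh --describe --zookeeper localhost:2181 --topic __consumer_offsets
# for a fresh cluster, set this on every broker before the topic is first created
#   offsets.topic.replication.factor=3
# an already-created __consumer_offsets with RF 1 has to be expanded with kafka-reassign-partitions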