apache-zookeeper

Why is kafka not creating a topic? bootstrap-server is not a recognized option

雨燕双飞 submitted on 2019-11-29 11:34:02
Question: I am new to Kafka and trying to create a new topic on my local machine. I am following this link. Here are the steps I followed: Start ZooKeeper: bin/zookeeper-server-start.sh config/zookeeper.properties Start the Kafka server: bin/kafka-server-start.sh config/server.properties Create a topic: bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic test But when creating the topic, I am getting the following error: Exception in thread "main" …
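A likely cause (an assumption here, since the excerpt cuts off before the full stack trace) is a Kafka release older than 2.2: kafka-topics.sh only gained the --bootstrap-server option in 2.2, and earlier versions expect --zookeeper instead. A hedged sketch, assuming ZooKeeper is listening on localhost:2181:

    # Kafka < 2.2: point kafka-topics.sh at ZooKeeper
    bin/kafka-topics.sh --create --zookeeper localhost:2181 \
        --replication-factor 1 --partitions 1 --topic test

    # Kafka >= 2.2: the --bootstrap-server form from the question works as written
    bin/kafka-topics.sh --create --bootstrap-server localhost:9092 \
        --replication-factor 1 --partitions 1 --topic test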

How to configure dynamic properties while using spring boot?

只谈情不闲聊 submitted on 2019-11-29 11:22:22
Question: I'm planning to use Spring Boot for my assignment. It's a typical server application with a connection to a database. I know I can use Spring configuration to externalize my properties, e.g. DB connection details. But I also have other dynamic properties which need to be updated at runtime, e.g. flippers/feature flags. Certain features of my application need to be controlled dynamically, e.g. imagine a property like app.cool-feature.enable=true and then after a while the same feature would be turned …
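Spring Cloud Config with @RefreshScope is the usual framework-level answer, but a simpler illustration (not from the question) is to keep the dynamic flags in a file outside the packaged jar and re-read that file on every lookup, so an edit takes effect without a restart. A minimal plain-Java sketch, assuming a hypothetical /etc/myapp/flags.properties and the app.cool-feature.enable key from the question:

    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.Properties;

    // Re-reads the flag file on every lookup; cheap enough for occasional checks.
    public class FeatureFlags {
        private final Path flagFile;

        public FeatureFlags(Path flagFile) {
            this.flagFile = flagFile;
        }

        public boolean isEnabled(String key) {
            Properties props = new Properties();
            try (InputStream in = Files.newInputStream(flagFile)) {
                props.load(in);
            } catch (IOException e) {
                return false; // fail closed if the file is missing or unreadable
            }
            return Boolean.parseBoolean(props.getProperty(key, "false"));
        }

        public static void main(String[] args) {
            FeatureFlags flags = new FeatureFlags(Paths.get("/etc/myapp/flags.properties"));
            System.out.println(flags.isEnabled("app.cool-feature.enable"));
        }
    }

A Spring bean could wrap the same class, with a short-lived cache added if the flag is checked on a hot path.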

Zookeeper - three nodes and nothing but errors

纵然是瞬间 submitted on 2019-11-29 11:15:08
Question: I have three ZooKeeper nodes. All ports are open. The IP addresses are correct. Below is my config file. All nodes were booted by Chef and all have the same install and config file. # The number of milliseconds of each tick tickTime=3000 # The number of ticks that the initial # synchronization phase can take initLimit=10 # The number of ticks that can pass between # sending a request and getting an acknowledgement syncLimit=5 # the directory where the snapshot is stored. dataDir=/var/lib …
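The excerpt cuts off before any server.N entries, so for reference only (not the poster's actual file): a three-node ensemble normally also needs those entries plus a matching myid file on each host, and mismatches there are a common source of quorum errors. A hedged sketch with placeholder hostnames:

    tickTime=3000
    initLimit=10
    syncLimit=5
    dataDir=/var/lib/zookeeper
    clientPort=2181
    server.1=zk1.example.com:2888:3888
    server.2=zk2.example.com:2888:3888
    server.3=zk3.example.com:2888:3888

    # and on the host running server.1, for example:
    # echo 1 > /var/lib/zookeeper/myid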

How to recover Zookeeper from java.io.EOFException after a server crash?

 ̄綄美尐妖づ submitted on 2019-11-29 10:01:10
How to recover from the following error, which started happening after a server crash? ZooKeeper won’t start, and the following messages show up repeatedly in the log: 2017-05-27 01:02:08,072 [myid:] - INFO [main:Environment@100] - Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib 2017-05-27 01:02:08,072 [myid:] - INFO [main:Environment@100] - Server environment:java.io.tmpdir=/tmp 2017-05-27 01:02:08,072 [myid:] - INFO [main:Environment@100] - Server environment:java.compiler=<NA> 2017-05-27 01:02:08,072 [myid:] - INFO [main:Environment@100] - …
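The excerpt ends before the EOFException itself, but in most reports of this symptom (an assumption, not something stated above) the exception comes from a transaction log or snapshot under dataDir/version-2 that was truncated by the crash. A cautious recovery sketch, assuming dataDir=/var/lib/zookeeper and that the other ensemble members still hold the data:

    # stop the broken node first
    bin/zkServer.sh stop

    # move the possibly corrupt transaction logs and snapshots aside (do not delete yet)
    mkdir -p /var/lib/zookeeper/version-2.bad
    mv /var/lib/zookeeper/version-2/* /var/lib/zookeeper/version-2.bad/

    # restart; the node re-syncs its state from the current leader
    bin/zkServer.sh start

On a standalone server there is no leader to sync from, so there the corrupt file should be inspected and trimmed rather than wiped.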

How to implement distributed rate limiter?

筅森魡賤 submitted on 2019-11-29 06:47:01
Question: Let's say I have P processes running some business logic on N physical machines. These processes call some web service S, say. I want to ensure that no more than X calls are made to the service S per second by all the P processes combined. How can such a solution be implemented? Google Guava's RateLimiter works well for processes running on a single box, but not in a distributed setup. Are there any standard, ready-to-use solutions available for Java (maybe based on ZooKeeper)? Thanks! Answer 1: …
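The answer is truncated above; as a general illustration (not the answer's own code), one common pattern is a fixed-window counter kept in storage shared by all P processes (ZooKeeper, Redis, a database), incremented before each call to S. A sketch with a hypothetical SharedCounter interface standing in for that storage:

    // Hypothetical abstraction over the shared store: atomically increments the
    // counter for the given key and returns the new value, visible to all processes.
    interface SharedCounter {
        long incrementAndGet(String key);
    }

    class FixedWindowRateLimiter {
        private final SharedCounter counter;
        private final long maxCallsPerSecond;

        FixedWindowRateLimiter(SharedCounter counter, long maxCallsPerSecond) {
            this.counter = counter;
            this.maxCallsPerSecond = maxCallsPerSecond;
        }

        // Returns true if the caller may invoke service S in the current one-second window.
        boolean tryAcquire() {
            long window = System.currentTimeMillis() / 1000; // current second
            String key = "rate:S:" + window;                 // one shared counter per window
            return counter.incrementAndGet(key) <= maxCallsPerSecond;
        }
    }

Fixed windows allow short bursts of up to 2X around a window boundary; a sliding window or token bucket over the same shared counter smooths that out.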

Reloading SolrCloud configuration (stored on Zookeeper) - schema.xml

你说的曾经没有我的故事 submitted on 2019-11-28 18:22:51
I have set up SolrCloud replication using a standalone ZooKeeper. But now I wish to make some changes to my schema.xml and reload the core. The problem is that when I run a single-server Solr (no SolrCloud) the new schema is loaded, but I do not know how to reload the schema on all the replication servers. I tried reloading the schema on one of the servers without the desired effect. Is there a way in which I can reload my schema.xml in Solr in a distributed replication setup which uses ZooKeeper? Global Warrior: Just found the solution: we need to push the changed configuration to the ZooKeeper ensemble. Just …
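The answer above is cut off; the usual sequence (a hedged sketch, with paths that vary by Solr version) is to upload the edited config to ZooKeeper with zkcli.sh and then reload the collection through the Collections API, which reloads every replica:

    # push the edited config directory to ZooKeeper
    server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 \
        -cmd upconfig -confdir /path/to/conf -confname myconf

    # reload the collection on all nodes
    curl 'http://localhost:8983/solr/admin/collections?action=RELOAD&name=mycollection'

Here localhost:2181, /path/to/conf, myconf, and mycollection are placeholders for the actual ensemble address, config directory, config name, and collection.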

Zookeeper error: Cannot open channel to X at election address

点点圈 submitted on 2019-11-28 17:26:29
I have installed ZooKeeper on 3 different AWS servers. The following is the configuration on all the servers: tickTime=2000 initLimit=10 syncLimit=5 dataDir=/var/zookeeper clientPort=2181 server.1=x.x.x.x:2888:3888 server.2=x.x.x.x:2888:3888 server.3=x.x.x.x:2888:3888 All three instances have a myid file at /var/zookeeper with the appropriate id in it. All three servers have all ports open in the AWS console. But when I run the ZooKeeper server, I get the following error on all the instances: 2015-06-19 12:09:22,989 [myid:1] - WARN [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumCnxManager …
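A frequent cause on AWS (an assumption here, since the excerpt stops before the full warning) is that an instance cannot bind its election port to its own public or elastic IP. The usual workaround is for each server to list itself as 0.0.0.0 and its peers by their real addresses; a sketch of zoo.cfg on the host whose myid is 1:

    server.1=0.0.0.0:2888:3888
    server.2=<server-2-ip>:2888:3888
    server.3=<server-3-ip>:2888:3888

Each of the other hosts gets the same file with its own entry set to 0.0.0.0 instead.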

Kafka - How to commit offset after every message using High-Level consumer?

女生的网名这么多〃 submitted on 2019-11-28 17:16:27
I'm using Kafka's high-level consumer. Because I'm using Kafka as a 'queue of transactions' for my application, I need to make absolutely sure I don't miss or re-read any messages. I have 2 questions regarding this: How do I commit the offset to ZooKeeper? I will turn off auto-commit and commit the offset after every message is successfully consumed. I can't seem to find actual code examples of how to do this using the high-level consumer. Can anyone help me with this? On the other hand, I've heard committing to ZooKeeper might be slow, so another way may be to locally keep track of the offsets? Is this …
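A hedged sketch against the old 0.8-era high-level consumer API (now deprecated): disable auto-commit in the consumer config and call commitOffsets() after each message. Note that commitOffsets() commits the current position of every partition this connector owns, not just the one message, and doing it per message is slow because each commit is a ZooKeeper write:

    import java.util.Collections;
    import java.util.List;
    import java.util.Map;
    import java.util.Properties;

    import kafka.consumer.Consumer;
    import kafka.consumer.ConsumerConfig;
    import kafka.consumer.KafkaStream;
    import kafka.javaapi.consumer.ConsumerConnector;
    import kafka.message.MessageAndMetadata;

    public class PerMessageCommitConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("zookeeper.connect", "localhost:2181");
            props.put("group.id", "transaction-group");
            props.put("auto.commit.enable", "false");   // we commit manually

            ConsumerConnector consumer =
                    Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
            Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                    consumer.createMessageStreams(Collections.singletonMap("test", 1));
            KafkaStream<byte[], byte[]> stream = streams.get("test").get(0);

            for (MessageAndMetadata<byte[], byte[]> msg : stream) {
                process(msg.message());      // application-specific handling
                consumer.commitOffsets();    // persist progress before taking the next message
            }
        }

        private static void process(byte[] payload) {
            System.out.println(new String(payload));
        }
    }

Here localhost:2181, transaction-group, and the topic name test are placeholders.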

ZooKeeper reliability - three versus five nodes

梦想的初衷 submitted on 2019-11-28 17:15:12
From the ZooKeeper FAQ: Reliability: A single ZooKeeper server (standalone) is essentially a coordinator with no reliability (a single serving node failure brings down the ZK service). A 3 server ensemble (you need to jump to 3 and not 2 because ZK works based on simple majority voting) allows for a single server to fail and the service will still be available. So if you want reliability go with at least 3. We typically recommend having 5 servers in "online" production serving environments. This allows you to take 1 server out of service (say planned maintenance) and still be able to sustain …
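The arithmetic behind the quoted FAQ: an ensemble of N servers needs a majority quorum of floor(N/2) + 1 voters to keep serving, so it tolerates N minus that many failures. Worked out for small ensembles:

    N = 3: quorum = 2, tolerates 1 failure
    N = 4: quorum = 3, tolerates 1 failure (an even count buys nothing)
    N = 5: quorum = 3, tolerates 2 failures

This is why 5 nodes let you take one down for maintenance and still survive an unplanned failure, while 3 nodes cannot.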

How to get data from old offset point in Kafka?

Deadly submitted on 2019-11-28 16:12:49
I am using ZooKeeper to get data from Kafka, and here I always get data from the last offset point. Is there any way to specify the time of offset so as to get old data? There is one option, autooffset.reset. It accepts smallest or largest. Can someone please explain what smallest and largest are? Can autooffset.reset help in getting data from an old offset point instead of the latest offset point? Consumers always belong to a group and, for each partition, ZooKeeper keeps track of the progress of that consumer group in the partition. To fetch from the beginning, you can delete all the data associated …
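For the old ZooKeeper-based consumer, smallest means start from the earliest offset still available on the broker when the group has no committed offset, and largest means start from the newest. To re-read old data the group must also have no committed offset, e.g. by switching to a new group.id. A hedged consumer-properties sketch, assuming a fresh group:

    zookeeper.connect=localhost:2181
    group.id=fresh-group-with-no-committed-offsets
    auto.offset.reset=smallest   # very old releases spell this autooffset.reset

With an existing group, the committed offsets stored in ZooKeeper take precedence over this setting, which is likely why the poster keeps resuming from the last offset.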