apache-zookeeper

Zookeeper CLI failing - IOException Packet <len12343123123> is out of range

血红的双手。 Submitted on 2019-12-05 06:18:22
Running ZooKeeper 3.3.3. I have a znode that I am just trying to list via the CLI, as in: ls /myznode/subznode. This crashes with an IOException in org.apache.zookeeper.ClientCnxn$SendThread.readLength at line 710. Has anyone seen this? Someone suggested that there may be bad data in the znode. I am not sure if, or how, that happened, but I cannot delete it either, as it has something in it. I was able to work around this by increasing the maximum buffer size for my listing call. I added "-Djute.maxbuffer=50111000" to my zkCli.sh script so that it started the client using the following line:
$JAVA "-Dzookeeper.log.dir=${ZOO_LOG_DIR}" "
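For reference, the same jute.maxbuffer override can also be applied from a standalone Java client rather than by editing zkCli.sh. The sketch below is only an illustration assuming a local ensemble; the connect string, session timeout, and znode path are placeholders, and the buffer value simply mirrors the workaround above.

import java.util.List;
import org.apache.zookeeper.ZooKeeper;

public class ListLargeZnode {
    public static void main(String[] args) throws Exception {
        // jute.maxbuffer is read as a JVM system property, so it must be set
        // (or passed with -Djute.maxbuffer=...) before the client classes load.
        System.setProperty("jute.maxbuffer", "50111000");

        // Connect string and znode path are placeholders for this sketch.
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, event -> { });
        try {
            List<String> children = zk.getChildren("/myznode/subznode", false);
            System.out.println("child count: " + children.size());
        } finally {
            zk.close();
        }
    }
}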

Connecting to an external Accumulo instance from Java

孤街浪徒 Submitted on 2019-12-05 06:14:42
I am trying to connect to a VM running Accumulo. The problem is that I can't get it hooked up from Java. I can see the web page Apache serves up, but I can't get it to work with code. I think this is a lack-of-knowledge issue rather than a real problem, but I can't find documentation on it. All the examples use localhost as the zooServer name, which obviously doesn't work for me. So, here is my code: String instanceName = "accumulo-02"; String zooServers = "192.168.56.5, accumulo-02.localdomain:9997";
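As a point of comparison, the classic Accumulo 1.x client API connects through ZooKeeperInstance, which expects the instance name plus the ZooKeeper quorum addresses (client port, typically 2181) rather than an Accumulo server port. The sketch below is a minimal, hypothetical example; the instance name, ZooKeeper address, username, and password are placeholders, not values confirmed by the question.

import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.Instance;
import org.apache.accumulo.core.client.ZooKeeperInstance;
import org.apache.accumulo.core.client.security.tokens.PasswordToken;

public class AccumuloConnect {
    public static void main(String[] args) throws Exception {
        // Instance name as shown on the Accumulo monitor page; placeholder here.
        String instanceName = "accumulo-02";
        // Comma-separated ZooKeeper quorum (ZooKeeper client port), not a tablet-server port.
        String zooServers = "192.168.56.5:2181";

        Instance instance = new ZooKeeperInstance(instanceName, zooServers);
        Connector connector = instance.getConnector("root", new PasswordToken("secret"));
        System.out.println("Connected to " + connector.getInstance().getInstanceName());
    }
}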

Solr issue: ClusterState says we are the leader, but locally we don't think so

青春壹個敷衍的年華 Submitted on 2019-12-05 05:55:52
So today we ran into a disturbing Solr issue. After a restart of the whole cluster, one of the shards stopped being able to index/store documents. We had no hint about the issue until we started indexing (querying the server looks fine). The error is:
2014-05-19 18:36:20,707 ERROR o.a.s.u.p.DistributedUpdateProcessor [qtp406017988-19] ClusterState says we are the leader, but locally we don't think so
2014-05-19 18:36:20,709 ERROR o.a.s.c.SolrException [qtp406017988-19] org.apache.solr.common.SolrException: ClusterState says we are the leader (http://x.x.x.x:7070/solr/shard3_replica1), but locally

Kafka partitions out of sync on certain nodes

送分小仙女□ Submitted on 2019-12-05 05:26:44
I'm running a Kafka cluster on 3 EC2 instances. Each instance runs Kafka (0.11.0.1) and ZooKeeper (3.4). My topics are configured so that each has 20 partitions and a ReplicationFactor of 3. Today I noticed that some partitions refuse to sync to all three nodes. Here's an example:
bin/kafka-topics.sh --zookeeper "10.0.0.1:2181,10.0.0.2:2181,10.0.0.3:2181" --describe --topic prod-decline
Topic:prod-decline PartitionCount:20 ReplicationFactor:3 Configs:
Topic: prod-decline Partition: 0 Leader: 2 Replicas: 1,2,0 Isr: 2
Topic: prod-decline Partition: 1 Leader: 2 Replicas: 2,0,1 Isr: 2
Topic: prod
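For comparison, the same leader/replicas/ISR information can be read programmatically with the Kafka AdminClient that ships with 0.11. This is only a sketch; the bootstrap address is a placeholder and the topic name is taken from the describe output above.

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.TopicPartitionInfo;

public class CheckIsr {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder broker address; use your cluster's bootstrap servers.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "10.0.0.1:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            TopicDescription desc = admin.describeTopics(Collections.singleton("prod-decline"))
                    .all().get().get("prod-decline");
            for (TopicPartitionInfo p : desc.partitions()) {
                // A healthy partition has an isr() list the same size as replicas().
                System.out.printf("partition %d leader=%s replicas=%s isr=%s%n",
                        p.partition(), p.leader().id(), p.replicas(), p.isr());
            }
        }
    }
}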

Cannot talk to ZooKeeper - Updates are disabled

拟墨画扇 Submitted on 2019-12-05 04:28:46
We are facing a peculiar issue with ZooKeeper wherein ZK suddenly loses connectivity with Solr Cloud and starts throwing an exception that says "Cannot talk to ZooKeeper - Updates are disabled." Our application has two Solr clusters set up separately in two different data centers. Each of these clusters has the same configuration and data and is expected to take the same incremental load. Application users need their changes to be reflected in search almost immediately, so we run the incremental load every 10 seconds. Having said that, the data updates
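For context, an incremental loader like the one described typically pushes documents through SolrJ's CloudSolrClient, which talks to the same ZooKeeper ensemble as the cluster. The sketch below is a rough illustration using a newer SolrJ API (the cluster in question may use an older client); the ZooKeeper address, collection name, and field names are placeholders.

import java.util.Collections;
import java.util.Optional;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class IncrementalLoad {
    public static void main(String[] args) throws Exception {
        // ZK ensemble address and collection name are placeholders.
        CloudSolrClient client = new CloudSolrClient.Builder(
                Collections.singletonList("zk1.example.com:2181"), Optional.empty()).build();
        client.setDefaultCollection("collection1");

        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "doc-1");
        doc.addField("title_s", "incremental update");

        // The "Cannot talk to ZooKeeper - Updates are disabled" error is raised on
        // the update path when a Solr node has lost its ZooKeeper session.
        client.add(doc);
        client.commit();
        client.close();
    }
}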

Killing node with __consumer_offsets leads to no message consumption at consumers

丶灬走出姿态 Submitted on 2019-12-05 02:53:00
I have a 3-node (node0, node1, node2) Kafka cluster (broker0, broker1, broker2) with replication factor 2, and ZooKeeper (the one packaged with the Kafka tar) running on a different node (node4). I started broker 0 after starting ZooKeeper, and then the remaining nodes. Broker 0's logs show that it is reading __consumer_offsets, and it seems they are stored on broker 0. Below are sample logs:
Kafka Version: kafka_2.10-0.10.2.0
2017-06-30 10:50:47,381] INFO [GroupCoordinator 0]: Loading group
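For background, __consumer_offsets matters here because a consumer group's committed offsets and its group coordinator both depend on the partition of that internal topic the group hashes to; if that partition's only replica lives on a killed broker, group consumption stalls. The sketch below is a generic 0.10.x-style consumer using a group.id; the broker addresses, group name, and topic name are placeholders.

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class GroupConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder addresses; point these at the three brokers.
        props.put("bootstrap.servers", "node0:9092,node1:9092,node2:9092");
        // Offsets for this group are committed to the internal __consumer_offsets topic;
        // the group's coordinator is the leader of the offsets partition the group maps to.
        props.put("group.id", "example-group");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("example-topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
            }
        }
    }
}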

FAILED TO WRITE PID when installing ZooKeeper

别等时光非礼了梦想. Submitted on 2019-12-05 02:52:25
I am new to ZooKeeper and it has been a real issue to install and run it. I am not sure what is wrong here, but I will explain what I've been doing to make it clearer:
1.- I followed the installation guide provided by Apache. This means I downloaded the ZooKeeper distribution (stable release), extracted the file, and moved it into my home directory.
2.- As I am using Ubuntu 12.04, I modified the .bashrc file to include this: export ZOOKEEPER_INSTALL=/home/myusername/zookeeper-3.4.5 export PATH=$PATH:$ZOOKEEPER_INSTALL/bin
3.- Created a config file at conf/zoo.cfg: tickTime=2000 dataDir=/var

Error while starting Kafka broker

╄→尐↘猪︶ㄣ Submitted on 2019-12-05 02:16:40
I was able to successfully set up ZooKeeper and one Kafka broker yesterday. Everything worked as expected. I shut down Kafka (Ctrl+C) and then ZooKeeper. Today I started ZooKeeper, and when I started Kafka (bin/kafka-server-start.sh config/server0.properties), I got the following error. I tried various suggested remedies (completely removing my Kafka installation and starting again from scratch), but I still get the same error.
[2016-09-28 16:15:55,895] FATAL Fatal error during KafkaServerStartable startup. Prepare to shutdown (kafka.server.KafkaServerStartable) java.lang.RuntimeException: A broker
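The exception is cut off above, so the exact cause isn't visible here, but one way to see what the broker finds in ZooKeeper at startup is to list the broker ids currently registered under /brokers/ids with the plain ZooKeeper client. This is just a diagnostic sketch; the ZooKeeper address and timeout are placeholders.

import java.util.List;
import org.apache.zookeeper.ZooKeeper;

public class ListRegisteredBrokers {
    public static void main(String[] args) throws Exception {
        // ZooKeeper address is a placeholder for this sketch.
        ZooKeeper zk = new ZooKeeper("localhost:2181", 10000, event -> { });
        try {
            // Kafka brokers register themselves as ephemeral znodes under /brokers/ids.
            List<String> ids = zk.getChildren("/brokers/ids", false);
            for (String id : ids) {
                byte[] data = zk.getData("/brokers/ids/" + id, false, null);
                System.out.println("broker " + id + ": " + new String(data, "UTF-8"));
            }
        } finally {
            zk.close();
        }
    }
}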

Kafka: How to connect kafka-console-consumer to fetch remote broker topic content?

断了今生、忘了曾经 Submitted on 2019-12-04 22:49:47
I have set up a Kafka ZooKeeper and 3 brokers on one EC2 machine with ports 9092..9094, and I am trying to consume the topic content from another machine. The ports 2181 (ZK) and 9092, 9093, 9094 (brokers) are open to the consumer machine. I can even run bin/kafka-topics.sh --describe --zookeeper 172.X.X.X:2181 --topic remotetopic, which gives me:
Topic:remotetopic PartitionCount:1 ReplicationFactor:3 Configs:
Topic: remotetopic Partition: 0 Leader: 2 Replicas: 2,0,1 Isr: 2,0,1
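The same connectivity requirement applies to a Java consumer as to kafka-console-consumer: the brokers must advertise a host/IP that is reachable from the consuming machine, otherwise the client fetches metadata pointing at addresses it cannot connect to. Below is a rough sketch that reads the topic from the beginning; the broker addresses reuse the elided 172.X.X.X form purely as placeholders.

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class RemoteTopicReader {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Public address of the EC2 host; the brokers must advertise a host/IP
        // reachable from this machine (advertised listener settings on the broker).
        props.put("bootstrap.servers", "172.X.X.X:9092,172.X.X.X:9093,172.X.X.X:9094");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("remotetopic", 0);
            consumer.assign(Collections.singletonList(tp));
            consumer.seekToBeginning(Collections.singletonList(tp));   // like --from-beginning
            ConsumerRecords<String, String> records = consumer.poll(5000);
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.value());
            }
        }
    }
}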

Can the producer detect the addition and removal of brokers in Kafka 0.8?

怎甘沉沦 Submitted on 2019-12-04 21:34:38
We know that, in Kafka 0.7, we can specify zk.connect for the producer, so the producer can detect the addition and removal of brokers. But in Kafka 0.8, we can't specify zk.connect for the producer. Can the producer in Kafka 0.8 detect that? If not, isn't the scalability of the system worse than in the 0.7 version?
You can still use a ZooKeeper client to retrieve the broker list: ZkClient zkClient = new ZkClient("localhost:2108", 4000, 6000, new BytesPushThroughSerializer()); List<String> brokerList = zkClient.getChildren("/brokers/ids"); According to that, you do not have to "hardcode" the broker list on client
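Building on that snippet, a client can also read each broker's registration data to discover host and port details, since every live 0.8 broker writes a small JSON payload to an ephemeral znode under /brokers/ids. The following is a minimal sketch using the same I0Itec ZkClient; the ZooKeeper address and timeouts are placeholders.

import java.util.List;
import org.I0Itec.zkclient.ZkClient;
import org.I0Itec.zkclient.serialize.BytesPushThroughSerializer;

public class BrokerDiscovery {
    public static void main(String[] args) {
        // ZooKeeper address and timeouts are placeholders for this sketch.
        ZkClient zkClient = new ZkClient("localhost:2181", 4000, 6000, new BytesPushThroughSerializer());
        try {
            // Each live broker registers an ephemeral znode under /brokers/ids.
            List<String> brokerIds = zkClient.getChildren("/brokers/ids");
            for (String id : brokerIds) {
                // With BytesPushThroughSerializer, readData returns the raw bytes of the
                // broker's registration (host/port info) exactly as written by Kafka.
                byte[] data = zkClient.readData("/brokers/ids/" + id);
                System.out.println("broker " + id + " -> " + new String(data));
            }
        } finally {
            zkClient.close();
        }
    }
}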